Computers and music.

Computer technology exerts a powerful and ever-increasing influence on the world in which we live. The personal computer in particular holds the key to a wealth of processing possibilities that could scarcely have been envisaged less than a generation ago. In terms of music applications the sheer diversity of digital functions makes it increasingly hard to present a balanced perspective within a brief dictionary article. In the following sections distinction will be made between applications that have an essentially passive role in the communication of music information, such as the conventional audio compact disc, and those such as the CD-ROM which involve a more conscious process of musical interaction.

I. Introduction

II. Composition

III. Music theory and analysis

IV. Historical research

V. Ethnomusicology research

VI. Music publishing

VII. Music education

VIII. Psychology research

BIBLIOGRAPHY

PETER MANNING (I–III), ELEANOR SELFRIDGE-FIELD (IV, VI), SUZEL ANA REILY (V), ANTHONY POPLE (VII, VIII)

I. Introduction

The term ‘computer’ is normally reserved for a processing system that satisfies certain minimum functional requirements. Specifically, the central processing unit must be able to process alphanumeric information (text and numbers) in some standard form of digital coding, to communicate directly with a memory bank of sufficient capacity to hold both a program and its immediate data, to support the ordered use of both arithmetic and logic instructions, and to service links to the outside world for the input and output of information, as well as devices that may be attached directly to the computer to enhance the operation of the system as a whole, for example a disc-based data storage unit.

Two important considerations have to be addressed in this context: what kind of musical functions are amenable to the processes of digital computation, and how is it possible to convert all the various forms of music data that may be encountered into a machine-readable form? Computers have been used for all manner of applications, from the synthesis of new sounds and the analysis of music in notated form to desktop music publishing and studies in music psychology; from analysing the ways in which we respond to musical stimuli to the processes of music performance itself. One constantly recurring issue is the nature of the relationships between a scientific tool that operates entirely within a framework of predetermined functions, and a range of human activities that in many instances reflect some of the most accomplished feats of human creativity.

There are indeed pitfalls for the unwary, but it is important to remember that the quality of the results obtained from computer systems is entirely dependent on the programming and engineering skills of those who design and operate them. As the discipline matures, so does our understanding of what may be possible in the future. Although the pace of technological change since the previous edition of this dictionary appeared in 1980 has been quite remarkable, there are good reasons to suppose that the next few decades are unlikely to prove so capricious. Whereas the main thrust of developments has hitherto been closely tied to increasing the raw power and accessibility of computers, their capacity to perform complex mathematical and engineering operations is no longer a primary issue. The key to real progress now lies almost exclusively in our capacity to apply such resources for musically useful ends.

Of all the creative arts music provides arguably the most significant challenges to those who seek to translate its characteristics into a machine-readable form. This has involved the design of a number of non-standard computer interfaces and the development of an extensive range of special coding techniques. The need for such tools has decreased largely because of the upsurge of general interest in multimedia applications. Whereas in the early 1980s computers both large and small lacked facilities for audio input and output, and at best offered only rudimentary graphics tools, the modern personal computer provides sophisticated colour graphics resources, and high-quality audio facilities have become the rule rather than the exception. Although advanced research applications are still for the most part best left to the specialist composer, performer or musicologist, a number of the techniques described below are readily accessible to the home computer user with musical interests, amateur or professional.

II. Composition

1. Early efforts.

2. Principles of digital audio.

3. Sound synthesis and processing.

4. Systems applications.

1. Early efforts.

From modest beginnings as a highly specialized area of creative research, for the most part isolated on the margins of post-World War II developments in electronic music, the technology of computer music has advanced to the point where hardly a single aspect of this medium remains untouched by its influence. Analogue devices are progressively being replaced by digital equivalents throughout the entire communications industry. In the case of the music synthesizer and its derivatives, such design changes transformed the industry in less than a decade, the process of conversion being all but complete by the early 1990s. In addition, the increasingly powerful processing capabilities of computers have stimulated the exploration of new horizons in musical composition, from the initial formulation of creative ideas to the production of finished works.

The use of the computer as a tool for composition goes back almost to the dawn of commercial computing. In 1955 Lejaren Hiller and Leonard Isaacson investigated the use of mathematical routines to generate music information at the University of Illinois at Urbana-Champaign. Probability routines, inspired by Hiller’s earlier work as a chemical engineer, provided the basis for a series of composing programs that generated music data in the form of an alphanumeric code, subsequently transcribed by hand into a conventional music score. Less than a year later, in Europe, Xenakis started work on his own series of composing programs based on theories of probability known as ‘stochastics’, which similarly generated music data as alphanumeric code. The desire to combine the processes of score generation and acoustic realization led him in due course to develop a fully integrated system that eliminated the intermediate transcription stage, the music data passing directly to a synthesizer for electronic reproduction.

The techniques of digital sound synthesis, whereby the processes of audio generation itself are directly consigned to the computer, also date back to the 1950s, most notably to the pioneering work of Max Mathews at the Bell Telephone Laboratories in Murray Hill, New Jersey. In 1957 he began work on a series of experimental programs which, with the support of other researchers, have been developed into an extended generic family of programs known collectively as the musicn series (e.g. music4bf, music5, music11). With the increasing power and accessibility of computers in recent years, such software-based methods of music synthesis have gained significantly in popularity. Modern musicn derivatives such as csound, developed by Barry Vercoe at MIT, are available in versions adapted to a variety of computers from sophisticated workstations to personal computers.

2. Principles of digital audio.

In order to appreciate how such synthesis tools can be used for creative purposes, it is necessary to understand some basic principles of digital audio. All methods of digital recording, processing and synthesis are ultimately concerned with the representation of acoustical functions or pressure waves as a regular succession of discrete numerical approximations known as samples (see illustration).

The reproduction of a digital sound file requires the services of a digital-to-analogue converter which sequentially translates these sample values into an equivalent series of voltage steps. These are then amplified and passed to a conventional loudspeaker for acoustic conversion. Such procedures are regularly encountered in a domestic environment whenever one listens to a conventional compact disc or the sound output from a CD-ROM. In the digital recording of acoustic material acoustic signals are captured by a conventional microphone to produce an equivalent voltage function. This in turn is passed to an analogue-to-digital converter which continuously samples its instantaneous value to produce a regular series of numerical approximations.

Two factors constrain the fidelity that can be achieved by a digital audio system. The first, the rate at which the individual samples are recorded or generated, determines the absolute range of audio frequencies that can be reproduced. As a simple rule of thumb, the upper frequency limit, known as the Nyquist frequency, is numerically equivalent to half the sampling rate; thus a system recording or reproducing an acoustic function at 20,000 samples per second can achieve a maximum bandwidth of only 10 kHz. In practice, the usable bandwidth is limited to about 90% of the theoretical maximum to allow the smooth application of special filters that ensure that any spurious high-frequency components that may be generated at or above the Nyquist frequency are eliminated. In the early days of computer sound synthesis, technical constraints often severely limited the use of higher-order sampling rates, with the result that the available bandwidths were often inadequate for good-quality music reproduction. Modern multimedia computers are capable of handling sound information at professional audio sampling rates, typically 44,100 or 48,000 samples per second, thus allowing the entire frequency range of the human ear (about 17–20 kHz, depending on age) to be accurately reproduced. Older systems, however, are often restricted to much lower sampling rates, which are generally adequate only for speech applications.
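
The arithmetic involved is simple enough to be shown directly. The following sketch, written in Python purely for illustration, computes the Nyquist frequency and the approximate usable bandwidth for the sampling rates mentioned above; the 0.9 factor is the rough filtering allowance just described, not an exact engineering figure.

```python
# Illustrative arithmetic only: the Nyquist limit is half the sampling
# rate, and about 90% of it remains usable once room is left for the
# anti-aliasing filters.

def nyquist_hz(sampling_rate_hz):
    """Theoretical upper frequency limit for a given sampling rate."""
    return sampling_rate_hz / 2.0

def usable_bandwidth_hz(sampling_rate_hz, margin=0.9):
    """Practical bandwidth after allowing for the anti-aliasing filter."""
    return margin * nyquist_hz(sampling_rate_hz)

for rate in (20_000, 44_100, 48_000):
    print(f"{rate:>6} samples/s: Nyquist {nyquist_hz(rate):7.0f} Hz, "
          f"usable about {usable_bandwidth_hz(rate):7.0f} Hz")
```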

The other factor determining fidelity is the numerical accuracy or quantization of the samples themselves. A number of technical expedients have been developed to improve the basic performance of conventional analogue-to-digital and digital-to-analogue converters. However, these devices are constrained by the numerical accuracy of each individual sample, which in turn is determined by the number of binary bits available to code each value as an integer. This requirement to use finite approximations raises the possibility of numerical coding errors which in turn degrade the quality of the resulting sound. 16-bit converters, which allow quantization errors to be restricted to a tiny fraction of 1% (about 15 parts in a million), represent the minimum acceptable standard for good-quality music audio. Converters with a reduced resolution of just eight bits per sample are becoming increasingly rare.
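
The figures quoted follow directly from the number of bits used per sample; a short illustrative calculation makes the comparison between 8- and 16-bit resolution explicit.

```python
# One quantization step, expressed as a fraction of full scale, halves
# with every additional bit: 16 bits gives roughly 15 parts in a million,
# whereas 8 bits gives about 0.4% of full scale.

def quantization_step_fraction(bits):
    """Size of one quantization step as a fraction of the full-scale range."""
    return 1.0 / (2 ** bits)

for bits in (8, 16):
    step = quantization_step_fraction(bits)
    print(f"{bits:2d} bits: step = {step:.8f} of full scale "
          f"({step * 1e6:9.1f} parts per million)")
```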

If a digital synthesis system is to work in real time while generating acoustic functions at 44,100 or 48,000 samples per second (or twice this rate in the case of a stereo system where samples for each sound channel have to be generated separately), all the background calculations necessary to determine each sample value will have to be completed within the tiny fraction of a second that separates one sample from its successor. Although many modern computers can meet such demanding operational criteria even for quite complex synthesis tasks, until the late 1980s such resources were rare, even at an institutional level. As a result many well-established software synthesis programs, including the musicn series and its derivatives, were designed in the first instance to support a non-real-time mode of operation. Here a delay is deliberately built into the synthesis process such that the computer is allowed to calculate all the samples for a complete musical passage over whatever period of time actually proves necessary. The samples are stored in correct sequence on a computer disc, and once this sound file has been computed in its entirety the samples are recovered and sent to the digital-to-analogue converter for conversion and reproduction. In the early days of computer music the delays between the start of the calculation process and final audition of the results were often considerable, forcing composers to take a highly empirical approach to the composition process. As computing power increased, these delays dropped from a matter of hours to minutes or even seconds, thus leading finally to the possibility of live synthesis, where the program is able to calculate the samples fast enough for direct output.
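
A minimal sketch of the non-real-time approach may make the distinction clearer. It is written here in Python for illustration only and is not drawn from the musicn programs themselves; the 440 Hz test tone and the file name are invented. Every sample of a short passage is calculated in advance and written to a sound file for later playback, and the per-sample time budget that a real-time system would have to meet is printed for comparison.

```python
import math
import struct
import wave

SAMPLING_RATE = 44_100                           # samples per second, mono

# The time budget a real-time system would have for each sample.
print(f"Budget per sample: {1e6 / SAMPLING_RATE:.1f} microseconds")

# Non-real-time synthesis: compute one second of a 440 Hz sine tone in
# its entirety, however long that takes, then store it for later playback.
samples = []
for n in range(SAMPLING_RATE):
    value = 0.5 * math.sin(2 * math.pi * 440.0 * n / SAMPLING_RATE)
    samples.append(int(value * 32767))           # scale to 16-bit integers

with wave.open("tone.wav", "wb") as sound_file:
    sound_file.setnchannels(1)
    sound_file.setsampwidth(2)                   # two bytes = 16 bits per sample
    sound_file.setframerate(SAMPLING_RATE)
    sound_file.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```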

3. Sound synthesis and processing.

Fundamental to most software synthesis systems is the provision of a basic library of functions that may be used as the building-blocks for a particular sequence of synthesis operations. Many of these functions simulate the hardware components of a traditional analogue studio, such as oscillators, filters, modulators and reverberators, although an increasing number of more specialist functions have been developed over the years to model particular instrumental characteristics, such as the excitation of the human voice-box or the vibration of a string. In the case of musicn programs, each integral grouping of these components is identified as an ‘instrument’, broadly analogous to the individual instruments of a traditional orchestra. These ‘instruments’ collectively form an ‘orchestra’, ready to receive performance data from an associated ‘score’.
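
The division of labour between ‘orchestra’ and ‘score’ can be suggested by a short sketch. The code below is not musicn or csound syntax; it is a deliberately simplified Python analogue, with invented parameter values, in which an ‘instrument’ is a sample-generating routine, the ‘orchestra’ is a collection of such routines and the ‘score’ is a list of timed note events.

```python
import math

SR = 44_100   # sampling rate in samples per second

def sine_instrument(freq_hz, amp, dur_s):
    """A rudimentary 'instrument': a single sine-wave oscillator."""
    n = int(dur_s * SR)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SR) for i in range(n)]

# The 'orchestra' maps instrument names to synthesis routines ...
orchestra = {"sine": sine_instrument}

# ... and the 'score' lists note events: start time, instrument, frequency,
# amplitude and duration.
score = [
    (0.0, "sine", 440.0, 0.3, 1.0),
    (0.5, "sine", 660.0, 0.3, 1.0),
]

# 'Performance' consists of rendering each event and mixing it into an
# output buffer at the appropriate offset.
length = max(start + dur for start, _, _, _, dur in score)
output = [0.0] * int(length * SR)
for start, name, freq, amp, dur in score:
    note = orchestra[name](freq, amp, dur)
    offset = int(start * SR)
    for i, sample in enumerate(note):
        output[offset + i] += sample
```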

Since these instruments are simulations that are no more than ordered statements of computer code, the opportunities for varying their design and application are extensive. The only real constraints are general ones imposed by the computing environment itself, for example the maximum number of instrumental components that can be accommodated in the memory at any one time, and the overall processing performance of the system. It is possible, for example, to synthesize finely crafted textures by directly specifying the evolution of each spectral component in terms of its frequency, amplitude and duration. Such a strategy involves considerable quantities of score data and the simultaneous use of a number of instruments, one for each component. Alternatively, highly complex instruments can be constructed with the capacity to generate complete musical gestures in response to a simple set of initial score commands.
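
The first of these strategies amounts to additive synthesis. A compressed sketch of the idea follows; the partial frequencies, amplitudes and durations are invented, and the linear decay envelope is chosen only for brevity.

```python
import math

SR = 44_100

# Each spectral component is specified directly as (frequency in Hz,
# peak amplitude, duration in seconds).
partials = [(220.0, 0.50, 2.0), (441.0, 0.25, 1.5), (663.0, 0.12, 1.0)]

signal = [0.0] * int(max(dur for _, _, dur in partials) * SR)
for freq, amp, dur in partials:
    n = int(dur * SR)
    for i in range(n):
        envelope = 1.0 - i / n        # each partial decays independently
        signal[i] += amp * envelope * math.sin(2 * math.pi * freq * i / SR)
```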

Although software synthesis methods are not nearly as well known to the music community at large as the custom-designed hardware systems that predominate in the commercial sector, their significance should not be underestimated, given the steadily increasing power and availability of the personal computer. With the rapid development of information systems such as the Internet, an increasing number of powerful synthesis programs can be located and downloaded for local use by means of a simple modem and telephone link. Since many of these facilities are being made available at little or no charge, their impact on future activities, professional and amateur, is likely to be considerable.

The origins of the all-digital synthesizer, like those of the personal computer, date back to the 1970s and the invention of the microprocessor. The fabrication of a complete computer on a silicon chip led to the development of new types of processors designed for all manner of applications, including digital synthesis and signal processing. This prospect was especially attractive to commercial manufacturers, since the superior performance of custom-designed hardware opened up possibilities of live synthesis from digital circuits which in many instances required less physical space and were ultimately cheaper and more reliable than their analogue counterparts. Developments in this context were further stimulated by the introduction of the Musical Instrument Digital Interface (MIDI) in 1983 as a universal standard for transferring performance information in a digitally coded form between different items of equipment such as music keyboards, synthesizers and audio processors (see MIDI). It quickly became apparent that major composition and performance possibilities could be opened up by extending MIDI control facilities to personal computers.

What has distinguished the commercial MIDI synthesizers from all-software synthesis methods such as those described above is the set of functional characteristics associated with each design. One of the earliest all-digital synthesizers, the Yamaha DX7, which appeared in the same year as MIDI, relies exclusively on the techniques of frequency modulation for its entire repertory of sounds. These techniques are based on research originally carried out by John Chowning at Stanford University in the 1970s using a musicn software synthesis program. The use of a custom-designed processor facilitated the registration of patents that forced other manufacturers to develop rival hardware architectures, each associated with a unique set of synthesis characteristics. Methods employed have ranged from additive synthesis, where composite sounds are assembled from individual frequency components, to phase distortion techniques that seek to modify the spectra of synthesized material during the initial process of generation. The latter shares some features with FM techniques, where one wave-form is used to modulate the functional characteristics of another.
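
The underlying principle of frequency modulation can be shown in a few lines. The sketch below is a generic two-operator arrangement in the spirit of Chowning's technique, not a reproduction of the DX7's patented architecture, and its parameter values are invented.

```python
import math

SR = 44_100

def fm_tone(carrier_hz, modulator_hz, index, dur_s):
    """Two-operator FM: the modulating wave varies the carrier's phase."""
    n = int(dur_s * SR)
    samples = []
    for i in range(n):
        t = i / SR
        modulator = math.sin(2 * math.pi * modulator_hz * t)
        samples.append(math.sin(2 * math.pi * carrier_hz * t + index * modulator))
    return samples

# A 3:1 carrier-to-modulator ratio and a moderate index yield a bright,
# bell-like spectrum of sum and difference frequencies.
tone = fm_tone(carrier_hz=660.0, modulator_hz=220.0, index=4.0, dur_s=1.0)
```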

The synthesis of sounds from first principles is subject to a number of constraints. Although particularly evident in cases where hardware features limit the choice and use of synthesis methods, such difficulties are also encountered in software-based environments, even those that permit skilled users to write their own synthesis routines from first principles rather than relying on library functions provided with the program. The root of the problem lies in the character of many natural sounds which can prove exceedingly hard to replicate by formulaic means, such as the transient components associated with the attack of an acoustic trumpet or oboe. In the commercial sector, the ability to imitate instrumental sounds is especially important, and impediments to the production of a realistic repertory of voices have inspired a number of manufacturers to pursue an alternative method of synthesis known as sampling. This is essentially a three-stage process of sound capture, optional intermediate processing and re-synthesis, starting with the selection of suitable source sounds that are first digitized and then loaded into a memory bank as segments of numeric audio data. A variety of processing techniques may then be employed to control the processes of regeneration, ranging from the insertion of a simple loop-back facility to allow sounds to be artificially prolonged, to sophisticated facilities that allow multiple access to the data for the purposes of transposition upwards or downwards and the generation of polyphonic textures. Although commercial samplers, like synthesizers, incorporate custom-designed hardware to meet the specifications of individual manufacturers, their general architecture comes very close to that encountered in a conventional computer. Whereas the methods employed in the design of the digital synthesizer clearly developed from earlier work in software synthesis, the progression in the case of sampling techniques has undoubtedly been in the reverse direction. As a result, many software synthesis programs, including the musicn family, now provide sophisticated facilities for the processing of externally generated sound material, and such modes of operation are gaining in popularity.
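
A toy sketch of the two most elementary sampler operations described above, looping and transposition, may clarify the process. The ‘captured’ source here is itself synthetic, standing in for a digitized recording, and the routines are illustrative rather than representative of any commercial design.

```python
import math

SR = 44_100

# A tenth of a second of a 440 Hz tone stands in for a digitized sound.
source = [0.4 * math.sin(2 * math.pi * 440.0 * i / SR) for i in range(SR // 10)]

def loop_playback(data, loop_start, total_samples):
    """Prolong a sound artificially by looping back over its later portion."""
    out, i = [], 0
    while len(out) < total_samples:
        out.append(data[i])
        i = i + 1 if i + 1 < len(data) else loop_start
    return out

def transpose(data, ratio):
    """Read the stored samples at a different rate (with linear
    interpolation) so that the pitch is shifted by the given ratio."""
    out, position = [], 0.0
    while position < len(data) - 1:
        i = int(position)
        frac = position - i
        out.append((1 - frac) * data[i] + frac * data[i + 1])
        position += ratio
    return out

sustained = loop_playback(source, loop_start=len(source) // 2, total_samples=SR)
up_a_fifth = transpose(source, ratio=1.5)   # a ratio of 3:2 raises the pitch a 5th
```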

The blurring of a clear distinction between systems that rely on proprietary hardware and those that do not becomes even more evident when consideration is given to the wider spectrum of digital tools that have become available for manipulating and processing sound material of any origin, natural or synthetic. These range from simple editing facilities, which are little more than the digital equivalent of a razor-blade and splicing block, to more complex tools, which enhance the content of sound material by added reverberation, echo or chorus effects, or directly modify its spectral content by means of subtractive techniques such as filtering. The resources available for such applications range from self-contained processing units, which can be manually operated by means of controls on their front panels, to sophisticated computer-based facilities, which make extensive use of interactive computer graphics.

4. Systems applications.

As a result of the adoption of the MIDI communications protocol as a means of networking synthesis and processing devices at a control level, many of the techniques described above can be physically integrated as part of a single system. This consolidation has been taken a stage further with the development of matching communication standards for the high-speed transfer of the audio signals themselves in a digital format between different items of equipment. The personal computer is proving increasingly important in this context as a powerful command and control resource at the hub of synthesis networks, in many instances handling both MIDI and audio information simultaneously. The personal computer has proved particularly attractive as a programmable means of controlling the flow of MIDI data between devices, and a variety of software products are now commercially available. One of the simpler modes of operation involves generating MIDI data for a sequence of musical events by means of a keyboard, the computer being programmed to register the pitch, duration and amplitude (a measure of the key velocity) for each note in a data file, and the time at which each event occurs. Reversing this process allows the performance to be reproduced under computer control, using either the original synthesizer voice or an entirely different one.
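
A simplified sketch of the kind of data such a sequencer records is given below. It illustrates the note-event idea only, not the MIDI byte protocol or any real-time input; the NoteEvent structure, the play routine and the recorded values are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    time: float        # onset, in seconds from the start of the recording
    pitch: int         # MIDI note number (60 = middle C)
    velocity: int      # key velocity, 0-127, registered as a measure of loudness
    duration: float    # how long the key was held, in seconds

# A short 'recorded' performance; in practice these values would arrive
# over a MIDI link from a keyboard.
track = [
    NoteEvent(0.00, 60, 90, 0.45),
    NoteEvent(0.50, 64, 80, 0.45),
    NoteEvent(1.00, 67, 85, 0.90),
]

def play(track, note_on, note_off):
    """Replay the stored events in chronological order.  The two callbacks
    stand in for whatever synthesizer voice is chosen at playback time;
    a real sequencer would also wait until each event's scheduled time."""
    timed = []
    for e in track:
        timed.append((e.time, 1, e))                 # 1 = note on
        timed.append((e.time + e.duration, 0, e))    # 0 = note off
    for _, kind, e in sorted(timed, key=lambda x: (x[0], x[1])):
        if kind == 1:
            note_on(e.pitch, e.velocity)
        else:
            note_off(e.pitch)

play(track,
     lambda pitch, velocity: print("note on ", pitch, velocity),
     lambda pitch: print("note off", pitch))
```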

More elaborate sequencing procedures involve the layering of several performance components for a number of synthesizers using MIDI tracks in parallel, and/or direct editing of the MIDI data using graphic editing tools. Significantly, MIDI data is not concerned with detailed specification of the actual sound, merely with those characteristics that describe the articulation of its component elements in terms of note-events. A useful parallel may be drawn with the basic note elements of a musical score, for procedurally it is only a small step to the design of software that can generate traditional score information directly from MIDI data. The functional characteristics of programs specifically designed for the production of high-quality music scores are discussed in §VI below, but it should be noted that most sequencing packages provide at least basic facilities for reproducing MIDI data in common music notation, and in some the visual layout of the score is quite sophisticated.

Sequencing software represents only one aspect of the range of computer-based tools that are now available for use with MIDI equipment. These extend from composing tools, which directly generate MIDI performance data for instantaneous performance, to special editing facilities, which temporarily reconfigure the MIDI communication link in an exclusive mode in order to address and directly modify the internal voice-generating algorithms that determine the functional characteristics of a particular synthesizer. Such has been the impact of this universal protocol that most synthesis systems, whether commercial or institutional, as well as software synthesis programs such as csound, make some provision for MIDI control.

The progressive merging of hardware and software technologies means that it will soon not be possible to make any useful distinctions between hardware products such as synthesizers and audio processors and the all-purpose computer workstation with the capacity to service every conceivable music application. The increasing accessibility of powerful resources for music-making has created opportunities for everyone to explore this medium of expression, though how much music of lasting significance it will produce remains to be seen.

See also Electro-acoustic music.

III. Music theory and analysis

1. Introduction.

Computer applications in music theory and analysis can be broadly divided into two related fields of activity: the analysis of notated music and the analysis of music performance, in terms of both performance actions and the nature of the acoustic product. The harnessing of such powerful technology as a tool for studying the creative output of others has reaped many rewards but also some disappointments, mainly as a result of misplaced expectations as to what computer-based models of human activity can achieve. During the early years of commercial computing, linguists confidently predicted the development of programs that would automatically and reliably translate text from one language to another. Yet such goals remain elusive, not for any lack of basic computing power but more fundamentally from continuing difficulties encountered in devising programs capable of dealing with the contextual factors that alter the meanings of words or phrases from one situation to another. In a similar vein, some music researchers made overambitious claims as regards the possibility of developing computer programs that could generate works ‘in the style’ of a particular composer by applying theories of probability to a representative database of existing works. Today analysts have a much better understanding of the issues involved in modelling creativity, but it is perhaps inevitable that many projects remain distinctly speculative.

Those engaged in computer-assisted literary studies have an advantage in that their data already exists in a form that can be input directly to any conventional computer as a continuous string of alphanumeric characters according to one of the internationally recognized coding conventions, for example the American National Standards Institute (ANSI) character set, which is widely used for word processing applications. Unfortunately music notation does not readily lend itself to conversion into machine-readable form. Although it is possible to devise alphanumeric equivalents for all the pitches and durations in a conventional music score, matters quickly become complicated once account is taken of the need to time-stamp these note-events in terms of elapsed beats from the start of the score, and moreover to provide a means of reducing chords and polyphonic textures to a single alphanumeric character string. Further important decisions then have to be taken as regards what other notational aspects should be coded alongside such basic note-event information, for example bar-lines, clefs and all manner of graphic marks that may indicate important details of expression and articulation.
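
The kinds of decision involved can be illustrated by a deliberately simple, hypothetical encoding. It is not DARMS or any published convention; it merely shows that each note needs a pitch and a duration, that events must be time-stamped in elapsed beats from the start of the score, and that chords have to be reduced somehow to a single stream (here, by giving simultaneous notes the same onset).

```python
from collections import defaultdict

# A hypothetical note-event encoding, invented for illustration only.
events = [
    # (onset in beats, pitch, duration in beats)
    (0.0, "C4", 1.0),
    (1.0, "E4", 1.0),
    (2.0, "G4", 2.0),
    (2.0, "C5", 2.0),   # sounds together with the G4: a two-note chord
]

# Recovering the vertical dimension means regrouping events by onset.
simultaneities = defaultdict(list)
for onset, pitch, duration in events:
    simultaneities[onset].append(pitch)
for onset in sorted(simultaneities):
    print(f"beat {onset}: {' + '.join(simultaneities[onset])}")
```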

Since manual coding procedures are extremely time-consuming and error-prone, it is highly desirable that all the information likely to be required for any computer-based analysis of a given repertory be coded at the same time. Although some coding systems allow layers of information to be added to the primary database, such methods are best avoided. In an attempt to provide a more stable working environment, a number of general-purpose coding conventions have been proposed over the years, notably DARMS (Digital Alternate Representation of Musical Scores), which has attracted a number of disciples. Any coding system that attempts to be all-embracing, however, requires conventions that are necessarily complex and at times unwieldy. The creation of a committee to establish an ANSI coding standard for music data has provided a focus for international deliberations, but much work remains to be done.

The uncertainties over appropriate machine-readable representations of music have proved a major impediment to computer-assisted analysis, and it is thus perhaps inevitable that many of the more successful projects have concentrated on musical repertories that are notationally fairly straightforward, and have often relied on highly selective coding conventions devised by the researchers themselves. The development of advanced computer graphics facilities has led to the manufacture of optical scanning devices that can accurately recognize music symbols and thus provide a means of automatic transcription directly from the score. Although considerable progress has been made in the production of image-decoding software, such advances are constrained by precisely the same factors that have to be addressed in the design of alphanumeric coding systems; that is, what aspects of the score need to be converted into a machine-readable form, and what form this data should take. Another approach that has found increasing favour is a by-product of the development of the Musical Instrument Digital Interface (MIDI) for the transmission of performance data between digital synthesizers (see MIDI and §II, 3 above). Using little more than a simple MIDI link between a keyboard synthesizer and a desktop computer, and the services of a general-purpose music sequencing program, it has become possible to transcribe music material directly into a usable digital code, with the added advantage of immediate facilities for both visual and aural proofing of the data, the former being provided via the computer screen, the latter by the synthesizer itself.

2. Analytical applications.

In seeking to divide applications of computer-assisted analysis into separate categories it is important to recognize that many of the basic data-handling procedures relate to more than one type of application. What determines the success of a particular line of inquiry is not so much the means employed to extract data in a computer-readable form but the use that is actually made of this digital representation of musical information. One of the earliest uses of the computer as a tool for analysis, still central to many areas of research, involves the identification of recurrent features that can usefully be subjected to statistical analysis. A number of projects concerned with aspects such as the frequency and disposition of particular notes or groupings of notes in a corpus of works have been facilitated by the ability of the computer to carry out repetitive data analysis tasks reliably at extremely high speed. The relatively simple, linear construction of the vocal music of the medieval and Renaissance periods has proved particularly attractive for this type of stylistic research, leading to applications such as tests for authenticity based on known features of melodic, harmonic or rhythmic construction, the matching of text with music in terms of underlay and, in a more experimental context, empirical reconstructions of missing parts in music manuscripts. Ethnomusicologists have applied similar techniques to the study of folksong and other primarily monodic genres. Notwithstanding the increased complexity associated with Western music after 1600, repertories such as the trio sonatas of Corelli and Bach’s Wohltemperierte Clavier have also proved amenable to statistical analysis, and progress has been made with repertories from both the 19th and the 20th centuries, for example Schubert’s lieder and the works of Webern (see §IV, 4 below).
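
The simplest applications of this kind reduce to counting. The sketch below, with an invented two-tune ‘corpus’, tabulates the distribution of melodic intervals, the sort of repetitive task to which such projects put the computer; a real study would of course read an encoded corpus of many works.

```python
from collections import Counter

# Two invented melodies, represented as MIDI note numbers.
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 67],
]

intervals = Counter()
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        intervals[b - a] += 1            # melodic interval in semitones

total = sum(intervals.values())
for interval in sorted(intervals):
    count = intervals[interval]
    print(f"{interval:+3d} semitones: {count:2d} ({100 * count / total:4.1f}%)")
```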

Projects that use the computer merely to extract statistical information are only speculative in so far as the researcher is free to interpret the resulting data according to his or her own criteria. A number of analyses of repertories have nevertheless sought to apply computational procedures in a more pro-active way, where important processes of decision-making are built into the analysis process itself. Particular prominence has been attached to the application of linguistic theories to the study of music, and a number of researchers have sought to develop musical models that have features in common with those developed for the study of language. It has thus been possible to develop analysis techniques that draw on structural models such as rule-based grammars or alternatively explore the meaning of musical ideas and constructs, in some instances extending to the study of perceptual features, sonology serving as the musical equivalent of phonology.

It is here, most notably in the case of rule-based grammars, that the processes of re-synthesis are most commonly applied as a practical test of the theoretical model, in terms of both small-scale features that characterize the evolution of component musical ideas and larger-scale aspects such as overall structure. The processes of analysis are thus much more closely prescribed than those employed in applications of a purely statistical nature, for they seek to identify and codify primary structural features that can then be used as the basis of a generative model. Such models are by their very nature imperfect, since we do not yet understand the workings of the human mind sufficiently to be able to model the genius of others. Advances in the cognitive sciences, notably in the field of artificial intelligence, have led to progress in the construction of analysis software that can reveal important clues as to what objectively distinguishes the music of one composer from that of another (see §VIII below).
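
One way to grasp what is meant by a rule-based generative model is through a toy example. The rewrite rules below are invented for illustration and make no analytical claim about any repertory; they simply show how a grammar derived from analysis can be run ‘forwards’ to generate material as a practical test of the description.

```python
import random

# Invented rewrite rules: an abstract 'phrase' symbol expands into motifs
# and a cadence, and each of those expands into terminal note names.
rules = {
    "PHRASE": [["MOTIF", "MOTIF", "CADENCE"]],
    "MOTIF": [["C4", "E4", "G4"], ["E4", "G4", "C5"]],
    "CADENCE": [["D4", "B3", "C4"]],
}

def expand(symbol, rng):
    if symbol not in rules:              # terminal symbol: an actual note
        return [symbol]
    expansion = rng.choice(rules[symbol])
    return [note for part in expansion for note in expand(part, rng)]

rng = random.Random(0)
print(expand("PHRASE", rng))
```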

Rule-based analysis methods presuppose that the processes of composition are bound by underlying structural principles that can be described in algorithmic terms. Many objections can be raised to the use of such deterministic approaches in the study of musical creativity, but it has been argued that by identifying those characteristics that are amenable to quantitative analysis it becomes easier to identify those that are the result of altogether more complex processes of human decision-making. At this level it becomes possible to establish links with computer-based research into musical meaning. Here progress in a musical as opposed to a linguistic context has been altogether more measured, not least because music functions at a variety of contextual levels, and it is thus very difficult to define precise terms of reference for investigations that are based on semantic principles. One useful starting-point has been the comparison of the results of rule-based modelling with different source repertories to determine why what may be grammatically correct in a particular context is not necessarily aesthetically pleasing. Such tensions between objective and subjective criteria both constrain and usefully inform the whole field of computer-assisted analysis.

3. Performance applications.

It is necessary to strike a balance between the ease with which basic score information can be input by means of a MIDI facility and the ultimate superiority of a comprehensive coding system such as darms, which has the capacity to handle almost every conceivable score detail. Interest in the former input method has been greatly enhanced by a recognition that the direct coupling of a musical interface to the computer facilitates another type of computer-assisted analysis, one concerned with the study of the performance of a work rather than simply its visual representation as a score. Here the small timing errors that inevitably occur when keying in a score become the focus of attention. A critical consideration in this context is the practical means employed for registering performance information as MIDI data. In terms of physical appearance there is little to distinguish a conventional MIDI keyboard from an acoustic piano. To a concert pianist however, the synthetic ‘feel’ of the former will materially affect the quality of performance data thus registered. What is required here is an acoustic instrument equipped with sensors, which can detect performance actions but not actively interfere with them during the period of their registration. The Disklavier, manufactured by Yamaha, provides just such a combination of a conventional piano with MIDI recording and playback facilities, and similar engineering projects have led to the manufacture of a range of sensors for use with string, woodwind and brass instruments.

The analysis of instrumental performance characteristics points the way to potentially the most challenging sphere of computer-based music analysis, that concerned exclusively with the sonic result. Psychologists have long been concerned with the reception and interpretation of musical sound, but only recently have suitably powerful and sophisticated tools become available for the detailed extraction of component features from composite sound images. Here the processes of analysis and re-synthesis have found a number of intriguing applications, for example as the basis for automatic accompaniment systems that can accurately track the performance of a soloist. Although such advanced techniques of acoustic analysis are still relatively new, they hold important keys to future investigations into the finer details of performing practice, and their significance must not be underestimated.

IV. Historical research

Computer applications in music cannot work in any homogeneous way because the possible topics of interest – composition, performance, source filiation and dissemination, lives of musicians (individually or collectively), details of musical content, the perception and understanding of music, the physical aspects of sound, organology and so forth – are so heterogeneous. Almost any topic covered in this dictionary may be a potential object of study involving the use of a computer. Indeed the significantly expanded scope of this edition of the dictionary is a tribute to the value of computers, which have stimulated publishing and bibliographical management enormously since the 1980 edition.

1. Common issues and usage paradigms.

2. Bibliographical and thematic search tools.

3. Full-text resources.

4. Tools for the study of monophonic music.

5. Tools for the study of polyphonic music.

6. Databases of acoustical material.

7. Databases of graphic information.

8. Other electronic resources.

9. General issues.

1. Common issues and usage paradigms.

Some issues of content and access are common to all kinds of research project. These include the medium of intended access (paper, CD-ROM, Internet delivery etc.); whether, if the means of dissemination is electronic, access is provided to all of the data or only to selected parts; whether the data are searchable and, if so, by what criteria; whether the database is designed for periodic updating; and whether the results of one application can be fed to another for further processing. Quality issues also abound: how were the data gathered? how rigorously have they been verified? how well are they maintained? are they designed for extension and elaboration to answer new kinds of questions in the future?

From a procedural perspective the most fundamental point of distinction among computer-assisted studies is whether they seek to produce (and therefore are likely to store and manipulate) textual, statistical or audible results; that is, whether they are concerned with verbal data, symbolic representations of musical material or acoustic material. All are relevant to studies in music history principally because they enable the creation of large databases of selected materials.

2. Bibliographical and thematic search tools.

The most broadly accessible computer-based sources are bibliographical ones. Most readers will be familiar with electronic catalogues used by libraries and with electronic indexes of various kinds. Thematic-search algorithms, however, have a long history that extends back through at least the 19th century to idiosyncratic but highly serviceable systems for encoding and searching hymn and folk tunes. Around 1900 these often depended on the use of Tonic Sol-fa, a relative-pitch representation scheme developed in England. Many bibliographical tools now exist only in electronic form because of their enormous bulk. Before the age of electronic communications such resources might have been compiled at one location but would not have been accessible to users at other fixed locations. Such tools are now a common part of the ‘virtual’ world of scholars and musicians.

Among large-scale collaborations, the most notable are the Répertoire International des Sources Musicales (RISM) and the Répertoire International de la Littérature Musicale (RILM). In the case of RISM the computer was initially used to manage text data relating to the worldwide cataloguing of musical sources. RISM, which now consists of multiple databases, sets a benchmark for all projects that integrate identifying textual and musical information. Searches of the RISM A/II series database, for example, a multi-field index of several hundred thousand musical manuscripts from the period 1600–1825, have revealed unexpected coincidences between works attributed to major composers (e.g. Mozart) in one source and to minor or anonymous composers in other sources.

Other ambitious projects in music bibliography, such as Lincoln's work with the published sources of the 16th-century madrigal and motet and LaRue's survey of sources for the Classical symphony, have similarly focussed on identifying, through the creation of musical concordances across large repertories, the composers of works that are unattributed in one source. Such projects have other motives as well, such as discovering correlations of basic musical content that may have been obscured by surface difference (caused by transposition, ornamentation, instrumentation and so forth).

Computer representation, control and output of musical data have also facilitated the creation of a large number of thematic indexes. Although conversion of the data to a common format remains a goal for the future, the simple availability of printed reference works has been an enormous boon in the location and identification of sources. It has contributed to an ever greater appreciation of the extensive activities and accomplishments of many previously little-known composers.

A still unresolved issue in thematic searching is the definition of musically sensible methods of query. One-dimensional text-based search protocols produce many false results. Here the problem of generalizing musical information asserts itself: queries demand specificity in the formulation of the question, but different kinds of music may have few features in common. In melodic searches, for example, contour information (generalization) is valuable for queries related to recognition and to overcome the apparent differences introduced by transposition. It is also important in psychological research. Contour information can be misleading, however, for bibliographical searches where exact pitches may be essential, and for source studies where enharmonic differences in notation may signal evidence of divergence in scribal opinion. For some searches rhythmic and/or metrical information is desirable, where for others it is considered incidental. The growing tendency is to allow users the opportunity to select their own search strategies. In David Huron's ThemeFinder tool several thousand extracts from instrumental works of the 17th to the early 20th centuries may be searched at seven levels of detail in order to accommodate diverse motives for queries.
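
The value of contour generalization for recognition-orientated queries is easily demonstrated; the incipits below are invented, and the reduction to up/down/repeat symbols is only one of many possible levels of detail.

```python
def contour(pitches):
    """Reduce a pitch sequence to its contour: Up, Down or Repeat."""
    return ["U" if b > a else "D" if b < a else "R"
            for a, b in zip(pitches, pitches[1:])]

# An incipit and the same incipit transposed up a tone (MIDI note numbers).
theme      = [67, 69, 71, 67, 64]
transposed = [69, 71, 73, 69, 66]

print(theme == transposed)                    # False: the exact pitches differ
print(contour(theme) == contour(transposed))  # True: the contour is identical
```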

3. Full-text resources.

The ability to make rapid electronic searches of encoded texts adds value to works previously accessible only in printed form. Computer searches of the complete texts of dozens of works can sometimes be made with a single command. The biggest full-text database in musicology is concerned with the history of music theory: Thesaurus Musicarum Latinarum (TML), a compendium of several hundred Latin writings on music theory from the 6th to the 16th centuries. In TML, notation is represented using abbreviations for the standard Latin nomenclature (brevis, longa etc.), and notational patterns can thus be searched by these descriptive labels. TML is a collaborative project directed and managed by Thomas Mathiesen at Indiana University.

Multiple encodings of both text and music of the writings of Zarlino occur in a repository on CD-ROM, the first issue of an intended series called Thesaurus Musicarum Italicarum. Designed by the Dutch musicologist Frans Wiering, it contains scanned images, fully encoded texts (including Zarlino's numerous interpolations of Hebrew, Greek and other ancient languages with non-Roman alphabets), text-authority tables (for example to facilitate finding the same surname with multiple spellings), digitized images of drawings and music examples that are, where possible, presented in three ways – as scanned images, as darms encodings (see Printing and publishing of music, §I, 6) and as MIDI files (see §§II and III above). This multi-format multimedia tool seems likely to set another benchmark for many years to come.

Another kind of textual study is that in which details of the source itself are encoded together with its content. In Pinegar's study of the Notre Dame repertory, for example, the Latin abbreviations that differentiate the transcription styles of various anonymous scribes are encoded to facilitate grouping of sources by transcriber.

4. Tools for the study of monophonic music.

The encoding of complete musical works in large quantities is a daunting task. The first accumulation to be made widely available was the folksong archive designed by and encoded under the direction of Helmut Schaffrath (1942–94). These materials, which originally concentrated on songs in German-speaking lands and adjacent regions, are encoded in one field only of a text database. Information about place of origin and other details can be retrieved together with (or, if the user desires, separately from) the work's encoded melody. The Essen folksong collection (so called after its original home at the Essen University Hochschule für Musik) grew to encompass materials from many other parts of the world (China, Australia, Israel, Poland, Central and South America etc.). The musical data has been translated into several other formats and various kinds of queries can be made. The project is now maintained by Ewa Dahlig at the Polish Academy of Sciences in Warsaw.

A considerable number of projects concerned with various chant repertories have made extensive use of the computer for data representation and control. Partly because of our ignorance of its interpretation, chant is simpler to handle in the computer than most other repertories: it is legitimately monophonic, precise durational values often cannot be assigned to neumes and therefore need not be encoded, and it raises no issues of instrumentation or ornamentation. In short, there is relatively little that is secure enough to encode. Nonetheless, the encoding of chant and folksong materials has facilitated numerous studies of significant interest related to centonization, investigating, for example, how tune families are related. Only through recursive study (the successive feeding of results to new queries) can such questions become refined and persuasive answers found.

5. Tools for the study of polyphonic music.

The largest project devoted to the creation of databases of standard repertory (in the form of machine-readable scores and parts) was initiated by Walter B. Hewlett in the early 1980s. Now maintained by the Center for Computer Assisted Research in the Humanities at Stanford University, the databases contain hundreds of works, chiefly from the 18th and 19th centuries, by composers such as Bach, Handel, Haydn, Mozart and Beethoven. The encodings, which attempt to preserve all essential details relating to notation, sound and source idiosyncrasies, are eventually translated into several formats different from that in which each is initially captured in order to support diverse application types (see Printing and publishing of music, §I, 6).

Another potential source of score databases resides in the large number of collected editions that are now created from machine-readable material. That these resources are not generally regarded as electronic reflects confusion about rights and obligations and the absence of a common format for electronic distribution. Sometimes the format is not even a common one within the edition (e.g. Verdi, Mozart). Encodings undertaken with electronic distribution as the primary or only means in mind tend, regrettably, to concentrate on out-of-copyright editions, thus ignoring discoveries, re-readings and re-attributions of the past 75 years. Greater collaboration and cooperation would be of enormous benefit to scholars and performers of the future.

Among more specialized projects, the work of John Stinson and his Australian colleagues in encoding several thousand polyphonic works of the 14th century (in the scribe project) has demonstrated how many different goals can be pursued in parallel. The encoded materials have produced scores for performance, have preserved important information about the original disposition of the notation and have been used in analytical projects of various kinds. An ambitious approach to questions of authorship and attribution was taken by Lynn Trowbridge in a project concerned with a corpus of roughly 100 works from the Renaissance. Through complete encoding (in darms, with extensions for mensural notation) and extensive statistical evaluation, Trowbridge was able to assess the disparate claims for Binchois, Busnoys, Du Fay and Ockeghem.

Efforts to define those traits that differentiate one composer's style from another's are approached in a totally different way in the work of David Cope, a composer at the University of California at Santa Cruz. Cope's program Experiments in Music Intelligence (EMI) identifies and stores small, recurrent melodic, harmonic and rhythmic traits (called ‘signatures’) and recombines them to create new works in the style of a designated composer. Among the composers whose styles EMI simulates are Palestrina, Bach, Mozart, Chopin, Prokofiev, Joplin and Rachmaninoff. Employing the same principles, EMI has also succeeded in composing imitations of such diverse repertories as Broadway ballads and Balinese gamelan music. When EMI's signatures are combined with general information about the ‘parent’ works, micro-chronologies of style change can be traced at a level of precision that has not previously been possible.

6. Databases of acoustical material.

With the advent of digital recording, stored performances have come to constitute another potentially important source of ‘data’ for computer applications. The analysis of musical performance based on recordings shares some obstacles with the nascent art of analysing computer-generated music: streams of acoustical material are not so easily segmented and structured as the score-based data on which manual analyses have for so long depended.

The aim in performance analysis is usually to document idiosyncrasies or progressive changes of interpretation. The aim in the analysis of electronic music is more fundamental: to establish basic concepts for understanding highly experimental and heterogeneous repertories. MIDI files that represent real-time performances, in the manner of piano rolls, may also capture such performance information as deviation from a general pulse.

Acoustical information that is disembodied from complete works is also proving to be useful. One database of the sounds of orchestral instruments is sufficiently grounded in numerical parameters to facilitate its rapid incorporation into protocols for perceptual research. MIDI files and other representations of sounding works have been used to experiment electronically with tempos and tempo changes, orchestration and dynamics. Such experiments may focus not only on the ordinary variables of potential performance but also on the implementation of past theories of musical performance – proportional notation in Renaissance music, ornamentation in the Baroque, prescriptions for tempo rubato in the Romantic period and conducting preferences in the recorded repertory.

Access to acoustical information is sometimes most valuable when it is provided through documents consisting mainly of text. Thus, Sanford's performed examples of Baroque ornaments (1995) are more informative than written descriptions of them would be.

7. Databases of graphic information.

The establishment of databases of graphic information has been encouraged by two factors: the quick access provided by the World Wide Web and the need for collectors of information to store and manage images. Graphic capture has been used in the discipline in all the same ways as photography: to store and compare handwriting samples and watermarks; to create digital catalogues of musical instruments and other artefacts; to preserve the original appearance of material later transcribed and printed; and to study anomalies of musical notation.

A growing use of digital images is that which makes available whole libraries of actual music – sheet music, scores, parts, manuscripts, early prints and so forth. The camera is indifferent to the subject. These serve the traditional purpose of providing detailed information suitable for study or performance, but they can also be arranged to serve the purpose of browsing: the user can scan hundreds of sources in search of a particular one, which can be recognized by its image where words alone would not suffice to identify it. By scanning large quantities of material, users may also gain insight into whole repertories of which they have little knowledge. It is not currently possible to search databases of graphic images without verbal or numerical handles of some kind. Graphics files have proved to be a useful addition to pedagogical software for music history, theory, appreciation and organology.

8. Other electronic resources.

It is difficult to foresee which of the computer's roles in music research may be most important in the future. A widespread belief in the sciences – that electronic journals will supersede paper ones – has yet to catch hold in most humanities and performing arts disciplines. Yet some early entries in the field of music are very respectable: Music Theory Online, initiated by Lee Rothfarb in 1994, appears at frequent intervals with articles by a large number of reputable scholars. The electronic Journal of the Society for Seventeenth-Century Music, initiated by John Howard in 1995 and edited by Kerala Snyder, has also taken maximum advantage of the supplementation of text with graphics and sound. The online availability of Doctoral Dissertations in Musicology (from 1997) has greatly speeded up access to information on research in progress. All three resources are accessible via the World Wide Web.

9. General issues.

As databases grow larger and more accessible, issues related to authorization of use become increasingly pressing. These issues are not specific to music applications, but they are particularly difficult to resolve because of the diversity of data types and the many possible degrees of completeness. Many machine-readable resources of potentially great importance remain unavailable for legal or administrative reasons. Intellectual property issues discourage the development of certain kinds of applications that are technically feasible.

Electronic publications remain invisible to those who do not have access to computers. Print technology was relatively stable and predictable for almost 500 years, but computer technology changes at frequent intervals, and it is by no means clear what will succeed the World Wide Web. Despite such uncertainties, functional applications promise to change the nature of scholarship – for example by bringing historical, theoretical and acoustical data into single applications (e.g. Craig Sapp's multimedia extensions to ThemeFinder), by removing the need for time-consuming repetitive tasks, and by facilitating the visualization of material that in the past could only be experienced temporally.

Computers and music

V. Ethnomusicology research

Explorations into the possible uses of computer technology in ethnomusicology are still very much in their infancy. Until the late 20th century attitudes toward computers within the discipline were at best ambivalent; as symbols of modernity, they clashed with the preoccupations with ‘authenticity’ and ‘exotic’ cultures that still permeate much ethnomusicological thinking. But it is also true that the kinds of issue that typically engage ethnomusicologists – the relationship between musical style and social context, processes of musical change, the construction of meaning in music – do not lend themselves readily to the statistical correlations and simulations facilitated by computer technology. With developments in the realms of digital sound processing and of hypermedia, however, new areas of exploration have emerged. More compatible with the concerns of the discipline, these possibilities are being embraced with greater enthusiasm by the ethnomusicological community.

1. Computers as analytical tools.

One of the first attempts to use computer technology in ethnomusicological research was the Cantometrics project, headed by Alan Lomax in the 1960s. In order to establish cross-cultural correlations between stylistic musical features and aspects of social structure, Lomax and his team created large cross-cultural data banks of musical examples derived from over 200 different cultures. Cantometrics has been strongly criticized for its essentialized conceptualization of culture and for its decontextualization of musical meaning. Even where correlations have some degree of universal validity, the conclusions are so general that they hardly seem to justify the effort required by the methodology.

Computer-assisted analytical procedures have been more successful, however, when used in conjunction with ‘traditional’ field methods. Scholars have complemented in-depth field research with the use of extensive databases: Hae-kyung Um, for example, has been able to establish correlations between the demographic profile of Korean migrants in the former USSR and aspects of their attitudes toward Korean musics and the musics of the host communities. A methodological approach of this type permits researchers to extend the geographical validity of their ethnographic data, to encompass related communities visited for relatively short periods.

Ethnomusicologists have also used computer technology to help analyse specific musical systems. Pioneer research in this sphere was conducted in the 1980s by James Kippen and Bernard Bel, who drew on the principles of generative linguistics in their study of the North Indian tablā repertory. Accepting the notion that a certain musical rationality governs decisions made by performers, they attempted to determine the ‘grammatical’ rules acquired intuitively by tablā players to produce culturally acceptable variations of qā'idas (fixed theme-and-variation tablā compositions), from which further variations are derived through permutation, repetition and substitution. Using these rules, they created a computer program that generated qā'ida variations as a means of gaining insight into the compositional procedures used in tablā performing practice. While musical styles can be identified by a set of conventional features, a performer's competence is often judged in relation to his or her ability to break the rules successfully. This raises the question as to whether it is possible to construct a computer program that is capable of distinguishing between conventional motifs and creative innovation.
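
The flavour of such a rule-based generator can be suggested in a few lines of Python. The sketch below is purely illustrative: the bol names and rewrite rules are invented for the example and do not reproduce the grammar actually derived by Kippen and Bel.

import random

# Illustrative rewrite rules in the spirit of a generative grammar for
# qa'ida variation: each rule lists the substitutes permitted for a pair
# of bols (drum strokes).  Rules and bols are invented for this sketch.
RULES = {
    ("dha", "ti"): [("dha", "ti"), ("ti", "dha"), ("dha", "dha")],
    ("ge", "na"):  [("ge", "na"), ("na", "ge")],
}

def vary(theme, n_passes=2, seed=None):
    """Produce one variation by substituting rule-governed pairs of bols."""
    rng = random.Random(seed)
    phrase = list(theme)
    for _ in range(n_passes):
        i = rng.randrange(len(phrase) - 1)
        pair = (phrase[i], phrase[i + 1])
        if pair in RULES:
            phrase[i], phrase[i + 1] = rng.choice(RULES[pair])
    return phrase

theme = ["dha", "ti", "ge", "na", "dha", "ti", "ge", "na"]
print(vary(theme, seed=42))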

2. Digital sound processing.

The development of accessible and user-friendly software packages for the digital processing of acoustic signals has been greatly welcomed within ethnomusicology, particularly as an aid to editing field recordings. It is especially useful for restoring historical field recordings, and a number of archives are digitizing their collections of wax cylinders, making them more widely available. Digital sound processing has also had an impact on the transcription of field recordings. The Music Mapper, one of the first computer applications to facilitate transcription, was designed in the late 1980s by Katherine Vaughn; Emil H. Lubej subsequently developed the EmapSon, a software package that produces a sonagram capable of isolating distinct pitches in polyphonic musics. Although most ethnomusicological transcription up to the end of the 20th century was done aurally, computer-assisted transcription will probably become the norm in the near future.
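
At the heart of such tools lies a short-time spectral analysis of the recording. The following sketch, written in Python with numpy and standing in for no particular package, extracts the most prominent frequency in each analysis frame – roughly the first step a computer-assisted transcription aid performs before any pitch labelling.

import numpy as np

def strongest_pitches(signal, sr, frame=2048, hop=512):
    """Return the most prominent frequency (Hz) in each analysis frame."""
    window = np.hanning(frame)
    peaks = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        peaks.append(np.argmax(spectrum) * sr / frame)   # peak bin -> Hz
    return peaks

# Synthetic test tone: one second of a 440 Hz sine wave at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(strongest_pitches(tone, sr)[:3])   # ~440 Hz, to within one bin (c.21.5 Hz)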

3. Hypermedia.

In ethnomusicology, as in other disciplines, developments in hypermedia have had a significant impact. CD-ROM and on-line databases of collections in music archives and libraries are proliferating, and many include links to sound files and other material of ethnomusicological interest. All the leading ethnomusicological societies have websites, which provide worldwide links to other relevant sites. A number of on-line journals (Ethnomusicology On-Line, Music and Anthropology, Oideion) have appeared since the mid-1990s; although these publications are typically linear in format, they allow authors to illustrate texts with recordings and video clips, as well as with the graphic forms common to printed media.

Ethnomusicologists have found hypermedia to be particularly well-suited to the presentation of descriptive ethnography, and many have created personal webpages to provide additional ethnographic information – such as sound files, transcriptions of interviews, life histories of informants, maps, photographs and video clips – to complement their publications. A few ethnographic sites are self-contained entities, such as the website/CD-ROM Venda Girls' Initiation Schools (designed by Suzel Ana Reily and Lev Weinstock). Based on John Blacking's field data, the project was conceived as a ‘virtual field site’, to provide users with a variety of media through which to glimpse Venda ritual life in the later part of the 1950s.

Ethnomusicologists have also been exploring the educational potential of hypermedia. A series of interactive hypercard stacks created by Richard Widdess provides students with an excellent introduction to ethnomusicological debates on musical transcription. An interactive website designed by C.K. Ladzekpo introduces students to African music and dance; along with informative commentary on a wide variety of musical styles, practical exercises explicate the complexities of African polyrhythmic structures. A website on the fiddler Clyde Davenport (designed by Jeff Titon) provides an interesting illustration of how the researcher's musical perceptions of stylistic affinity can contrast with those of the tradition-bearer. The internet too has become a field of ethnomusicological inquiry, as a growing number of young researchers investigate the formation of musical communities in the virtual spaces of listservs and chatrooms.

Computers and music

VI. Music publishing

1. Traditional methods.

The publication of music in traditional print form involves first encoding the music and then editing its graphics image to produce scores and/or parts (see also Printing and publishing of music, §I, 6). There are many ways to enter the music, but none is so perfected as to render editing of the visual image unnecessary. In this model, the music remains in a fixed form.

In classical music great progress has been made since the mid-1980s in publishing the large collected editions that represent the core of our knowledge of the standard repertory. Those who leaf through the volumes of the Neue Bach Ausgabe or the Neue Mozart Ausgabe, begun in the 1950s, can witness the transformation of publishing from the elegant fonts and layouts of traditional music typography through a range of early systems for computer typesetting of music. Many bizarre results can be seen in the music examples of books published in the 1980s and early 90s, when several dozen systems for music typography were in gestation: users were often so triumphant at controlling the typesetting process that they seem not to have noticed how many compromises were made in appearance, legibility and the visual grammar of common notation that had for so long been taken for granted.

Although most popular music prints and a substantial amount of classical music published in Europe and North America are now produced electronically, relatively little use is currently made of the full potential of computer technology to modify and update files endlessly, that is, to produce an endless stream of versions of music.

The publication of music by computer remains a laborious task. Substantial human labour is required either to input manually the codes that represent the music or, if the music is ‘acquired’ with the help of an electronic keyboard or an optical scanning system, to correct the errors of the provisional output. Electronic tools to search for features of style, analogous to those used for searching text files, are not yet widely available.

2. Databases.

In some cases, generally confined to the scholarly community, the codes used to produce printed scores and parts are made available directly to users through a database. Any code that adequately represents the repertory in question can be used for this purpose. The data may represent complete musical works with little textual or explanatory material (as in the case of the MuseData repertories encoded by the Center for Computer Assisted Research in the Humanities at Stanford University). It may constitute only one field of a database concerned primarily with textual or explanatory material (as in the case of the RISM A/II database).

Some of these encodings may have been designed for a purpose other than printing (for example for analysis) but may be exportable to a program that facilitates notational display (e.g. Humdrum data, which is encoded for analysis but can be exported to and displayed in the program called MUP). Database programs are valuable when the quantities of material are vast and/or when a continuing need to revise or expand the content can be foreseen.
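
A toy example indicates how such analysis-orientated encodings are structured. The fragment below is a minimal single-spine Humdrum **kern melody together with a deliberately simplified Python reader; genuine **kern data carries far more information (multiple spines, ties, beams, articulation) than this sketch acknowledges.

# A single-spine **kern melody: '4c' is a crotchet (quarter-note) middle C,
# lines beginning with '*' are interpretations, '=' marks barlines.
KERN = """**kern
*M4/4
=1
4c
4d
4e
4f
=2
1g
*-"""

def parse_kern(text):
    """Extract (pitch, duration) pairs from plain note tokens only."""
    notes = []
    for token in text.splitlines():
        if token.startswith(("*", "=", "!")):   # skip interpretations, barlines, comments
            continue
        duration = "".join(ch for ch in token if ch.isdigit())
        pitch = "".join(ch for ch in token if ch.isalpha())
        notes.append((pitch, int(duration)))
    return notes

print(parse_kern(KERN))   # [('c', 4), ('d', 4), ('e', 4), ('f', 4), ('g', 1)]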

3. The Web.

A third mode of publication in which the computer plays a role is that facilitated by the World Wide Web. The Web can be used as an interface to databases and electronic archives of various kinds. Bit-mapped images of printed pages of music are currently the most popular medium for use on the Web. The fluidity of environment that both encouraged and crippled many programs and projects involving the production of printed music in the 1980s now characterizes Web access software: the outcomes of battles waged far beyond the control of musicians, music-lovers and music scholars will determine whether the uniform interface now used to make material available on the Web will endure.

The Web has the ability to link sound and graphic files with text. Thus online journals in music, such as Music Theory Online and the Journal of the Society for Seventeenth-Century Music, can provide samples from recordings or examples of methods of performance as audio files that can be heard while the online text is being read.

Web publications are subject to the same liabilities as printed ones under national and international laws relating to intellectual property, but the subject of jurisdiction in virtual space is problematical. In addition, many potential providers of material are ignorant of all such laws. Credits, acknowledgements and statements of ownership are easily separated from the content to which they pertain. Laws covering intellectual property vary from one country to another. The ultimate disposition of material in the ‘virtual space’ provided by the Web has yet to be determined. When a user has the technical capability of modifying material, the provider's claim to ownership, which rests on a ‘fixed form’, can be diminished (see also Copyright).

4. CD-ROM publishing.

The publication of databases and software on CD-ROM resolves the issue of preserving and distributing information in a ‘fixed form’. Hence commercial enterprises may prefer this method of publication (or its technically upgraded successors) for some time to come. The capability of CD-ROMs to provide links to indexed points on sound CDs has stimulated the proliferation of music-appreciation titles. Specific single works can be packaged with a textual apparatus explaining a composer's background, the musical genre of the work and some analytical details (all possibly illustrated with paintings, diagrams or other graphic material).

When audio sound is linked with database software, a novel kind of archiving can be achieved. For example, in the CD-ROMs produced by the IDEAMA project (a collaboration of the Center for Computer Research in Music and Acoustics at Stanford University and the Zentrum für Kunst und Medientechnologie in Karlsruhe), a historical archive of electronic music is coupled with its own catalogue.

Computers and music

VII. Music education

1. Classroom software.

In addition to the training in computer music technology that followed inevitably from the growth of mass-produced studio and stage equipment, computers found a place in broader music pedagogy as agents of instruction, particularly in universities. Early developments, a number of which were based on the open-ended plato system (University of Illinois, from 1959), took advantage of the repeatability of computer programs and their ability to judge answers as straightforwardly correct or incorrect. The most typical application was to aural training, particularly in the recognition of intervals and chords.

Such programs became more widely available by the mid-1970s as computer systems decreased in cost and size, and constructive attempts were also made to develop ‘courseware’ (i.e. software for teaching) in other musical fields such as melodic dictation and part-writing. There was, and remains, considerable duplication of effort in the development of such drill-and-practice courseware between different institutional centres of activity. In part this may be said to have arisen from differences of pedagogical opinion, but it is also true that the advent of microcomputers (notably the Atari ST, which had built-in MIDI facilities, and the Apple II) made the development of the most basic courseware feasible for many who were primarily musicians rather than computer scientists.

Among later and more complex rule-based courseware, the palestrina program for the Apple Macintosh (D.E. Jones, Dartmouth College, 1987) was impressive in its ability to diagnose errors in two-part species counterpoint exercises, giving detailed and immediate feedback to the student as to the rule(s) that had been transgressed. A program of this nature embodied a level of development expertise that could not readily be reproduced to meet the pedagogical priorities of individual institutions. The dissemination of music courseware even of this quality was restricted, however, not merely because of the limited number of potential users, but also in many cases because of a form of consumer resistance known as the ‘not-invented-here syndrome’, representing a lack of pedagogical flexibility on the part of instructors and course designers.

Concern for better communication led to the foundation of organizations such as the Association for Technology in Music Instruction (USA, 1975–) and the support of university music education under the Computers in Teaching Initiative (UK, 1989–99). Taken together, interest groups of these kinds can be said to have fostered not only the development and dissemination of courseware, for example under the Teaching and Learning Technology Programme (UK, 1992–), but also its embedding in educational practice. In an explicit change of emphasis, the European Academic Software Awards programme gave primacy to embedding in the late 1990s, reflecting a widespread departure from the idea of using the computer as a substitute instructor, towards making best use of technological resources alongside teachers, libraries and other services within a total learning environment.

In practice, embedding courseware has implications not only for course design but also for classroom technique. This was true even of stand-alone programs designed to support specific learning tasks: palestrina, for example, had a repertory of stylistic criticisms that it could apply to exercises that were technically correct, and these could be used as a starting-point for class discussion of subjective questions of style and technique, perhaps leading in turn to a broader consideration of the value of rule-based theories for the historiography of musical composition. Similarly, in courses on post-tonal analysis the use of a computer program rather than pen and paper to identify pitch-class sets (many systems for this were developed) allowed the pedagogical focus to shift rapidly from a technical to a conceptual level. Stand-alone programs designed to support skills used in Schenkerian analysis were also developed (e.g. J.W. Schaffer, University of Wisconsin, Madison, 1990; A. Pople, M. Pengelly and K. Kirkpatrick, University of Lancaster, 1996).
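
A pitch-class set routine of the kind referred to is compact enough to sketch here. The function below follows Rahn's packing convention for prime forms (which differs from Forte's published forms for a handful of set classes) and is offered as an illustration rather than a reconstruction of any of the programs mentioned.

def _best_rotation(pcs):
    """Normal order: the rotation of the set most packed from the right."""
    pcs = sorted(set(pcs))
    rotations = [pcs[i:] + [p + 12 for p in pcs[:i]] for i in range(len(pcs))]
    packing = lambda rot: tuple((rot[j] - rot[0]) % 12
                                for j in range(len(rot) - 1, 0, -1))
    return min(rotations, key=packing)

def prime_form(pcs):
    """Prime form of a pitch-class set (Rahn's convention)."""
    candidates = []
    for form in (pcs, [(12 - p) % 12 for p in pcs]):   # the set and its inversion
        rot = _best_rotation(form)
        candidates.append(tuple((p - rot[0]) % 12 for p in rot))
    return min(candidates, key=lambda c: tuple(reversed(c)))

print(prime_form([11, 2, 3, 7]))   # B, D, Eb, G  ->  (0, 1, 4, 8)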

The embedding of open systems demanded a different approach that was likely to involve both students and teachers in some kind of development work to suit local needs or simply as hands-on training in the use of technology. The most prominent of these systems included c-sound (B. Vercoe, MIT, 1986–), the lisp kernel (J. Rahn, University of Washington, 1984–) and the composers desktop project (several universities including York, Keele and Huddersfield, together with private individuals, 1987–). Most of the resulting pedagogical applications were in the field of electro-acoustic composition, for which technology training was in any case clearly desirable. Open systems for music analysis included morphoscope (M. Mesnage, Brussels, 1994–) and the humdrum toolkit (D. Huron, University of Waterloo, Canada, 1994–). Brinkman's comprehensive guide to pascal programming for musical purposes (1990) originated in his courses at the Eastman School of Music and was intended to facilitate similar teaching elsewhere.

By the mid-1990s degree programmes in music technology had multiplied around the world. Among courses in computer applications outside composition, the course in computer applications in musicology developed at the University of Utrecht was notable on account of its outstanding course materials (F. Wiering, 1989). The use of computer technology in pre-university music education and lifelong learning was encouraged by major electronic instrument manufacturers, who put significant resources into the support of educational projects in a number of countries.

The advent of widely affordable multimedia systems facilitated the development of further kinds of courseware and learning resources. Music was a natural subject for educational multimedia because of the many possible interactions of sounds, text, pictures and/or video involved. Whereas ordinary multimedia titles, delivered on CD-ROM, were bound to include music on the CD-ROM itself, music courseware had the facility to rely on the use of normal audio CDs. These could be played through a computer from its CD-ROM drive, while coordinated educational material ran simultaneously from another storage medium such as a hard disk.

2. Multimedia resources.

Unlike the vast majority of earlier courseware packages which were issued by institutions and small companies, a considerable number of multimedia titles making use of CD audio were produced commercially by companies such as Warner New Media and Voyager. Outstanding among these was a guide to Beethoven's Ninth Symphony, incorporating material devised and written by the musicologist Robert Winter, with the audio tracks included on the CD alongside the text and graphics (1988). This and a few other titles were later reissued with greater market prominence by the Microsoft company in tandem with other packages in the genre of ‘infotainment’, for example about dinosaurs. A principal characteristic of such packages was their use of hypertext links, providing a means by which users were free to follow a chain of concepts as if at random through a system of instant cross-referencing. Taken to extreme, this was akin to reading a book piecemeal by means of its index; developers generally took care also to include conventional fixed paths through the material in order to convey such constructions as historical narratives and chronological descriptions of musical passages.

Many less commercially ambitious CD-ROM guides were produced by individuals and teams working in educational institutions. Like most if not all of the Voyager and Warner releases, these were typically developed using readily available if unsophisticated software such as Apple's hypercard, Allegiant's supercard and Asymetrix's toolbook. Dedicated packages were also produced to allow musicians with no programming experience readily to prepare educational materials that were presented in coordination with the continuous playback of audio CDs through a computer system (e.g. M. Pengelly and A. Pople, University of Lancaster, 1996). Packages such as hypercard were also used to develop teaching materials that did not require continuous audio, and as the basis for a new generation of aural training packages, few of which made serious pedagogical advances on their precursors.

3. Web resources.

It was clearly to be expected that such materials, like earlier courseware, would be used for self-paced instruction, possibly within structured courses but certainly at times and in places convenient to the individual learner. This emphasis on the user's choice and discretion was greatly augmented through the rapid expansion of the World Wide Web as a form of international self-publication in the mid-1990s. To the reader, Web-based materials seemed to follow modes of presentation familiar from stand-alone multimedia titles, but their delivery came in fact through quite different technology, embodying seamless communication across continents between networked computers. The quantity and range of material on offer was virtually beyond human comprehension, and the normal method of serendipitous access was known as ‘browsing’. At the same time, small-scale configurations of material could be structured with a view to communicating linked concepts in a coherent order, if this was desired.

As it matured, the Web seemed likely to satisfy a number of key requirements for educational software. It was easy for readers to use, it enabled students and teachers to accumulate, develop and share material, and with the aid of powerful search facilities it enabled information to be located even when its existence was merely surmised and its whereabouts unknown. It provided access to library catalogues and online journals, thus linking coherently with earlier forms of information delivery. It could simulate stand-alone multimedia courseware and support structured courses, but in such a way that the reader could at any point seek information further afield with minimal distraction from the task at hand. Against this, it seemed that international copyright laws, which had been slow to keep pace with the practices of Web-page authors, might severely restrict the quantity of materials available if suitably amended and enforced. Moreover, it was not always feasible for students to distinguish high-quality sites from others that might mislead or misinform them, and the scale of the Web made it impossible for teachers to assess in advance the material their students might see with this in mind.

Above all, the last years of the 20th century saw the computer reach the status, in the richer countries of the world at least, of an everyday item of consumer technology. Its educational uses reflected this in their increasing division between open systems making use of readily available and easily used software, and dedicated systems which in many cases had reached the stability of successive upgrades and an established user base. It seems likely that within a short period of time the use of computers in music education will be completely unremarkable.

Computers and music

VIII. Psychology research

Developing computer technologies have made a significant impact on both the conduct and the nature of enquiries into music psychology. In traditional laboratory work this grew out of a general recognition that synthesized sound was able to provide well-regulated stimuli for research into psycho-acoustics and other fundamental aspects of listening. This was exemplified in the classic experiments by Shepard (1964) which relied on sounds unobtainable by other means. By the early 1980s it was common for the presentation of such stimuli to be controlled by computer for methodological reasons, allowing investigators greater rigour in the exclusion of confounding factors in perception, by such means as generating randomized orderings of stimuli and maintaining constant time-intervals between the presentation of successive items.
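
The control logic involved is simple but methodologically important, and can be sketched in a few lines; play() below is merely a stand-in for whatever synthesis or MIDI routine actually sounds each stimulus.

import random
import time

def play(stimulus):
    """Stand-in for the audio output routine used in a real experiment."""
    print("presenting", stimulus)

def present_stimuli(stimuli, isi_seconds=3.0, seed=None):
    """Present stimuli in a randomized order with a constant inter-stimulus interval."""
    order = list(range(len(stimuli)))
    random.Random(seed).shuffle(order)
    for index in order:
        play(stimuli[index])
        time.sleep(isi_seconds)        # constant gap between successive items
    return order                       # returned so responses can be matched to stimuli

print(present_stimuli(["minor 3rd", "perfect 5th", "tritone"], isi_seconds=0.1, seed=1))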

From around the same time there was significant growth in the availability and sophistication of computer-controlled musical instruments, eventually coordinated through the MIDI standard. These devices enabled psychologists to begin to address criticisms that many laboratory experiments had done no more than investigate phenomena so far removed from typical listening behaviour as to be irrelevant to processes of musical thought. Musical extracts could now be presented in a form that might reasonably be taken to represent ‘real’ music played by humans on conventional instruments, while allowing the investigator to maintain precise control over potentially salient features such as momentary variations in speed, intonation, timbre and dynamic level. Conversely, the deployment of such nuances in human performance of music could itself be studied through the analysis of data obtained from individual performances on instruments linked directly to computers, as seen in the work of Clarke (1984–5, 1995) and others on expressive microtimings.
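
A rudimentary version of such an analysis can be stated directly: given the nominal score positions of the notes and the onset times captured from the instrument, a single underlying tempo is fitted by least squares and each note's expressive deviation from it is reported. The figures below are invented for the illustration.

def timing_deviations(beats, onsets_s):
    """Deviations (in seconds) of performed onsets from a fitted steady tempo."""
    n = len(beats)
    mean_b = sum(beats) / n
    mean_t = sum(onsets_s) / n
    slope = (sum((b - mean_b) * (t - mean_t) for b, t in zip(beats, onsets_s))
             / sum((b - mean_b) ** 2 for b in beats))       # seconds per beat
    intercept = mean_t - slope * mean_b
    return [t - (intercept + slope * b) for b, t in zip(beats, onsets_s)]

beats  = [0, 1, 2, 3]                       # nominal score positions
onsets = [0.00, 0.52, 1.01, 1.58]           # onset times recorded via MIDI
print([round(d, 3) for d in timing_deviations(beats, onsets)])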

For scientists and theoreticians working in artificial intelligence (AI), a discipline that seeks to model human thought and behaviour using computers, music as a domain of enquiry has never been as important as the machine understanding of logical reasoning, natural language or the visual world. Nonetheless, AI researchers of the calibre of Marvin Minsky, Terry Winograd and Stephen W. Smoliar are among those who have contributed to the subdiscipline of AI-music. Winograd's study of harmonic syntax (1968) remained impressive for decades after its publication and was paradigmatic in its synthesis of elements from traditional music theory and structuralist grammatical theory within the then current algorithmic approach to computation.

Many subsequent projects were likewise indebted to the relative ease with which pre-existing theories of music could be expressed as formal rule-based systems. It was perhaps in consequence of this that such newer developments in mainstream music theory as were overtly cognitive in orientation were frequently overlooked by AI-music researchers. Similarly, the frequent choice of restricted but well-understood corpora as a focus of investigation, such as J.S. Bach's chorale harmonizations, was brought about by a desire to build on the existing body of informal meta-level musical knowledge, following the paradigm of an ‘expert system’. Since the output of expert systems must be testable against what can be produced by humans, it was common for the goal of such research to be the composition by computer of short musical works intended to fall within clearly recognizable styles.
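
The appeal of such rule-based formulation is easy to demonstrate in miniature. The sketch below assigns a diatonic triad to each note of a melody under a single hand-written constraint; the chord table and the rule are invented for the illustration and are far simpler than the expert systems described here.

# A miniature rule-based harmonizer in the spirit of an 'expert system':
# each melody note (pitch class) is assigned a C major diatonic triad that
# contains it, with one hand-written rule: avoid immediate chord repetition.
TRIADS = {
    "I": {0, 4, 7}, "ii": {2, 5, 9}, "iii": {4, 7, 11},
    "IV": {5, 9, 0}, "V": {7, 11, 2}, "vi": {9, 0, 4},
}

def harmonize(melody_pcs):
    chords = []
    for pc in melody_pcs:
        options = [name for name, pcs in TRIADS.items() if pc in pcs]
        options = [c for c in options if not chords or c != chords[-1]] or options
        chords.append(options[0])
    return chords

# C - E - D - C (pitch classes 0, 4, 2, 0)
print(harmonize([0, 4, 2, 0]))   # ['I', 'iii', 'ii', 'I']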

A number of teams and individuals contributed to the development of this line of investigation. Kemal Ebcioğlu (1992, pp.295–333) produced a complex system of rules and heuristics that allowed a computer to harmonize chorale melodies in the style of Bach. Baroni and his co-workers (1984) produced rule-based grammars by means of which a computer could generate passable imitations of Lutheran chorale melodies and Legrenzi arias. Researchers such as Steedman (1984–5) and Giomi and Ligabue (1986) developed generative grammars that could model jazz improvisation. Longuet-Higgins and his team (1976, 1983–4, 1989) computed rhythmic and metrical descriptions from sequences of pulses, as if to model human perceptions of these phenomena. James Kippen and Bernard Bel sought to discover a rule-system underlying improvised variations in North Indian drumming: first by analysing the performances of master drummers to derive a grammar, second by computing new variations in accordance with the grammar, and finally by submitting these computer compositions to expert appraisal in order to fine-tune the grammatical rules.

Many others contributed to the development of associated concepts and techniques, but not all who worked on projects of this kind found it necessary to reach the stage at which their theoretical designs were implemented as working computer systems. Some researchers published descriptions of projects planned or in progress; for others, the stimulus of the computer as a metaphor for human thought processes, open to question though this might be, was sufficient to guide them towards highly developed formal descriptions that were to all intents and purposes an end in themselves. The most complex of such systems (e.g. Laske, 1986; Leman, 1995) constituted detailed structural descriptions of the knowledge that was presumed to underpin musical styles or activities.

If the properly psychological claims of such work were at best debatable, something of the converse applied in work that made use of ‘artificial neural networks’, since these were held explicitly to model the physical workings of the human brain. This ‘connectionist’ technique was regarded as a breakthrough in mainstream AI research when it was formulated in the late 1980s in the wake of debates about whether knowledge resides principally in rules or in procedures. Whereas a rule-based system requires the basis of a computer's decision-making to be made manifest, and normally to be specified down to the last detail by the human researcher, a neural network is set up in an open-ended fashion and ‘trained’ to accomplish the task of generating appropriate outputs from specific stimuli, during which process the network organizes itself in ways that are not fully specified by the investigator.
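
The contrast with rule-based systems can be seen even in a toy training loop. The numpy sketch below teaches a one-hidden-layer network to label pitch-class vectors as major or minor triads without ever being told what a triad is; it illustrates the general procedure only and corresponds to none of the published models cited here.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def triad(root, quality):
    """12-element pitch-class vector for a major or minor triad."""
    v = np.zeros(12)
    third = 4 if quality == "major" else 3
    v[[root % 12, (root + third) % 12, (root + 7) % 12]] = 1.0
    return v

X = np.array([triad(r, q) for r in range(12) for q in ("major", "minor")])
y = np.array([1.0 if q == "major" else 0.0 for r in range(12) for q in ("major", "minor")])

W1 = rng.normal(scale=0.5, size=(12, 8)); b1 = np.zeros(8)     # input -> hidden
W2 = rng.normal(scale=0.5, size=8);       b2 = 0.0             # hidden -> output

for _ in range(5000):                        # plain gradient descent on cross-entropy
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    W2 -= 0.1 * h.T @ err / len(X);  b2 -= 0.1 * err.mean()
    dh = np.outer(err, W2) * h * (1 - h)
    W1 -= 0.1 * X.T @ dh / len(X);   b1 -= 0.1 * dh.mean(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("training accuracy:", (pred.round() == y).mean())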

The procedures deduced by a trained neural network are amenable to forensic scrutiny and are typically found to be analogous to formal rules proposed by humans, albeit with serendipitous features that allow networks to react sensibly to unforeseen stimuli and to behave with some of the vicissitudes of human thought. Bharucha and his co-workers (1987–8, 1989) used self-organizing networks to model the cognition of the scales and simple chords of both Western and Indian music, while Gjerdingen (1989–90) developed a network capable of recognizing a wide range of musical events in the early keyboard sonatas of Mozart. Peter Desain and Henkjan Honing (1992) worked within a broadly connectionist ethos to develop systems for the investigation of metric and rhythmic cognition that could accommodate variable expressive nuances of timing rather than relying on undifferentiated symbolic pulses.

AI research in music seems bound largely to follow the trends of its parent discipline rather than to pursue an agenda set by the broader musical community. But as technology progresses, computers and human beings are likely to become more equal as partners in the composition and performance of music, even beyond the leading centres of research and development. This being so, the nature and plausibility of artificial intelligence will come to assume an even greater significance.

See also Hearing and psychoacoustics and Psychology of music.

Computers and music

BIBLIOGRAPHY

and other resources

general

electronic databases

composition and synthesis

music theory and analysis

historical research

music education

psychology research

Computers and music: Bibliography

general

M.V. Mathews and others: The Technology of Computer Music (Cambridge, MA, 1969)

J. Reichardt, ed.: Cybernetic Serendipity: the Computer and the Arts (New York, 1969)

H.B. Lincoln, ed.: The Computer and Music (Ithaca, NY, 1970)

J. Watkinson: The Art of Digital Audio (Stoneham, MA, 1989)

H. Schaffrath, ed.: Computer in der Musik: über den Einsatz in Wissenschaft, Komposition, und Pädagogik (Stuttgart, 1991)

D.S. Davis: Computer Applications in Music: a Bibliography (Madison, WI, 1988); suppl.1, vol.i (Madison, WI, 1992)

P. Desain and H. Honing: Music, Mind and Machine (Amsterdam, 1992)

A. Marsden and A. Pople, eds.: Computer Representations and Models in Music (London, 1992)

J.H. Paynter and others, eds.: Companion to Contemporary Musical Thought (London, 1992)

H. Kupper: Computer und Musik: mathematische Grundlagen und technische Möglichkeiten (Mannheim, 1994)

C. Roads and others: The Computer Music Tutorial (Cambridge, MA, 1996)

E. Selfridge-Field, ed.: Beyond MIDI: the Handbook of Musical Codes (Cambridge, MA, 1997)

R.L. Wick: Electronic and Computer Music: an Annotated Bibliography (Westport, CT, 1997)

Computers and music: Bibliography

electronic databases

Some items on CD-ROMs may also be available via the World Wide Web; for copyright reasons others are available only to single users via a fixed medium. Web addresses are subject to change.

Renaissance Liturgical Imprints: a Census [RELICS] (Ann Arbor, U. of Michigan, 1983; D. Crawford) [books printed between 1450 and 1600; 〈www-personal.umich.edu/~davidcr〉]

J.W. Hill and T. Ward: Two Relational Databases for Finding Text Paraphrases in Musicological Research’, Computers and the Humanities, xxiii/4 (1989), 105–11

S. Pinegar: Thema’, Computing in Musicology, viii (1992), 11–18 [database of scribal information concerning 13th-century music theory treatises; 〈www.uga.edu/~thema〉]

E. Selfridge-Field: Using Networks in Musical Research’, Computing in Musicology, viii (1992), 33–54

J. Stinson: The SCRIBE Database’, Computing in Musicology, viii (1992), 65 [database of more than 6000 encoded pieces of music from the Middle Ages and Renaissance; 〈adu1.adu.latrobe.edu.au/Music/Scribe.html〉]

Thesaurus Linguae Graecae, version D (Irvine, U. of California, 1992) [Gk texts on all subjects through the 6th century ce; suppl. Canon of Greek Authors and Works, ed. L. Berkowitz and K.A. Squitier, New York, 3/1990]

Packard Humanities Institute CD-ROM 5.3 (Los Altos, CA, Packard Humanities Institute, 1993) [classical writings, chiefly Lat., and biblical texts in ancient languages up to 200 ce]

H. Schaffrath: Einhundert chinesische Volkslieder: eine Anthologie (Berne, 1993)

Thesaurus Musicarum Latinarum (Bloomington, Indiana U., School of Music, 1993; T.J. Mathiesen) [Lat. music theory, 4th–16th centuries; suppl. Canon of Data Files, 1995–; incl. R. Steiner's CANTUS database of Gregorian chant indexes]

P. Elliott: Beethoven Bibliography Online’, Computing in Musicology, ix (1993–4), 51–2

T.J. Mathiesen: Transmitting Text and Graphics in Online Databases: the Thesaurus Musicarum Latinarum Model’, Computing in Musicology, ix (1993–4), 33–48

L.A. Rothfarb: Music Theory Online’, Computing in Musicology, ix (1993–4), 54–7 〈boethius.music.ucsb.edu/mto〉

H. Schaffrath: The EsAC Electronic Songbooks’, Computing in Musicology, ix (1993–4), 77–8

E. Selfridge-Field: The MuseData Universe: a System of Musical Information’, Computing in Musicology, ix (1993–4), 11–30 [electronic scores for standard repertory of the 18th and 19th centuries; 〈www.ccarh.org/databases/musedata〉]

A. Hughes: Late Medieval Liturgical Offices: Resources for Electronic Research (Toronto, 1994) [incl. machine-readable indexes]

The Essen Data Package, version 1.0 [in EsAC format; H. Schaffrath] and The Essen Folksong Collection in the Humdrum Kern Format [D. Huron] (Menlo Park, CA, Center for Computer Assisted Research in the Humanities, 1995) [6000 folksongs from Ger. speaking lands with title indexes and other reference material]

R.F. Judd: Tools for Musical Scholarship on the World-Wide Web’, Computing in Musicology, x (1996), 79–102

F. Wiering: Italian Music Treatises on CD-ROM’, Computing in Musicology, x (1996), 183–8

Thesaurus Musicarum Italicarum, i: Gioseffo Zarlino: Music Treatises (Utrecht U., Department of Computers and Humanities, 1997; F. Wiering) [multimedia CD-ROM with full searchable text of all Zarlino's writings]

Computers and music: Bibliography

composition and synthesis

L.A. Hiller jr and L.M. Isaacson: Experimental Music: Composition with an Electronic Computer (New York, 1959)

L.A. Hiller and R.A. Baker: Computer Cantata: a Study in Compositional Method’, PNM, iii/1 (1964–5), 62–90

C. Roads, ed.: Composers and the Computer (Los Altos, CA, 1985)

C. Roads and J. Strawn, eds.: Foundations of Computer Music (Cambridge, MA, 1985)

S. Emmerson, ed.: The Language of Electroacoustic Music (New York, 1986)

J.B. Barrière: Computer Music as Cognitive Approach: Simulation, Timbre, and Formal Processes’, Contemporary Music Review, iv (1989), 117–30

R. Moog and T. Rhea: Evolution of the Keyboard Interface: the Bösendorfer 290 SE Recording Piano and the Moog Multiply-Touch-Sensitive Keyboards’, Computer Music Journal, xiv/2 (1990), 52–61

D. Huron: The Humdrum Toolkit: UNIX-based Software Tools for Music Representation and Processing (Waterloo, 1991, 2/1994)

D.H. Keefe: Physical Modelling of Wind Instruments’, Computer Music Journal, xvi/2 (1992), 57–73

J. Pressing: Synthesizer Performance and Real-Time Techniques (Madison, WI, 1992)

J. Rothstein: MIDI: a Comprehensive Introduction (Madison, WI, 1992, 2/1995)

R. Rowe: Interactive Music Systems: Machine Listening and Composing (Cambridge, MA, 1992)

J. Woodhouse: Physical Modeling of Bowed Strings’, Computer Music Journal, xvi/4 (1992), 43–56

I. Xenakis: Formalized Music: Thought and Mathematics in Composition (Stuyvesant, NY, 1992)

G. Haus, ed.: Music Processing (Madison, WI, 1993)

P.D. Manning: Electronic and Computer Music (Oxford, 2/1993)

J.S. Smith: Efficient Synthesis of Stringed Musical Instruments’, International Computer Music Conference: San Francisco 1993, 64–73

M.L. Bauman: Early Computer Sound and Music Synthesis at Bell Telephone Laboratories (diss., U. of Rochester, 1995)

C. Roads and J. Strawn, eds: The Computer Music Tutorial (Cambridge, MA, 1996)

J. Chadabe: Electric Sound: the Past and Promise of Electronic Music (Upper Saddle River, NJ, 1997)

C. Dodge and T.A. Jerse: Computer Music: Synthesis, Composition and Performance (New York, 2/1997)

C. Roads, ed.: Music Signal Processing (Lisse, 1997)

E.R. Miranda: Computer Sound Synthesis for the Electronic Musician (Oxford, 1998)

T. Winkler: Composing Interactive Music: Techniques and Ideas using MAX (Cambridge, MA, 1998)

R.C. Boulanger, ed.: The Csound Book: Perspectives in Software Synthesis, Sound Design, Signal Processing and Programming (Cambridge, MA, 1999)


For further bibliography see Electro-acoustic music.

Computers and music: Bibliography

music theory and analysis

M. Babbitt: The Use of Computers in Musicological Research’, PNM, iii/2 (1964–5), 74–83

A. Forte: A Program for the Analytical Reading of Scores’, JMT, x (1966), 330–64

R.F. Erickson: Music Analysis and the Computer’, JMT, xii (1968), 240–63

T. Winograd: Linguistics and the Computer Analysis of Tonal Harmony’, JMT, xii (1968), 2–49

N. Böker-Heil: Plotting Conventional Music Notation’, JMT, xvi (1972), 72–101

S.W. Smoliar: Process Structuring and Music Theory’, JMT, xviii (1974), 308–37

R.F. Erickson: DARMS: a Reference Manual (New York, 1976)

I. Bent and J. Morehen: Computers in the Analysis of Music’, PRMA, civ (1977–8), 30–46

A. Bertoni and others: A Mathematical Model for Analyzing and Structuring Musical Texts’, Interface, vii (1978), 31–43

S.W. Smoliar: A Computer Aid for Schenkerian Analysis’, ACM National Conference (New York, 1979), 110–59

M. Ellis: Linear Aspects of the Fugues of J.S. Bach's ‘The Well-Tempered Clavier’: a Quantitative Survey (diss., U. of Nottingham, 1980)

H.S. Powers: Language Models and Computer Applications’, EthM, xxiv (1980), 1–60

D.A. Byrd: Music Notation by Computer (diss., Indiana U., 1984)

J. Kippen and B. Bel: Linguistic Study of Rhythm: Computer Models on Tabla Language’, International Society for Traditional Arts Research: Newsletter, ii (1984), 28–33

A.R. Brinkman: Representing Musical Scores for Computer Analysis’, JMT, xxx (1986), 225–75

H. Charnassé and B. Stépien: Automatic Transcription of Sixteenth Century Musical Notations’, Computers and the Humanities, xx (1986), 179–90

S.T. Pope: Music Notations and the Representation of Musical Structure and Knowledge’, PNM, xxiv (1986), 156–89

A. Forte: New Approaches to the Linear Analysis of Music’, JAMS, xli (1988), 315–48

K. Vaughn: The Music Mapper: a Computer Application for Performance Based Interpretation of Cultural Variance in digitized Patterns of Melody and Rhythm (diss., UCLA, 1988)

E. Selfridge-Field, ed.: Software for Theory and Analysis’, Computing in Musicology, vi (1990), 112–24

B. Jesser: Interaktive Melodieanalyse: Methodik und Anwendung computer-gestützter Analyseverfahren in Musikethnologie und Volksliedforschung: typologische Untersuchung der Balladensammlung des DVA (Berne, 1991)

F. Chin and S. Wu: An Efficient Algorithm for Rhythm-Finding’, Computer Music Journal, xvi/2 (1992), 35–50

A. Marsden and A. Pople, eds.: Computer Representations and Models in Music (London, 1992)

T.A. Nord: Toward Theoretical Verification: Developing a Computer Model of Lerdahl and Jackendoff's Generative Theory of Tonal Music (diss., U. of Wisconsin, 1992)

R.B. Dannenberg: Musical Representation Issues, Techniques, and Systems’, Computer Music Journal, xvii/3 (1993), 20–30

B. Pennycook and others: Toward a Computer Model of a Jazz Improviser’, International Computer Music Conference: San Francisco 1993, 228–31

H. Schaffrath: Musikalische Analyse und Wissenschaftssprache’, Musikometrika, v (1993), 91–105

S.M. Schwanauer and D.A. Levitt, eds.: Machine Models of Music (Cambridge, MA, 1993)

E. Selfridge-Field: Music Analysis by Computer: Approaches and Issues’, Music Processing, ed. G. Haus (Madison, WI, 1993), 3–24

P. Castine: Set Theory Objects: Abstractions for Computer-Aided Analysis and Composition of Serial and Atonal Music (New York, 1994)

D.K. Simonton: Computer Content Analysis of Melodic Structure: Classical Composers and their Compositions’, Psychology of Music, i (1994), 31–43

D.H. Cope: Experiments in Musical Intelligence (Madison, WI, 1996)

Computing in Musicology, x (1996) [incl. articles by I. Braus, W.B. Hewlett, A. Kornstädt, D.S. Ó Maidín, G. Mazzola, O. Zahorka and T. Noll, J. Rhodes, W.F. Thompson and M. Stainton]

W.B. Hewlett and E. Selfridge-Field, eds.: Melodic Similarity: Concepts, Procedures, and Applications’, Computing in Musicology, x (1998) [incl. articles by D. Bainbridge, D. Cope, T. Crawford, C.S. Iliopoulos and R. Raman, C. Cronin, D. Hoernel, J. Howard, A. Kornstädt, N. Nettheim, H. Schaffrath and E. Dahlig, E. Selfridge-Field, M. Yako]

Computers and music: Bibliography

historical research

A. Lomax: Folk Song Style and Culture (Washington DC, 1968)

R. Kluge: Faktorenanalytische Typenbestimmung an Volksmelodien: Versuch einer typologischen Ordnung altmarkischer Melodien – Sammlung Parisius, Stockman u.a. – mit Hilfe eines Rechenautomaten ZRA 1 (diss., Humboldt U., 1969)

M. Baroni and C. Jacoboni: Proposal for a Grammar of Melody: the Bach Chorales (Montreal, 1978)

G. Bowles: The Computer-Generated Thematic Catalogue: an Index to the Pieces of Marin Marais (diss., Stanford U., 1978)

N. Böker-Heil, H. Heckmann and I. Kindermann, eds.: Das Tenorlied: mehrstimmige Lieder in Deutschen Quellen 1450–1580, i: Drucke (Kassel, 1979)

A.R. Brinkman: The Melodic Process in Johann Sebastian Bach's Orgelbüchlein’, Music Theory Spectrum, ii (1980), 46–77

L. Trowbridge: The Fifteenth-Century French Chanson: a Computer-Aided Study of Styles and Style Change (diss., U. of Illinois, 1982)

M. Chen: Toward a Grammar of Singing: Tune-Text Association in Gregorian Chant’, Music Perception, i (1983), 84–122

J.M. Bevil: Centonization and Concordance in the American Southern Uplands Folksong Melody: a Study of the Musical Generative and Transmittive Processes of an Oral Tradition (diss., U. of North Texas, 1984)

L.P. Grijp: Voetenbank: een methode om melodieën te zoeken’ [Footbank: a method of finding melodies by text association], TVNM, xxxiv/1 (1984), 26–48

R. Vendome: The Calculation and Evaluation of Keyboard Temperaments by Computer’, International Computer Music Conference: Paris 1984, 227–42

A. Hughes: Memory and the Composition of Late Medieval Office Chant: Antiphons’, L'enseignement de la musique au Moyen-Age et à la Renaissance: Royaumont 1985, 53–72

H.S. Powers: Tonal Types and Modal Categories in Renaissance Polyphony’, JAMS, xxxiv (1981), 428–70

J.K. Williams: A Method for the Computer-Aided Analysis of Jazz Melodies in the Small Dimensions’, Annual Review of Jazz Studies, iii (1985), 41–70

L. Trowbridge: Style Change in the Fifteenth-Century Chanson’, JM, iv (1985–6), 146–70

K. Ebcioğlu: An Expert System for Harmonization of Chorales in the Style of J.S. Bach (Buffalo, NY, 1986)

N. Cook: Structure and Performance Timing in Bach's C-major Prelude (WTC I): an Empirical Study’, MAn, vi (1987), 257–72

M. Leppig: Musikuntersuchungen im Rechenautomaten’, Musica, ii (1987), 140–50

E. Lubej: Quantitative Methoden in der vergleichend-systematischen Musikwissenschaft: automatische Intonationsanalyse über die multimodale Verteilung und deren statistisch-geographische Auswertung am Beispiel der Gesänge der Tenores aus Sardinien’, Musicologica austriaca, vii (1987), 129–55

J. LaRue: A Catalogue of Eighteenth-Century Symphonies, i: Thematic Identifier (Bloomington, IN, 1988)

H.B. Lincoln: The Madrigal and Related Repertories: Indexes to Printed Collections, 1500–1600 (New Haven, CT, 1988)

S.D. Page: Computer Tools for Music Information Retrieval (diss., U. of Oxford, 1988)

A.B. Wenk: Parsing Debussy: Proposal for a Grammar of his Melodic Practice’, Musikometrika, i (1988), 237–56

B. Alphonce: Computer Applications: Analysis and Modeling’, Music Theory Spectrum, xi/1 (1989), 49–59

M.V. Mathews and J. Pierce, eds.: Current Directions in Computer Music Research (Cambridge, MA, 1989)

M. Baroni and L. Callegari: Analysis of a Repertoire of Eighteenth-Century French Chansons’, Musikometrika, ii (1990), 197–240

A.R. Brinkman: Pascal Programming for Music Research (Chicago, 1990)

D. Halperin: A Segmentation Algorithm and its Application to Medieval Monophonic Music’, Musikometrika, ii (1990), 107–19

G. Mazzola: Geometrie der Töne (Basle, 1990)

S.W. Yi: A Theory of Melodic Contour as Pitch-Time Interaction: the Linguistic Modeling and Statistical Analysis of Vocal Melodies in Selected Lied Collections of Schubert and Schumann (Ann Arbor, MI, 1990)

D. Cope: The Computer and Musical Style (Madison, WI, 1991)

W.B. Hewlett and E. Selfridge-Field: Computing in Musicology, 1966–1991’, Computers and the Humanities, xxv (1991), 381–92

H.M. Binford-Walsh: The Melodic Grammar of Aquitanian Tropes (diss., Stanford U., 1992)

D. Charlton: Opéra-Comique and the Computer’, Grétry et l'Europe de l'opéra-comique, ed. P. Vendrix (Liège, 1992), 367–78

A. Núñez: Informática y electrónica musical (Madrid, 1992)

H. Lincoln: The Latin Motet: Indexes to Printed Collections, 1500–1600 (Ottawa, 1993)

G. Mazzola and O. Zahorka: Tempo Curves Revisited – Hierarchies of Performance Fields’, Computer Music Journal, xv/2 (1993)

U. Pape and A. Schirge: Die Orgeln des Kirchenkreises Belzig-Niemegk: ein Beispiel für die computergestützte Dokumentation von Orgeln’, Konservierung und Restaurierung historischer Orgeln in den neuen Bundesländern (Berlin, 1993), 57ff

J.A. Bowen: A Computer-Aided Study of Conducting’, Computing in Musicology, ix (1994), 93–103

I. Braus: Retracing One's Steps: an Overview of Pitch Circularity and Shepard Tones in European Music, 1550–1990’, Music Perception, xii (1994–5), 323–51

U. Berggren: ARS Combinatoria: Algorithmic Construction of Sonata Movements by means of Building Blocks derived from W.A. Mozart's Piano Sonatas (Uppsala, 1995)

S.A. Sanford: A Comparison of French and Italian Singing in the Seventeenth Century’, Journal of Seventeenth-Century Music, i/1 (1995) 〈www.sscm.harvard.edu/jscm〉

D. Cope: Experiments in Musical Intelligence (Madison, WI, 1996)

H. Gottschewski: Die Interpretation als Kunstwerk: musikalische Zeitgestaltung und ihre Analyse im Beispiel von Welte-Mignon Klavieraufnahmen aus dem Jahre 1905 (Laaber, 1996)

J.-D. Humair: Performance of Musical Rhythm: an Analysis of Real Polyphonic Examples’, Computing in Musicology, x (1996), 92–193

D. Huron: The Melodic Arch in Western Folksongs’, Computing in Musicology, x (1996), 3–23

A.C. Lehmann and others: VIBAFIN: a Tool for Capturing Fingerings in Piano Performance’, Computing in Musicology, x (1996), 155–62

K.N. Moll: Vertical Sonorities in Renaissance Polyphony: a Music-Analytic Application of Spreadsheet Software’, Computing in Musicology, x (1996), 59–77

E. Selfridge-Field: Bach in the Age of Technology’, Neue Musiktechnologie, ii: Vorträge und Berichte vom KlangArt-Kongress 1993, ed. B. Enders (Mainz, 1997), 133–47

S.A. Reily: The Ethnographic Enterprise: Venda Girls' Initiation Schools Revisited’, British Journal of Ethnomusicology, vii (1998), 45–68

Computers and music: Bibliography

music education

R.L. Allvin: The Development of a Computer-Assisted Music Instruction System to Teach Sight Singing and Ear Training (diss., Stanford U., 1967)

J. Bamberger: Learning to Think Musically: a Computer Approach to Music Study’, Music Educators Journal, lix (1973), 53–7

W.E. Kuhn: Computer-Assisted Instruction in Music: Drill and Practice in Dictation’, College Music Symposium, xiv (1974), 89–101

M.A. Arenson: A Model for the First Steps in the Development of Computer-Assisted Instruction Materials in Music Theory (diss., Ohio State U., 1976)

F. Hofstetter: GUIDO: an Interactive Computer-Based System for Improvement of Instruction and Research in Ear-Training’, Journal of Computer-Based Instruction, i (1974–5), 100–06

G.E. Wittlich: Developments in Computer-Based Music Instruction and Research at Indiana University’, Journal of Computer-Based Instruction, vi/3 (1979–80), 62–71

J.M. Eddins: A Brief History of Computer-Assisted Instruction in Music’, College Music Symposium, xxi/2 (1981), 7–14

J.A. Taylor: The MEDICI Melodic Dictation Computer Program: its Design, Management, and Effectiveness as Compared to Classroom Melodic Dictation’, Journal of Computer-Based Instruction, ix (1982–3), 64–73

A.K. Blombach: OSU's GAMUT: Semi-Intelligent Computer-Assisted Music Ear Training’, Sixth International Conference on Computers and the Humanities (Rockville, MD, 1983), 14–15

R.N. Killam: An Effective Computer-Assisted Learning Environment for Aural Skill Development’, Music Theory Spectrum, vi (1984), 52–62

S.R. Newcomb: LASSO: an Intelligent Computer-Based Tutorial in Sixteenth- Century Counterpoint’, Computer Music Journal, ix/4 (1985), 49–61

B.K. Bartle: Computer Software in Music and Music Education: a Guide (Metuchen, NJ, 1987)

R. Nelson and C.J. Christensen: Foundations of Music: a Computer-Assisted Introduction (Belmont, CA, 1987)

J.W. Schaffer: Developing an Intelligent Music Tutorial: an Investigation of Expert Systems and their Potential for Microcomputer-Based Instruction in Music Theory (diss., Indiana U., 1987)

B.B. Campbell: Music Theory Software for the Macintosh’, Journal of Music Theory Pedagogy, ii (1988), 133–62

Musicus: Computer Applications in Music Education (1989–)

L. Landy: Musicology and Computing Science – a New Major at the University of Amsterdam: Problems and Solutions Involved in its Foundation’, Musicus, i (1989), 9–14

J. Rahn and others: Using the “LISP Kernel” Musical Environment’, Musicus, i (1989), 144–63

F. Wiering: Computertoepassingen in de muziekwetenschap (Utrecht, 1989)

G. Wittlich: Computer Applications: Pedagogy’, Music Theory Spectrum, xi/1 (1989), 60–65

G.G. Blount: A Music Curriculum for the Twenty-First Century’, Computers in Music Research, ii (1990), 121–44

J. London: Music Notation Programs in the Theory Classroom and in Research’, Computers in Music Research, ii (1990), 145–70

J.W. Schaffer: A Computer-Aided Approach to Better Student Comprehension of Tonal Melodic Hierarchies’, Musicus, ii (1990), 39–50

M.J. Lorek: Computer Analysis of Vocal Input: a Program that Simulates College Faculty Sight-Singing Evaluation’, Computers in Music Research, iii (1991), 121–38

A. Pople: Computer Music and Computer-based Musicology’, Computers in Education, xix (1992), 173–82

M. Clarke and S. Hunter: Educating the Next Generation: Integrating Technological Skills with Artistic Creativity in Computer Music Courses in Higher Education’, Musicus, iv (1995), 47–52

C. Duffy, S. Arnold and F. Henderson: NetSem: Electrifying Undergraduate Seminars’, Active Learning, ii (1995), 42–8

L. Peterson: Music Literature Instruction and Multimedia: a Delaware Perspective’, Musicus, iv (1995), 53–60

Computers and music: Bibliography

psychology research

R.N. Shepard: Circularity of Judgements of Relative Pitch’, JASA, xxxvi (1964), 2346–53

T. Winograd: Linguistics and the Computer Analysis of Tonal Harmony’, JMT, xii (1968), 2–49

J.-C. Risset: Pitch Paradoxes Demonstrated with Computer-Synthesized Sounds (Murray Hill, NJ, 1970)

S.W. Smoliar: A Parallel Processing Model of Musical Structures (diss., MIT, 1971)

H.C. Longuet-Higgins: The Perception of Melodies’, Nature (1976), no.263, pp.646–53

M. Minsky: Music, Mind, and Meaning (Cambridge, MA, 1981)

H.C. Longuet-Higgins and C.S. Lee: The Rhythmic Interpretation of Monophonic Music’, Music Perception, i (1983–4), 424–41

M. Baroni and L. Callegari, eds.: Musical Grammars and Computer Analysis (Florence, 1984)

E.F. Clarke: Some Aspects of Rhythm and Expression in Performances of Erik Satie's “Gnossienne No.5”’, Music Perception, ii (1984–5), 299–328

M.J. Steedman: A Generative Grammar for Jazz Chord Sequences’, Music Perception, ii (1984–5), 52–77

N. Todd: A Model of Expressive Timing in Piano Music’, Music Perception, iii (1985–6), 33–58

O.E. Laske: Toward a Computational Theory of Music Listening’, Reason, Emotion, and Music, ed. L. Apostel, H. Sabbe and F. Vandamme (Ghent, 1986), 363–92

M. Ligabue: A System of Rules for Computer Improvisation’, International Computer Music Conference: The Hague 1986

S.T. Pope: Music Notations and the Representation of Musical Structure and Knowledge’, PNM, xxiv (1986), 156–89

J.J. Bharucha: Music Cognition and Perceptual Facilitation: a Connectionist Framework’, Music Perception, v (1987–8), 1–30

O.E. Laske: Introduction to Cognitive Musicology’, Computer Music Journal, xii/1 (1988), 43–57

J. Sloboda, ed.: Generative Processes in Music: The Psychology of Performance, Improvisation and Composition (Oxford, 1988)

J.J. Bharucha and K.L. Olney: Tonal Cognition, Artificial Intelligence and Neural Nets’, Contemporary Music Review, iv (1989), 341–56

J.J. Bharucha and P. Todd: Modeling the Perception of Tonal Structure with Neural Nets’, Computer Music Journal, xiii/3 (1989), 44–53

M. Leman, ed.: Models of Musical Communication and Cognition’, Interface, xviii/1–2 (1989)

H.C. Longuet-Higgins and E. Lisle: Modelling Musical Cognition’, Contemporary Music Review, iii (1989), 15–27

H. Sano and B.K. Jenkins: A Neural Net Model for Pitch Perception’, Computer Music Journal, xiii/3 (1989), 41–8

R.O. Gjerdingen: Categorization of Musical Patterns by Self-Organizing Neuronlike Networks’, Music Perception, vii (1989–90), 339–70

M. Leman: Emergent Properties of Tonality Functions by Self-Organization’, Interface, xix (1990), 85–106

S.W. Smoliar: Lewin's Model of Musical Perception Reflected by Artificial Intelligence’, Computers in Music Research, ii (1990), 1–37

P. Todd and G. Loy, eds.: Music and Connectionism (Cambridge, MA, 1991)

M. Balaban, K. Ebcioğlu and O. Laske, eds.: Understanding Music with AI: Perspectives on Music Cognition (Cambridge, MA, 1992)

B. Bel: Modelling Improvisational and Compositional Process’, Languages of Design, i (1992), 11–26

D. Rosenthal: Emulation of Human Rhythm Perception’, Computer Music Journal, xvi/1 (1992), 64–76

E. Clarke: Expression in Performance: Generativity, Perception and Semiosis’, The Practice of Performance: Studies in Musical Interpretation, ed. J. Rink (Cambridge, 1995), 21–54

M. Leman: Music and Schema Theory: Cognitive Foundations of Systematic Musicology (Berlin, 1995)