
Report on the IRCAM Conference: The Composer and the Computer

C. Roads

Computer Music Journal, Vol. 5, No. 3 (Autumn 1981), pp. 7-27. Published by The MIT Press. © 1981 Massachusetts Institute of Technology.

Introduction

The Institut de Recherche et Coordination Acoustique/Musique (IRCAM) conference on "The Composer and the Computer" was held 17-21 February 1981 in Paris (Fig. 1). The conference tied together a number of activities, including lectures, symposia, concerts, a public debate, excursions to other studios, and sundry informal get-togethers. The focus of the conference was set by several questions posed at the start:

1. What, if anything, is specific and unique about work with the computer and how does this influence our perception of the world?

2. How do these characteristics influence our thoughts about music and about musical composition?

3. How has each composer's work changed thus far because of contact with this machine?

4. What are the most fruitful research goals that would lead eventually to a fuller, more creative use of the computer for musical composition?

5. What is our vision of the role that the com- puter should/could/will play in our musi- cal lives in the near future and in the more distant future?

These questions were interjected into discussions throughout the week.

Tuesday, 17 February

Lecture by Pierre Boulez: Material and Composition

Tod Machover of IRCAM was responsible for or- ganizing the conference, and he set the tone with his opening remarks. He then introduced the first lecturer, the Director of IRCAM, Pierre Boulez. For Boulez, computers are first of all an alternative to the limits (acoustic and grammatical) imposed by traditional instruments. Even so, computer systems are still difficult to adapt to musical purposes be- cause they were not, in general, designed for such uses. Boulez then distinguished between two op- posite types of computer music systems: (1) com- monly known, more general musical systems and (2) unique, specialized music systems. Each type has advantages and disadvantages and each presents a dilemma that is not new. He pointed out that common instruments like the violin can be played by many people at the cost of constraining the or- chestra to a certain repertoire of sounds. On the other hand, he said, more specialized instruments can produce unique sounds but they are often inaccessible.

According to Boulez, the machine has enabled a change in sound manipulation but the possibilities for the manipulation of compositional structure are at least as important and perhaps more so. In Boulez's vision of the computer as a compositional aid, demonstrated in several examples, the machine is a kind of musical calculator. The composer en- ters some musical material into the machine and specifies some transformation of that information. The computer then carries out symbolic computa- tions over collections of raw musical material, sav- ing the composer untold labor. In Boulez's scenario,


Fig. 1. Reception area, interior of IRCAM. (Photograph by Serge Korniloff, 12, avenue Reille, 75014 Paris.)

the composer retains complete control, like an en- gineer working on numerical calculations by machine.

In Boulez's view, the machine permits us to re- think musical material and its functionality. For ex- ample, we can specify a sequence of sounds and have the machine play it in a series of different tun- ings. We can explore sounds not possible with tra- ditional instruments, such as arbitrary forms of polyphony and monody in microtonal domains. He isolated three objects of musical research: text (the score), material (the orchestration), and performance.

Boulez then discussed factors of rhythmic organi- zation as an instance of the relation of composition to perception. He pointed out one product of the use of integral serial composition techniques in the 1950s: some compositions featured a different tempo and time signature for each measure. As a result, one's sense of the proportion between, say, quarter notes in successive measures was de- stroyed. What was perceived was only indirectly re- lated to what was notated. Perception tends also to ignore small variations in familiar objects-the known percept persists. For example, people tend to hear a rhythmic proportion of 5/11 as 5/10.

Boulez ended his lecture with an exposition of some musical predilections, emphasizing moments of transformation as aesthetic peaks. He expressed

an interest in the birth and death of musical pro- cesses, not the implied boredom of their intermedi- ate phases.

Dialogue

Tod Machover then engaged Boulez in a dialogue. How is composition affected by the participation of the computer? According to Boulez, composition is not directly changed by the introduction of the computer into the compositional environment. The computer merely embodies a kind of feedback to composers that composers already have in their mind. However, said Boulez, the machine can also be used as a high-speed calculator of musical forms.

David Wessel of IRCAM commented on certain "lifeless" computer performances, a situation that commonly occurs when a composer new to com- puter technology transcribes a traditional pitch- time score for digital realization. Wessel alluded to some phrasing algorithms recently developed by Dexter Morrill that make computer scores sound "livelier." (Apparently the well-known jazz critic and scholar Andre Hodeir has also been experi- menting with these algorithms.) For Wessel, in- tegrating a performance model into a score is important. Boulez responded by citing the highly individual nature of human performance. Even with computer music, Boulez prefers to retain aspects of an individual's performance style.

Marvin Minsky of the Massachusetts Institute of Technology (M.I.T.) then suggested that perhaps performance variables could be left to the listeners of computer music. Boulez countered that the lis- tener is not familiar with a piece and thus cannot be entrusted with performance control. Brian Fer- neyhough pointed out that, even so, the listener was expected to understand the music. This inter- action brought up one of the most important ques- tions that surfaced at the conference: What role is the listener/audience expected to play? Does the composer assume the listener to be a passive recep- tacle, or does the composer assume a more active listener?

To the three categories of research cited by Pierre Boulez (text, material, and performance), Marvin


Minsky added a fourth: the theoretical and cogni- tive realm. Minsky speculated on why music the- ory is not more advanced by suggesting that the problem with music theorists is that they generate papers (theories) only once every five years or so, when they should be concentrating on intelligent systems that can come up with a theory every five minutes! Minsky also said that the study of great masterworks would not advance any cognitive the- ory of music and that theorists might do better to study children's reactions to simple tunes and find out why they prefer some tunes to others.

Tuesday Evening Concert: Processing of Natural Sounds, Treatment of Form

The Tuesday evening concert, held in IRCAM's Espace de Projection (Fig. 2), featured four very different works. Each piece exhibited in some way the possibilities opened up by digital processing of microphone-collected (concrete) sound. Tod Machover introduced each piece if the composer was not present; otherwise the composer spoke. In either case a brief description was given of the technical means and aims of the work to be performed.

The opening piece of the conference was Marc Battier's Verbes comme cueillir. This four-channel tape composition is based on an enigmatic text by the composer. The original speech is obliterated by digital processing at times, leaving only very abstract voicelike articulations for highly synthetic sound material. The piece was produced using programs written for the PDP-11/60 at the Groupe de Recherches Musicales (GRM). This suite of programs allowed the composer to apply 30-50 second-order high-Q filters to the incoming acoustic signal and then mix their outputs at will. Other effects exploited by Battier include recursive delays and specialized reverberation.
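For readers unfamiliar with this kind of processing, the following Python sketch illustrates the general idea of a bank of second-order high-Q resonators whose outputs are mixed. The filter count, center frequencies, Q, and gains below are illustrative assumptions, not the parameters of the GRM programs.

# A minimal sketch of a resonator bank: one second-order peak filter per
# center frequency, applied to a source signal, with the outputs mixed.
import numpy as np
from scipy.signal import iirpeak, lfilter

def resonator_bank(signal, sample_rate, centre_freqs, q=40.0, gains=None):
    """Filter `signal` through one resonator per center frequency and
    return the weighted mix of the filter outputs."""
    if gains is None:
        gains = np.ones(len(centre_freqs))
    mix = np.zeros_like(signal)
    for f0, g in zip(centre_freqs, gains):
        b, a = iirpeak(f0, Q=q, fs=sample_rate)   # narrow bandpass at f0
        mix += g * lfilter(b, a, signal)
    return mix

# Example: 40 resonators spread logarithmically from 100 Hz to 8 kHz,
# applied to one second of white noise standing in for a voice recording.
fs = 44100
source = np.random.randn(fs)
freqs = np.geomspace(100, 8000, 40)
output = resonator_bank(source, fs, freqs)

Raising Q narrows each band, so the mix rings at the chosen resonances much as the filtered vocal material described above does.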

In Verbes comme cueillir Battier makes extensive use of distributed spatial techniques by isolating in- dividual lines in separate speakers. For example, in the beginning the voice of singer Jacqueline Sandra is heard speaking the text, while her processed voice is heard in another channel. This quickly boils into a churning blend of sibilants rushing

Fig. 2. Cutaway diagram of IRCAM, with the Espace de Projection on the left.


from speaker to speaker. Only a voicelike articula- tion remains on some resonating sounds, which fade in and out amidst the sibilance. Then, after a quick transition, only the highest register of filters is heard resonating.

At this point in the piece Battier makes use of an exaggerated whispering effect, as if the vocalist's mouth were being distended beyond its normal lim- its. Clusters of inharmonic glissandi weave in and out, only to be interrupted by an excited exclama- tion by the vocalist. The last three minutes of the work feature inharmonic clusters that fade in and out and from speaker to speaker, gradually thinning out in texture and exhibiting no trace of vocal articulation.

Having heard the two-channel version previously on another system, I was struck by how the struc- ture of the piece was clarified by IRCAM's superb four-channel sound-reinforcement system (eight JBL 4350 loudspeakers, a Neve mixing console, Ampex ATR-100 tape machines, and Studer ampli- fiers). This system was an enormous aid to follow- ing individual lines in many of the concert performances.

Paul Lansky's Six Fantasies on a Poem by Thomas Campion is familiar to readers of Com- puter Music Journal 4(4) since two of the fantasies were included in a soundsheet in that issue. This performance featured two different fantasies, the fifth and sixth of the set. The fifth, Her Ritual, is the most experimental of the group. It involves stretching the linear-predictive-coding technique


used throughout the six pieces far beyond its nor- mal limits. The techniques involved, explained by Lansky at the 1980 International Computer Music Conference in New York (Lansky 1981), include extending each word out through delays and elabo- rate spectral transformations, using computers at Princeton University. The violence of the fifth fan- tasy is quelled by the placid sixth, Her Self, a sim- ple reading of the original text backed by a syn- thetic chorus punctuating the discourse with sung chords at key moments. Technical and musical craftsmanship is evident in these two pieces, as it is throughout the set.

Next came Mike McNabb's Dreamsong. This work is an impressive demonstration of the Stan- ford audio processing software and its skillful appli- cation. Continuous transformation of one sound field into another is the dominant musical process in this work, which ends surprisingly with a quota- tion from Dylan Thomas. Dreamsong was well re- ceived by the audience.

Francois Bayle was present to introduce his Erospheres as an instance of natural sound processing using the GRM digital system. He explained and gave brief examples of different types of sound processing. Interestingly, the goal of such techniques, according to Bayle, is to extract the as-yet-unheard acoustic information resident in the natural sound. Using digital techniques, one can carve out sound objects that have been masked by others; one can throw these objects into relief on a new background or take minute instants of sound and elaborate them into large masses. Bayle demonstrated a technique whereby the sound was passed through 49 tuned digital filters, producing a wide-band resonant "ringing" of the sound. The source sound could also be passed through a very narrow-band notch filter, or various such filtrations could be mixed to form a rarefaction of the original sound. Different forms of reverberation can also be applied simultaneously, a technique that quickly builds a single sound into a very rich texture. Bayle also demonstrated a backward reverberation technique (no doubt inspired by the analog tape method) in which an indistinct reverberated sound is heard converging toward a particular sound object. A work based on such techniques, Erospheres features

an extended contrast between an ambience of natu- ral sounds and highly processed female vocal utter- ances. The piece is divided into three sections: Eros bleu, Eros rouge, and Eros noir.

Tuesday Night: Atelier IRCAM Concert with the Ensemble InterContemporain

The Tuesday night concert (following the evening concert) featured three premieres commissioned by IRCAM. The first piece, Are We? by Thorsteinn Hauksson, was written for members of the Ensemble InterContemporain (IRCAM's resident instrumental ensemble) and tape. For the tape part, Hauksson developed some composition programs that were interfaced to the sound synthesis software on the PDP-10 computer at IRCAM. Under the tutelage of Gerald Bennett, Hauksson's goal in writing the programs was the implementation of an "inharmonic functionality." This contrasts, for example, with the harmonic functionality of the earlier western tradition. In order to accomplish this goal, it was necessary to keep track of interrelations between individual harmonic partials of single sounds as well as the complex of spectral components obtained in mixing sounds.

While the idea of the piece is interesting, its re- alization is unfortunately not as intriguing. The mean amplitude of the tape part alone nearly satu- rated this listener's ears, and when this was com- bined with trumpets, trombones, and a bombastic percussion score, it became almost numbing. More variation in density throughout the piece might help the listener appreciate the spectral subtleties embedded in this work.

Jonathan Harvey's Mortuos Plango, Vivos Voco for tape was also realized with the PDP-10 at IRCAM. The main source material for the work is the digitized sound of the composer's son reading the text inscribed on the enormous bell at Winchester Cathedral: Horas avolantes numero mortuos plango; vivos ad preces voco (I count the hours slipping by; I cry for the dead, and I call the living to prayer). The sound of the bell itself is also employed, as are results of vocal synthesis algorithms developed by Gerald Bennett and Xavier Rodet at


IRCAM. Harvey has based both the pitch structure and the temporal structure of the work on the sound of the Winchester bell.

Mortuos Plango, Vivos Voco commences with a clattering cacophony of tolling bells, under which a boy's chanting voice is heard as the bells fade away. The drone chanting, sometimes shifting up an octave, continues until one large bell tone signals the introduction of a characteristic figure in the piece, an inharmonic motive derived from the bell's spectrum. The boy's voice is then heard again, this time cloned into multiple versions chanting on single tones at different metric rates.

Then the bell tolls, the tolling is sustained (artificially) and is interrupted by a fast sequence of vocal/electronic blips that move across the room. After a setting of various glissandi textures (derived from the bell's overtones) the piece reaches a moment of silence. The lone boy's voice enters and is joined by others in a pretty section devoted to harmonic development. The tolling bell returns, its partials weaving in and out of their normal balance in the bell's spectrum. After a rapid editing sequence, starting with the boy's voice and turning into components from the bell's sound, a single, loud ring is heard. However, for each of the 33 partials of the bell's spectrum, the boy's voice is substituted, resulting in the incredible effect of a chorus with the articulation and dense inharmonic spectrum of a bell.

The evocative final section of the work places the slowly tolling boy/bell against the regularity and mass of the original Winchester bell. Gradually, the attacks of the Winchester bell are smoothed off, leaving only a distant, throbbing bell-sound to mark the passage of time. I found this piece very effective.
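The additive principle behind the bell sounds described above can be sketched simply: a sum of exponentially decaying inharmonic partials. In the Python fragment below the partial table is invented for illustration and is not the measured spectrum of the Winchester bell; substituting a vocal source for each sine partial would, in principle, give the chorus-with-bell-articulation effect Harvey achieves.

# A minimal additive-synthesis sketch of a bell-like tone.
import numpy as np

def bell_tone(partials, duration, sample_rate=44100):
    """partials: list of (freq_hz, amplitude, decay_seconds) triples."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    tone = np.zeros_like(t)
    for freq, amp, decay in partials:
        tone += amp * np.exp(-t / decay) * np.sin(2 * np.pi * freq * t)
    return tone / max(abs(tone).max(), 1e-9)   # normalize

# Hypothetical inharmonic partial set (not the Winchester bell's data):
partials = [(130.0, 1.0, 4.0), (261.0, 0.8, 3.0), (317.0, 0.6, 2.5),
            (392.0, 0.5, 2.0), (523.0, 0.7, 1.5), (664.0, 0.3, 1.0)]
tone = bell_tone(partials, duration=6.0)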

Atemkristall (Third Letter) by Yves-Marie Pasquet was scored for soprano (Jane Manning), instrumental ensemble, and tape. The entire work is a setting of five poems by Paul Celan; only the third was presented in this concert. Apparently the composer has thoroughly dissected the poems, for the piece consists of an extended succession of spare vocal and instrumental utterances. Some momentarily interesting effects are achieved with voice simulation (using the Bennett/Rodet algorithms, as well as the complex frequency modulation [FM] model developed by John Chowning) and instrumental simulation (aided by David Wessel and Marc Battier at IRCAM).

Wednesday, 18 February

Visit to the Centre d'Etudes de Mathematiques et Automatiques Musicales

The first event of the day was a group field trip via the Metro to Issy-les-Moulineaux, on the outskirts of Paris, where the laboratory of composer Iannis Xenakis is situated. In recent years he has been ex- ploring interactive compositional methods that sig- nify a shift from the stochastic techniques with which he has long been associated. At his Centre d'Etudes de Mathematiques et Automatiques Musi- cales (CEMAMu), Xenakis and his associates (prin- cipally programmer/engineer Guy Medique and programmer Connie Colyer) have been developing a tool for composition that relies heavily on interac- tive computer graphics techniques.

Using a large, high-resolution (Tektronix) graphics tablet and a drawing device (which I will call the marker), the Unite Polyagogique Informatique du CEMAMu (UPIC) allows a composer to draw sound waveforms, amplitude envelopes, and entire scores, and to apply operations to them that can be rapidly built up into very complex sound structures. At the score level, the composer works with individual pages of the score, the length of which can vary up to 1 min. Within a page, the composer has available 1/10th-tone pitch precision and 1/6000-sec temporal resolution. Up to 34 pages of the score can be defined in the present configuration. Sound events can be sampled at either 26 or 38 kHz.

As a demonstration, Xenakis drew a continuous waveform on the tablet; after a moment the waveform was retraced by the computer on the graphics display screen. He next defined the waveform using breakpoint pairs; again the computer obediently calculated and retraced the waveform. A simple score was defined and then computed. After a short interim, the score was auditioned using the specified waveform.


Another feature of the system was that the composite waveform produced by the realization of the score could be examined on the display screen. If desired, it could be recycled back into one of the waveform buffers as a source signal. This capability extends to digitized concrete sound files, which can also be used as raw waveform material in the UPIC system. Waveform editing was then demonstrated, as was digital mixing of already realized scores. It is also possible to edit the script of a previous interactive session, allowing one to pick up where one has left off.
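As a rough illustration of the breakpoint idea, the following Python sketch turns a list of hand-specified breakpoint pairs into a single-cycle wavetable and reads it at a chosen pitch. The breakpoints, table size, and playback scheme are assumptions for illustration; they are not UPIC's internal representation.

# Breakpoint pairs (phase, amplitude) -> interpolated single-cycle table,
# then read as a wavetable oscillator.
import numpy as np

def breakpoints_to_table(breakpoints, table_size=4096):
    """breakpoints: list of (x, y) with x rising from 0.0 to 1.0."""
    xs, ys = zip(*breakpoints)
    phase = np.linspace(0.0, 1.0, table_size, endpoint=False)
    return np.interp(phase, xs, ys)

def play_table(table, freq, duration, sample_rate=44100):
    """Read the single-cycle table at the requested frequency."""
    n = int(duration * sample_rate)
    phase = (np.arange(n) * freq / sample_rate) % 1.0
    index = (phase * len(table)).astype(int)
    return table[index]

# Example: an asymmetric, hand-drawn-looking wave played at 220 Hz.
table = breakpoints_to_table([(0.0, 0.0), (0.1, 1.0), (0.5, -0.3),
                              (0.8, 0.6), (1.0, 0.0)])
signal = play_table(table, freq=220.0, duration=1.0)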

Xenakis played many examples of work done with the UPIC system, including excerpts from his intense and formidable work Mycenes-Alpha (1978). Selections composed by children, dancers, and some blind persons were also auditioned. One entertaining example was the sound generated by a map of Mexico sketched on the graphics tablet! In the crowded conditions of the laboratory, where the sound system was next to the computer system with its ambient noise, it was difficult to judge the musical merit of some of these studies. It appears that these inadequate research conditions have led to some of Xenakis's recent statements denouncing the imbalance of French musical research funding. Indeed, it would be remiss not to mention that Xenakis's planned presence at this conference was voluntarily curtailed after his public protest of these conditions in the French news media.

In any case, demonstrations by a wide variety of people on the UPIC indicate that the actual visual feedback provided by the graphics display is almost a luxury; virtually all user interaction consists of hand motions with the marker on the graphics tab- let. UPIC is basically a tactile and gestural input system that approaches sound much in the same way a sculptor might approach a particularly mal- leable material.

Seminar with Brian Ferneyhough and Stanley Haynes

Brian Ferneyhough is a composer of British origin who has been living and working in West Germany for several years. He is well known in Europe for

writing complicated scores that sometimes inten- tionally exceed the capabilities of performers at- tempting to realize them. This creates a tension that Ferneyhough and his following find interest- ing. He has recently begun a residency at IRCAM in which he is working with Stanley Haynes on realiz- ing ideas for a new composition for clarinet and synthesizer using the 4C Machine designed by G. diGiugno and the 4CED program written by Curtis Abbott. Ferneyhough has never before used a com- puter in any aspect of composition.

Ferneyhough began by discussing his new situa- tion face a l'ordinateur and his opinion of why computer music had seemed too undeveloped to ap- proach until recently. In Ferneyhough's view, com- puter-generated sound was "not yet ready" (a posi- tion with which some in the computer music field might take radical exception). He stated that only in the next few years will computer sound's poten- tial perhaps be proven.

Ferneyhough generates each piece he composes from a predefined "system." However, any particu- lar piece only indirectly reflects such a system. Fer- neyhough's goal is to imbue in the listener a cer- tain "sensibility" through an indirect perception of a work. Marvin Minsky felt, however, that Fer- neyhough's approach was too vague. In Minsky's view, a composer should be concerned with sharper ideas that correspond more directly to categories of musical cognition. This led to a debate, since Fer- neyhough insisted that more indirect methods were central to his compositional approach.

Stanley Haynes discussed some of the performance aspects of Ferneyhough's work and how the 4C Machine had been considered as a listening device in performance. He pointed out the computational limits of the 4C Machine for pitch-detection algorithms, which makes its use for this purpose not very attractive. Following up on the idea of novel instrumental performance techniques, Pierre Boulez suggested that computer systems could be used in conjunction with multiple-person instruments (e.g., a whole ensemble playing one complicated instrument). Haynes next talked about the approach taken by Morton Subotnick in his recent experience with the 4C Machine. Subotnick has been working with portable electronic music performance


Fig. 3. Designer's rendering of the Espace de Projection, a performance space with variable acoustic properties. The reverberation time can be changed from 0.5 sec to 4.5 sec, depending on the ceiling height and the acoustic prisms installed in the walls and ceiling.


systems for many years. According to Haynes, Subotnick adjusted very quickly to the 4C/4CED environment and was able to sort out the most realizable compositional ideas without much trouble. Thus he was able to achieve a great deal in just a few weeks of work.

Inevitably, the discussion returned to the subject of compositional philosophy. Patrick Greussay, a re- searcher in computer music and in artificial intelli- gence (AI) at the University of Paris (VIII), asked Ferneyhough if the system he had constructed was an intermediate stage or whether it was the ulti- mate functional goal. Ferneyhough replied that the system was an intermediate stage that ensured unity, the ultimate goal of a composition. Marvin Minsky disagreed with the whole notion of unity since in his view, musical understanding by neces- sity involves constructing diverse mental represen- tations. According to Minsky, most compositions

that seem "unified" achieve this effect through a kind of auditory/cognitive illusion: most of the "loose ends" have been deleted, lending the ap- pearance of unity.

After a general discussion of unity and diversity in compositions, Pierre Boulez again argued that thresholds of transition-moments of a perceived change of focus-were the important element in any composition. Then, citing examples from in- strumental music, he argued that there were no new perceptual thresholds introduced by the use of new sounds.

Wednesday Evening Concert: Compositional Algorithms

The Wednesday concert in the Espace de Projection (Fig. 3) began with a condensed description of the


work Çoğluotobüsişletmesi by Clarence Barlow. Barlow outlined a few of the compositional ideas that were built into the composing program for this piece. The work behind this composition has been detailed in a comprehensive document recently published by the Feedback Studio Verlag in Cologne (Barlow 1980).

The work is scored for piano, with every D, E, F#, and B of the instrument tuned down one quarter-tone. For this performance, the piano was computer synthesized from analyses of piano tones. With David Wessel's help, Barlow was able to achieve timbres that resembled piano tones but had smoother amplitude envelopes, resulting in an interesting pianolike color. The piece lasts 30 minutes, but Barlow presented only an excerpt. In form, the work starts with a single voice, joined eventually by three others that enter one by one. The organizational principles used in Çoğluotobüsişletmesi were described in a lecture on Saturday (see "Informal Lecture by Clarence Barlow").

Otto Laske's Terpsichore followed; this is a work derived from the output of G. M. Koenig's PROJ- ECT 1 score synthesis program at the Institute of Sonology in Utrecht. Laske took the PROJECT 1- generated score to the Structured Sound Synthesis Project (SSSP) in Toronto for sound realization. The numerical score was typed into the computer with the sced score editor. Related subscores were then produced that differed from the original in one or several parameters. Thus sced was used to elabo- rate a score which, according to Laske, contained important germinal ideas (Laske 1981). In particu- lar, the composer used such sced operations as mix, time-scale, splice, and prune, in producing variants of the germinal material. With sced, he was able systematically to extract and rearrange motives in the score as well as reorchestrate the motives in dif- ferent sections. After the completion of this work in Toronto, the piece was remixed in a professional recording studio, giving each of the sections dis- tinct equalization "tint" and reverberation/delay characteristics.

Laske's work is divided into three sections; only one was performed here. The piece is strongly pitch oriented and the rhythm often refers to an underly- ing meter. (It can change every few bars, according

to variables set in the PROJECT 1 system.) Motives and phrases generated by PROJECT 1 frequently ap- pear in the midst of various musical contexts, providing an obverted view. Heavy use is also made of a slight delay (applied to the notes at the syn- thesis stage by varying the entry delay parameter), making many events doubly articulated.

My recent work, nscor (1980), was then audi- tioned. The composition of this piece was based on establishing functional musical relationships among a broad palette of mostly digital sound ob- jects. The organization of nscor was worked out intuitively.

The sound objects used in the piece were pro- duced at several computer music studios and stored on tape. Selected objects were then mixed and edited into polyphonic sequences interspersed with silences. Next, the sequences were arranged into sometimes overlapping phrase structures. These phrases were fine-tuned in numerous editing ses- sions. Finally, the multitrack master tape was re- mixed to a two-channel version.

On a large scale, the compositional process in nscor was one of extending events in time. Over the nine-minute course of the work, the durations of events gradually evolve from only a few millisec- onds to several seconds.

The concluding piece in this concert was an excerpt of Jean Piché's Ange, introduced by Tod Machover. This work was created from digital sound generated at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford. Piché created a lush and almost hypnotic effect through the use of slowly evolving massed textures.

Thursday, 19 February

Seminar with Hugues Dufourt

Hugues Dufourt is a composer living in France who has come to composition after studying philosophy, evidently the positivist strains in particular. In a very deliberate manner (I was told later this was greatly appreciated by some of the foreigners pres- ent), Dufourt held forth on the introduction of the


computer into composition as a means of formaliz- ing musical thought. Dufourt isolated two ways of working with the computer. In the first way, the way in which a traditionally oriented musician often works, the computer is used as if it were anal- ogous to a familiar tool (e.g., an instrument). In the second way, the computer is seen as a means of lib- eration from the rigid boundaries imposed by tradi- tional musical thought and practice.

Referring to Leibniz's model of a calcul com- plete, Dufourt argued that the purity of complete formalization holds great potential for innovative composition. Formalization of familiar concepts sets them in a new representation that can be manipulated in new ways. For example, Dufourt pointed out how changes in scope modify the effect of operators applied to a given system; that is, the same operation can have a qualitatively different effect at another scale.

After arguing the need for a new, common musi- cal language Dufourt concluded by pointing out the danger inherent in formalized composition. Modern composers risk creating a communal monologue that has no social function. According to Dufourt, it is necessary to create a new musical language that is not simply based on the machine's capabilities.

Seminar with Pierre Barbaud and Lejaren Hiller

Pierre Barbaud is, with Lejaren Hiller, one of the pioneers of algorithmic composition. He began his talk by pointing out some of the experiments he had carried out in collaboration with D. Brown and G. Klein aimed at creating a new solfege. Some in- teresting examples were played that exhibited a variety of musical behavior and sound material, in- cluding some in which nontraditional scale systems were used. One example from 1969 was scored for 19 instruments. Very rich orchestral clusters pre- dominated. A very different program from 1974 (based on an essentially Markovian stochastic al- gorithm) generated a maniacally modulating multi- voiced harmony. This experiment in automated whimsy is heard each evening over French televi- sion as the theme of the evening news!

Much of this work is based on programs that function as state-transition automata. At each stage of the generative process, certain pitch and rhyth- mic intervals are permissible, using criteria such as maximum intervallic distance allowable and so on. Such constraints form the background logic of the piece. The result is a music that is typically highly melodic in character (no glissandi, for example) and that exhibits complicated cross rhythms.
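A minimal Python sketch of this style of constrained, automaton-like generation appears below: at each step only pitches within a maximum interval of the current pitch are permissible, and the next pitch is drawn from that set. The specific constraint and weighting scheme are illustrative assumptions, not Barbaud's actual rules.

# Constrained state-transition melody generator (illustrative sketch).
import random

def generate_line(length, start=60, max_interval=5, pitch_range=(48, 84)):
    """Return a list of MIDI-style pitch numbers obeying the interval bound."""
    line = [start]
    for _ in range(length - 1):
        current = line[-1]
        candidates = [p for p in range(pitch_range[0], pitch_range[1] + 1)
                      if 0 < abs(p - current) <= max_interval]
        # Weight small intervals more heavily to favor stepwise motion.
        weights = [1.0 / abs(p - current) for p in candidates]
        line.append(random.choices(candidates, weights=weights, k=1)[0])
    return line

print(generate_line(16))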

As a last example, Barbaud played a nine-voiced microtonal study that featured an extremely com- plicated interaction of the voices with multiple rhythms. Despite their local rhythmic complex- ities, the voices were clearly related on the phrase level. In a structure such as this, one begins to get a sense of the tremendous musical potential that composers are beginning to develop using al- gorithmic composition methodologies.

Lejaren Hiller began his lecture by recounting the early history of his research in algorithmic music composition. Hiller was, of course, the first composer to tap the computer's potential for musical purposes (in 1955 at the University of Illinois). He described experiments using zero- to fourth-order stochastic processes (Markov chains) to create such works as his famous Illiac Suite (1957) and the Computer Cantata (1963). He then discussed the long path of refinement leading up to his current research, starting with the piece Algorithms I (1968), and leading up through Algorithms II (1972) to Algorithms III, which is in progress (Hiller, in press). For Hiller, such programs are a record of compositional thought. He described the pains of assembly-language programming and the agony of converting to another machine and thereby losing all of his former programs. Not surprisingly, Hiller recommends high-level languages for compositional programming.

Dialogue

Hiller does not edit the output of his programs. Instead, all of his musical decision making is saved for the programming phase of the compositional work. However, I feel it is important to recognize that compositional procedures can also be embedded


in an interactive system. This occurs when score synthesis and editing are bound up into a complete composing-task environment. In such systems, musical decisions are distributed, being embedded in (1) encoded procedures (compositional algorithms), (2) the musical data base on which the procedures operate, and (3) the strategy employed by the musician using the system.

There was a question about composers' use of ran- domness from Marvin Minsky. In Minsky's view, randomness is a substitute for a deeper theory of composition. Hiller took the stance that without randomness, the machine could not compose, but only transform. Minsky argued that very deep non- deterministic systems, such as those used in AI programs, produce output that is not predictable and that does not resort to randomness as an escape route. In place of randomness there is another layer of theory.

Thursday Concert: New Materials, New Forms

John Melby's Chor der Stein began the Thursday concert. In Chor der Stein, densely embedded spec- tra are used to evoke a strong sense of musical di- rectionality, leading ultimately to a peak. This work, Melby's latest in a long series of computer- synthesized pieces, demonstrates mastery of Barry Vercoe's Music 360 synthesis program.

James Dashow's Conditional Assemblies, realized at the Centro Sonologia Computazionale of the University of Padua, Italy, explores the notion of modulation spectra used as chords. Specific pitches in dyads or triads generate their own "harmonizations," which are actually numerically inharmonic with respect to the generating pitches. FM, amplitude modulation (AM), and ring modulation (signal multiplication) were all used to reach this goal (Dashow 1980).

In large-scale form, Dashow says, the piece has a "pacing comparable to a four-movement symphony." Within each of the linked movements, the writing is intelligent and rich in musical detail. Contrasting rhythms, abrupt juxtapositions, isolated glissandi, florid counterpoint, and smoothly blending spectra are applied with great skill.

Thematic unity is assured through the use of a limited collection of "generating chords" that are heard with various spectral embellishments. The use of selected reverberation to contrast simultaneously sounding "distant" and "present" sounds is very effective. The ring-modulated passages are reminiscent of such moments in Koenig's Funktionen series. Conditional Assemblies is worthy of repeated listening.
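The arithmetic behind such "modulation spectra" is simple to state. The Python sketch below lists the sum and difference components produced by ring modulation of a dyad and the sidebands produced by simple FM; these are the standard textbook formulas, and the way Dashow actually selects and voices the resulting components is not reproduced here.

# Component frequencies generated by ring modulation and simple FM.
def ring_mod_spectrum(f1, f2):
    """Ring modulation (signal multiplication) of two sines yields the
    sum and difference frequencies."""
    return sorted({abs(f1 - f2), f1 + f2})

def fm_spectrum(carrier, modulator, n_sidebands=4):
    """Simple FM yields components at carrier +/- k * modulator."""
    comps = {abs(carrier + k * modulator)
             for k in range(-n_sidebands, n_sidebands + 1)}
    return sorted(comps)

# Generating dyad A4-E5 (440 Hz and 659.26 Hz):
print(ring_mod_spectrum(440.0, 659.26))  # difference and sum tones, ~219.3 and ~1099.3 Hz
print(fm_spectrum(440.0, 659.26))        # an inharmonic "harmonization" of the dyad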

Atmen Noch by Teresa Rampazzi followed Dash- ow's piece. Based on FM synthesis techniques, this carefully unfolding study slowly interweaves complex FM spectra into a continuous fabric. It evokes an almost impressionistic ambience with its smoothly blurred, overlapping textures.

The next piece was M. Graziani's impressive Winter Leaves, realized at Padua. It is a contrapun- tal study in purely synthetic sound, namely, added and multiplied sine waves. The microsound rela- tionships among these components were deter- mined using procedures defined and implemented by the composer. The raw objects were then pro- cessed with elaborate spatial and reverberation schemes, yielding a highly artificial yet refined and palatable product.

Le Souffle du Doux by Daniel Arfib of Marseille is an interesting excursion through various aspects of the sound synthesis technique known as wave- shaping, or nonlinear distortion. This piece calls for a different kind of listening in that it exhibits and calls forth a fascination with sound itself, apart from the structure in which it functions. Arfib often used the waveshaping technique to scan slowly the harmonics of the fundamental tone. "Breaths" of pitched noise are interspersed, also ascending ac- cording to the harmonic series.
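As a sketch of the waveshaping principle, the following Python fragment passes a sine wave through a Chebyshev-polynomial transfer function: at full input amplitude the output lands on the chosen harmonic, and lowering the input amplitude redistributes energy among the lower harmonics, one simple way to "scan" the harmonics of a fundamental. Arfib's own transfer functions are not reproduced here; this is only the textbook form of the technique.

# Waveshaping (nonlinear distortion) with a Chebyshev transfer function.
import numpy as np

def waveshape(freq, index, degree, duration=1.0, sample_rate=44100):
    """index in [0, 1] scales the input sine before the shaping function."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    x = index * np.sin(2 * np.pi * freq * t)
    transfer = np.polynomial.chebyshev.Chebyshev.basis(degree)
    return transfer(x)

# Sweep the distortion index to move energy gradually into the 5th harmonic.
tone = np.concatenate([waveshape(110.0, i, degree=5, duration=0.25)
                       for i in np.linspace(0.1, 1.0, 8)])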

The structure of the work is quite simple; how- ever, the apperception of the overall form is prob- ably secondary to appreciation of Le Souffle du Doux. Arfib is fond of repeating a process (e.g., a noise slowly transforming into a pure pitch) for each of the harmonics of a fundamental in succes- sion. This repetitive processing is the dominant structural principle of the piece. The work unfolds at a slow, deliberate pace consistent with Arfib's goal of allowing the listener to tune in to each sound.


Next came Robin Heifetz's composition Waste- land, which was realized at Colgate University us- ing Music 10. Heifetz inserted inharmonic spectra and noise elements into a traditional arch form.

The concert concluded with Jean-Claude Risset's sensitive Songes, a work in which an instrumental ensemble has been digitized and then processed us- ing digital mixing and editing techniques. Songes begins with little instrumental motives, trills, and arpeggios, played once by an instrument (e.g., clar- inet) then heard echoed at various points in coun- terpoint with other instruments (harp, flute, oboe, chimes). These motives are mapped onto very ac- tive spatial envelopes-as a flute trills, it zooms around the hall. Dense (presumably synthetic) inharmonic tones gradually enter as the work progresses.

Three minutes into the work, the listener is im- mersed in one of those lush, rolling, inharmonic soundscapes that so characterize Risset's style. Fol- lowing this development the instrumental figura- tion returns, this time in a dialogue with the syn- thetic sounds. The synthetic sounds dominate, building to a massively dense and loud cluster that ultimately evaporates. This gesture of evaporation has a very interesting morphology and it is carried off with great finesse by the composer. In the final section, the work tapers off into a drone, occasion- ally accompanied by a breathy whistling sound. This gradually relaxes into silence.

Friday, 20 February

Seminar with Marvin Minsky: Artificial Intelligence and Music

Marvin Minsky made the introductory observation that during the week he had seen composers searching for some solution to their problems through interaction with computer systems. However good an idea this might seem, Minsky said, these composers did not really know what their problems were! He said the human intellect is not well understood and took particular issue with the assumption that the mind works through a process of logical deduction.

In Minsky's view, logic is an invention of the mind and not a model of how it works.

Professor Minsky's interest in studying music has to do primarily with its cognitive aspects. This entails asking basic and naive but deep questions concerning music's function in the human psyche. For example, Why do composers have music in their heads? Why do they communicate this to other people? How do they communicate this to others? Minsky could offer no single answer to any of these dilemmas, although he did relate several of his theories.

According to Minsky's first theory, the composer is attempting to communicate. The composer is a transmitter, the music carries a message, and the listener is the receiver. The receiver (the listener) is assumed to be a mostly passive receptacle that de- codes and reconstructs a coded message. In Min- sky's second theory, music is the playing out of an emotional script. Hence music transfers emotions from one person to another. A third theory states that music is a useful cognitive activity because it teaches us the geometry of time; that is, how time can be partitioned and metered, compressed, and extended. In listening to music, we exercise the parts of our minds that are sensitive to proportion. Patterns of interconnection are built up-K-lines or mental spiders-that tend to persist and proliferate if we derive aesthetic pleasure from the exercise. According to this theory, a musical society of mind is nurtured by musical activity.

Minsky then sketched an overview of a theory of musical composition. Since composers are creative artists, they have many ideas regarding how to go about composing. How do composers choose how to proceed? This is a very interesting question since it asks, What is the control structure of composi- tional (and indeed, creative) thinking? In Minsky's view the composer begins with some general con- straints (e.g., a vague overall idea of the form of a work) or some specific constraints (e.g., a theme to be developed) and continues to assemble such con- straints until compositional choices are almost easy.

How could such a process be modeled? Minsky said a first attempt might be to use probabilities, like Markov-chain-type systems. In Minsky's view,


using probabilities is really a way of postponing deeper thought; they indicate a shallow theory. They should be used only in the beginning phases of such simulations, when very little is known about what nuances and patterns actual human behavior follows. For example, no one in natural language processing today would start with a sto- chastic model of linguistic behavior because these models were tried and surpassed years ago. In Min- sky's view, a good way to train composers would be to give them the rules of a highly constrained system (such as a Beethoven piano sonata) and let them alter the constraints that produce it. An inter- active composing environment would be ideal for this pedagogical method.

Minsky observed that the development of the ul- timate composing program was a very long-term goal. (Presumably this would model a human be- ing's composing competence.) But interesting pro- grams can be written using existing musical knowl- edge. Indeed, Minsky suggested, in the absence of creative procedural constraints we refer to millions of stored experiences. These constitute the "skill" and "knowledge" attributed to senior composers.

He concluded by observing that music is one of the largest industries in the world, with many peo- ple devoted to creating and consuming it, and yet very little work has been done in examining why we like it, that is, what its cognitive bases are and what mental functions it induces. He urged that more research be done.

Seminar with Benedict Maillard and Jean-Francois Allouis: Experiences and Projects of the Groupe de Recherches Musicales

Maillard and Allouis presented an overview of the technical and aesthetic approach pursued by the GRM. This studio and research laboratory, located at French radio headquarters, is the oldest institu- tion for research into electroacoustic music in France. Long based solely on analog technology, and more specifically on pedagogy and practice in mu- sique concrete, the GRM turned in 1978 to digital technology.

Benedict Maillard, not one to shy away from a debate, led off with a statement in which he op- posed what he felt Marvin Minsky had expressed earlier in the day. In Maillard's view, AI can never lead to artistic creation since AI is based on a the- ory of a finished process, the completed work. Maillard proposed that technology be immediately applied to existing artistic creation without further theorizing.

He went on to draw a distinction between his own point of view and that of an earlier speaker, Hugues Dufourt. Maillard said that for himself, the computer was a completely general tool that had no influence on musical language or expression. In this sense, the machine is neutral; it imposes no restric- tions and offers no help to the composer.

This viewpoint was reflected in Maillard's de- scription of the current GRM technology. At GRM a PDP-11/60 minicomputer is used to simulate a sort of enhanced analog studio. The computer can transform and manipulate prerecorded sound files, and direct digital recording and mixing are possible. Filter software (described in the section on the Tuesday evening concert) is also made available to the composer. Even though the system simulates existing facilities, Maillard admitted that new and unforeseen possibilities had been introduced by computer technology and that these had in turn in- fluenced the composer's imagination.

Jean-Francois Allouis described the composing environment of the GRM computer system. He said the system had been designed to be used by composers with absolutely no computer science experience. He attacked Music V-like synthesis languages as being far too dependent on user knowledge of technical and computer principles. The ideal at the GRM is to create a "black box" that the composer confronts in an interactive, ques- tion-and-answer way.

Allouis said the software is set up as a sort of menu that asks the composer a progressive set of questions. The computer then proposes some solu- tions from which the composer selects. Allouis ac- knowledged that this scheme links the composer's fate to certain programming decisions, but he ar- gued that this lack of flexibility was more than


made up for by the advantages of direct and easy access.

Both Allouis and Maillard emphasized that the GRM is a collective organization. They said shared experience is a fundamental part of the institute's progress and music making. They then played sound examples that demonstrated current results from the GRM system.

Friday Concert: Control of Real-Time Processes

Although he has been associated with electronic music in the past (e.g., Time's Encomium), Charles Wuorinen has only recently been linked to the possibilities provided by computer algorithms and digital sound processing. His Arp 1 is the product of research that attempts to define processes of change in musical parameters that are expressed in the form of computer programs. This piece uses equations for 1/f noise (Gardner 1977). One of the tasks Wuorinen has set for himself is to produce automated musical works that have enough "personality" to be judged compositions of the composer. With respect to this goal, Arp 1 succeeds; it does not sound out of place in Wuorinen's oeuvre in either style or expression.
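One standard way to generate 1/f values for such purposes is the Voss algorithm, popularized in the Gardner article cited above: several random sources are held, each refreshed half as often as the previous one, and their sum drifts with the correlated, scale-free character of 1/f noise. The Python sketch below shows that scheme; how Wuorinen actually maps such values onto musical parameters is not reproduced here, and the source count is an illustrative assumption.

# Voss-style 1/f ("pink") value generator for compositional use.
import random

def voss_pink_noise(n_values, n_sources=8):
    sources = [random.random() for _ in range(n_sources)]
    values = []
    for i in range(n_values):
        for k in range(n_sources):
            # Source k is refreshed every 2**k steps.
            if i % (2 ** k) == 0:
                sources[k] = random.random()
        values.append(sum(sources))
    return values

# Example: 16 values, e.g. to be quantized onto a scale as melody pitches.
print(voss_pink_noise(16))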

Larry Austin's Protoform contrasted with the Wuorinen work in its hybrid sound synthesis tech- nique. Austin's work also makes use of fractal generators for musical events. In this taped pre- sentation of a real-time work, a not unpleasant im- pression is induced by simple gurgling droplets of sound that change according to clearly perceptible controls.

Her Quiet Witchery by Michael Hinton was also generated by a real-time composing program at Stockholm's EMS studio. Rather than dealing with rhythmic and pitch elements as in the rest of the concert, Hinton focused on timbre. An interesting and subtle situation was set up in which the lis- tener was guided through a series of subtle transfor- mations taking place in the interior of complex sounds.

Roger Meyers's After the Pond was performed live on the Synclavier by Joel Chadabe, his colleague

at the State University of New York at Albany. Meyers has, in this piece, produced an unpretentious melodic setting for a digital clarinetlike instrument playing dreamy melodies over a delicate tonal background.

As the final work, Joel Chadabe performed his own Scenes from Stevens, a setting of three strophes by the American poet Wallace Stevens. These scenes unfolded as Chadabe determined the performance sequence of stored pitch and rhythmic algorithms. Although Chadabe was limited to small gestures (pushing buttons and twisting a knob on the front panel of the Synclavier), the control he exerted on the musical process was clear. The end result was gentle and light in atmosphere.

Saturday, 21 February

Seminar with John Chowning

Professor John Chowning of the Center for Com- puter Research in Music and Acoustics at Stanford University gave a talk centering on the interrela- tions between science and music. He commenced by pointing out the disparity between certain scien- tific categories and musical perception. For exam- ple, in many scientific problems the phase of a signal is an important property, but in music phase is often unimportant. The time-domain representa- tion (amplitude of a signal versus time) does not preserve the psychoacoustically pertinent informa- tion, while the frequency-domain representation (Fourier spectrum) does.
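A small numerical illustration of the point about phase: the Python fragment below builds two signals with identical magnitude spectra but different phase relations among their partials. Their waveforms differ markedly, yet to a first approximation they are heard as the same steady tone. The frequencies and phase offsets are arbitrary choices for the demonstration.

# Same magnitude spectrum, different phases, different waveforms.
import numpy as np

fs = 8000
t = np.arange(fs) / fs
freqs = [200, 400, 600, 800]

same_phase = sum(np.sin(2 * np.pi * f * t) for f in freqs)
scrambled = sum(np.sin(2 * np.pi * f * t + p)
                for f, p in zip(freqs, [0.0, 1.3, 2.9, 0.7]))

mag_a = np.abs(np.fft.rfft(same_phase))
mag_b = np.abs(np.fft.rfft(scrambled))
print(np.allclose(mag_a, mag_b, atol=1e-4))           # True: spectra agree
print(np.max(np.abs(same_phase - scrambled)) > 1.0)   # True: waveforms differ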

Chowning then pointed out another scientific area that is of relevance to musicians: the study of streaming effects by psychoacousticians.¹ Concerning fusion of discrete signals into a perceived single sound, Chowning said that little is yet known of this effect. There is, for example, no program that can take a composite signal and unravel the different lines.

1. For a discussion of streaming in a musical context, see "Hearing Musical Streams" (McAdams and Bregman 1979). Briefly, streaming is the psychoacoustic ability to correlate events (e.g., to identify a sequence of notes as coming from the same source) and, conversely, to distinguish separate lines in a piece of music.


He suggested that two sources could fuse through correlated microvariations (e.g., a slight vibrato or tremolo in all the partials of a signal). Through the detection of such correlations, a musically intelligent program would be able to extract all components of a single "voice" regardless of their location in the spectrum of a polyphonic texture. Chowning went on to suggest that compositional use could be made of such information in that a composer might add microvariations to the components of a computer-synthesized signal to make them fuse, or use a set of different microvariations to make them stream apart. Chowning cited the work of David Wessel, John Grey, and Steven McAdams as that of a new breed of scientists who study such phenomena: psychoacousticians intimately aware of the potential inherent in computer sound synthesis.
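The following Python sketch illustrates the fusion idea in its simplest form: the same partials are synthesized twice, once with a single shared vibrato (tending to fuse into one voice) and once with an independent vibrato per partial (tending to stream apart). The vibrato rates and depths are illustrative assumptions.

# Shared versus independent micro-variation applied to a set of partials.
import numpy as np

def partials(freqs, vibratos, duration=2.0, sample_rate=44100, depth=0.005):
    """vibratos: one vibrato rate (Hz) per partial; depth is fractional."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    out = np.zeros_like(t)
    for f, rate in zip(freqs, vibratos):
        inst_freq = f * (1.0 + depth * np.sin(2 * np.pi * rate * t))
        phase = 2 * np.pi * np.cumsum(inst_freq) / sample_rate
        out += np.sin(phase)
    return out / len(freqs)

freqs = [220, 440, 660, 880]
fused = partials(freqs, vibratos=[5.5, 5.5, 5.5, 5.5])     # shared micro-variation
streamed = partials(freqs, vibratos=[4.1, 5.3, 6.2, 7.0])  # uncorrelated variations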

The next topic of Chowning's lecture was audi- tory perspective. He described how auditory per- spective (the sensing of the loudness, nearness, and physical source of a sound) is affected by the rela- tionship of direct to reverberated sound. Loudness, for example, has three main components: sound pressure, spectrum, and perspective (how close we perceive the sound to be). In natural musical sounds such as a sung tone, the spectrum does not grow in constant proportion to the loudness of the tone. Likewise synthetic tones, if they are to be as rich as natural tones, should be controlled in multiple re- gions, not all of which evolve in the same way or at the same rate. To split a sound into regions, two discrete attacks might be used for a single (fused) tone, with the composer controlling the spectral evolution of each sound separately (e.g., two "note records" in Music V for a single event).

At this point, David Wessel interposed the ques- tion, Why do some composers continue to ignore the physical and psychoacoustical laws just dis- cussed? Chowning ventured that the social pressure to create something new was strong; composers try to be "avant-garde" and set the trend.

Chowning went on to discuss impediments to teaching scientific principles of computer music to technically naive musicians. These impediments include technical jargon, overly rigorous and inappropriate scientific explanations, and slow turnaround in experimenting with ideas on the computer. As aids to pedagogy he suggested good documentation, access to technical and scientific people, and concentrated immersion in the use of a computer music system. For Chowning, the bases of computer music knowledge are threefold: (1) programming, (2) signal processing, and (3) sound synthesis and psychophysics.

As a pedagogical conclusion, he then gave an intuitive explanation of the sampling theorem to show how the concepts underlying it could be expressed without recourse to elaborate mathematical treatment. The same could be done, he said, with such concepts as convolution and reverberation, making them accessible to the musicians who need to know them.
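In that spirit, here is one way the intuition can be made concrete without a proof (the frequencies are arbitrary illustrative choices): a sinusoid above half the sampling rate produces exactly the same samples as a lower-frequency "alias," which is why it is heard at the wrong pitch.

```python
import numpy as np

SR = 8000                      # sampling rate in Hz
n = np.arange(16)              # a few sample indices

f_high = 5000.0                # above the Nyquist frequency SR/2 = 4000 Hz
f_alias = SR - f_high          # 3000 Hz, the frequency actually heard

s_high = np.cos(2 * np.pi * f_high * n / SR)
s_alias = np.cos(2 * np.pi * f_alias * n / SR)

print(np.allclose(s_high, s_alias))   # True: the sampled sequences are identical
```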

Dialogue

Tod Machover asked how far the machine should be made to come to the composer in order to facilitate interaction. Chowning's answer was a challenge to musicians: Don't wait. He suggested that musicians who were interested in the field should immediately try to get involved at a level appropriate for them. They should learn as much as they can about the machine in order to make it a more willing servant of their compositional needs. Lejaren Hiller commented that not just composition graduate students should consider computer music: computers could be integrated into curricula much earlier. He cited the case of high school students who have developed programming competence. Tod Machover asked whether Chowning thought computer music would at some point be a subject taught to young people, just as the piano or tuba is taught today. Chowning thought not: learning an instrument develops a motor skill, and he feels the music industry will not change.

Informal Lecture by Clarence Barlow

This lecture was devoted to aspects of compositional organization behind Barlow's monumental Çoğluotobüsişletmesi. Barlow went into more detail than he was able to do in his brief remarks to the audience at the Wednesday concert.


He first described the heptatonic tempered scales used in the work and the application of quarter-tone detuning to achieve 84 derived scales. (These include all the standard church modes.) Within the program, either "normal" or "lowered" tones may be chosen. A basic principle of the work is the asymmetry built into the structure of these scales and other generative rules. According to Barlow, this makes it difficult to produce "atonicality" (lack of a feeling of a tonal center) with these rules. In Çoğluotobüsişletmesi, probabilities are used for selecting specific events. There is no local repetition in the work, but the program occasionally lapses into an ostinato setting in which a random series is frozen.

Barlow discussed his theory of indigestibility in some detail. He likened pitch-interval selection to cake-cutting: into how many equal slices could you most easily cut a cake if you could cut any number of slices between two and nine? What is the second easiest number? The third easiest? According to Barlow, a common series of answers would be two, then four, then eight, three, six, nine, five, and at last seven. There might be variations, but the answers would almost certainly not be two, three, four, five, six, seven, eight, and nine. So Barlow set about to find a formula that expresses this series, that gives an index combining the "size" and the "indivisibility" of a number, a property that Barlow has termed indigestibility! The musical application of this concept is as follows. In Barlow's theory, the greater the indigestibility of a number, the less possible it is to establish a sense of "rootedness" or harmonic relation to it. Thus one kind of harmonic functionality can be based on measures of indigestibility between successive or simultaneous tones.
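Barlow did not spell the formula out in this talk, but the idea of an index combining size and indivisibility can be sketched from a number's prime factorization. The weighting below follows the form Barlow has published elsewhere, restated here from memory, so the exact constants should be treated as an assumption rather than a citation.

```python
def prime_factors(n):
    """Trial-division factorization: returns a {prime: exponent} dict."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def indigestibility(n):
    """Assumed form: 2 * sum over prime powers p**k of k * (p - 1)**2 / p."""
    return 2 * sum(k * (p - 1) ** 2 / p for p, k in prime_factors(n).items())

print(sorted(range(2, 10), key=indigestibility))
# [2, 4, 3, 8, 6, 9, 5, 7] -- close to, though not identical with,
# the intuitive cake-cutting series reported above
```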

Another criterion that played a part in the algorithmic processes of Çoğluotobüsişletmesi was Plomp and Levelt's "pleasantness" property (1965). Plomp and Levelt's theory of the "consonance" of chords and their "priority" was also used. "Affinity" criteria were used to relate separate streams of music. The temporal structure of the work was informed by Barlow's theory of "metricity," which concerns the degree to which a note implies a meter.

Barlow worked out the organization of the composition at Cologne University in West Germany. A CDC Cyber machine computed 2048 tunings, one of which is used in the piece. But digital-to-analog conversion facilities for sound synthesis were unavailable on that machine. Using a small PDP-11/10 minicomputer at the University's Phonetics Institute, Barlow wrote a limited sound synthesis program. Three months were spent on computing the sound samples. In his one concession to efficiency throughout this monumental effort, Barlow used the limited sampling rate of 8000 Hz, enough bandwidth to reproduce the pitches but not any interesting timbres. This task was completed in February 1979. Two thousand more hours were spent writing the piano score. After further work on a new realization at IRCAM, Çoğluotobüsişletmesi won the Darmstadt composition prize.

Demonstration of the 4X

The structure of the 4X hardware was the first topic discussed by G. diGiugno and J. Kott, the architects of the system. The 4X is attached to a high-speed bus along with such devices as the MC68000 host computer, memory, and disks. Interestingly, a PDP-11/55 is also attached to the bus, but it is mainly used as a peripheral for its fast floating-point processor! Up to 512 input devices may be used in conjunction with the 4X, which gives some idea of the size and power of the system. Sixteen analog-to-digital converters (ADCs) and 16 digital-to-analog converters (DACs) may be attached to the system, all serviced at a 16-kHz sampling rate according to the designers.

The basic building block of the 4X is the 4U card, an update of the 4C (Moorer et al. 1979). The 4U, like the 4C, is microprogrammable. Sixteen read-only-memory (ROM) microprograms are available to make each 4U serve a specific function. For example, the 4U/A emulates the 4A synthesizer designed by diGiugno. The 4U/C emulates the 4C Machine, while the 4U/R is a reverberation processor and the 4U/M specializes in high-speed multiplication. For more technical details on the architecture of the 4X, see the article by Asta and associates (1980).


Next, diGiugno and Kott demonstrated the live synthesis capabilities of the machine. The first example was a technically impressive cacophony of eight digitally synthesized pieces (Bach, Mozart, Beethoven, and Joplin) playing simultaneously in real time. Next was an example of the use of highly tuned filters (through which a rich sound source such as noise is passed) as a synthesis method. Eight voices were synthesized in real time this way. diGiugno showed how it was possible to modify the articulation of the tones in real time (changing from staccato to legato). The FM index or any other parameter of the sound could also be manipulated. It is also possible to record one's performance on the 4X for later playback on the same machine.
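The filter-based synthesis in this demonstration is, in outline, the classic subtractive recipe: a spectrally rich source is passed through narrow resonators tuned to the desired partials. This report does not document the 4X's actual filter structures, so the following is only an off-line illustration of the principle, using a plain two-pole resonator and arbitrarily chosen frequencies.

```python
import numpy as np
from scipy.signal import lfilter

SR = 16000
noise = np.random.default_rng(0).standard_normal(2 * SR)   # 2 s of white noise

def resonator(x, freq, r=0.999):
    """Two-pole resonant filter: a narrow band-pass centered on freq.
    The closer r is to 1, the narrower and more 'pitched' the resonance."""
    theta = 2 * np.pi * freq / SR
    b = [1.0 - r]                                  # rough gain normalization
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    return lfilter(b, a, x)

# One "voice": a chord of resonances excited by the same noise source.
voice = sum(resonator(noise, f) for f in (220.0, 330.0, 440.0, 550.0))
```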

The possibilities of natural sound processing were demonstrated next. Jean Kott positioned himself at the vibraphone, holding a microphone (connected to the 4X) in one hand and a mallet in the other. He demonstrated how the harmonics (including subharmonics) of the sound coming into the microphone (from the vibraphone) could be generated with only a 0.2-sec delay, effectively producing a real-time analysis and synthesis. He improvised, with the 4X generating fifths (justly tuned) of what was played, or octaves, or thirds (also just), depending on how it was instructed to perform. diGiugno gave a demonstration of his localization algorithm, which relies on phasing to give the impression of moving sound sources. He played a tape through the 4X, which scattered the sound about the room through this phase manipulation. Grabbing the microphone and adjusting the phase rate, diGiugno showed how a high rate of phasing added "gravel" and shakiness to his spoken voice, an effect Jean Kott dubbed the "Pepino at 70" technique. It was quite comical, as was intended. A more bizarre technique generated multiple echoes around the room.
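The 4X produced these justly tuned intervals in real time from the microphone signal; how it did so is not described in this report. As a rough off-line stand-in, the sketch below transposes a recorded signal by simple resampling at the just ratios 3:2 (fifth) and 5:4 (major third). Resampling also shortens the sound, so a real harmonizer would use a delay-line or overlap-add pitch shifter instead; everything here is an assumption for illustration, not the 4X algorithm.

```python
import numpy as np

def resample_ratio(x, ratio):
    """Crude pitch shift: reading x faster by `ratio` raises its pitch
    by that ratio (and shortens its duration accordingly)."""
    idx = np.arange(0, len(x) - 1, ratio)
    return np.interp(idx, np.arange(len(x)), x)

JUST_FIFTH = 3 / 2        # justly tuned perfect fifth
JUST_THIRD = 5 / 4        # justly tuned major third

def harmonize(x, ratio):
    """Mix the input with its transposition at the given interval ratio."""
    shifted = resample_ratio(x, ratio)
    n = min(len(x), len(shifted))
    return x[:n] + shifted[:n]
```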

In another impressive example, computer graphics were used to show the power of the 4X for real-time acoustic analysis. A real-time Fast Fourier Transform (FFT) was performed on some taped recordings of classical and popular music fed into the 4X. The last sound example used the resonant filter technique described earlier in a majestic orchestration of a composition by Corelli.

Although the present state of the system is quite advanced, diGiugno and Kott are developing more optimized signal-processing software. The 4X is fully exploited in a new composition by Pierre Boulez to be premiered at the 1981 Donaueschinger Musiktage.

Saturday Concert at the Grande Salle of the Centre Georges Pompidou

This concert centered on the works of three composers: John Chowning, Brian Ferneyhough, and Tod Machover. Chowning's Phone is a work for computer-synthesized tape. The singing voice is simulated so as to allow the transition of voicelike sounds into inharmonic and other purely electronic-sounding textures (Chowning 1980). Although clothed in appealing sopranolike tones and lush inharmonic clusters, Phone makes no compromises with regard to internal form. Soon into Phone the listener is drawn into a magical sound universe that could only be created with computer techniques. Yet for all its technical virtuosity, this piece is very different in its spareness from Turenas and Stria, two of Chowning's previous computer-synthesized works. In Phone, individual voices are distributed around and about the auditory space, sometimes emerging, sometimes receding into the reverberant distance. Contrasts in density are quite marked in the work; broad-band FM clusters are contrasted against individual vocal gestures. Transitions from a cluster of glissandi to a vocal sound and back are striking; maximum use is made of ambiguities in perception throughout the work. Phone demonstrates great sensitivity to timbral nuance and deft manipulation of sound materials. Chowning says he is continuing to refine parts of the work; future performances will no doubt reveal even more subtlety.

The next work on the program was Tod Machover's Soft Morning, City!, scored for soprano (Jane Manning), contrabass (Barry Guy), and two-channel tape (generated with various computer systems at IRCAM). The work is a text setting of the final monologue of Finnegans Wake by James Joyce.


The first challenge of such a task is to match the complex textual/semantic material with complex musical/semantic material, and in this Machover has succeeded. Soft Morning, City! is a study in expressionistic parallelism, reflecting the layering of the Joyce text. In form, the piece is a continuum, ebbing and flowing between lyrical passages and intense, turgid climaxes. The tape part often functions to elaborate the soprano part. For example, a word drawn from the text (and sung by the soprano) is often subjected to digital sound editing that rhythmically concatenates the word with itself several times. This concatenation is injected into the live mix of the soprano and contrabass, forming a lively counterpoint of words sung, past and present. Similar treatment is applied to the contrabass part.

Like Chowning's piece, Soft Morning, City! probably could not have been attempted without computer sound processing techniques. Extensive technological resources were employed in its realization. Both the 4A and the 4C digital synthesizers were used, as was the PDP-10 system, for a variety of musical functions. For example, pieces of individual words were replicated and spliced together using a technique of microsound editing. The soprano and contrabass parts were intermingled using the cross-synthesis technique, in which one sound is used as the driving function for the dynamic spectrum of another sound. Digital reverberation and mixing were also exploited to great effect in this work. These techniques are described in detail in Machover's recent paper (forthcoming).
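Cross-synthesis, as described here, imposes the time-varying spectrum of one sound (the driver) on another (the carrier). The short-time Fourier sketch below is a crude stand-in for the IRCAM implementation, which is not detailed in this report: it keeps the carrier's phases and replaces its magnitudes with the driver's, frame by frame; a production system would first smooth the driver spectrum into an envelope (for example by linear prediction). The frame and hop sizes are arbitrary choices.

```python
import numpy as np

def cross_synthesize(driver, carrier, frame=1024, hop=256):
    """Impose the driver's short-time magnitude spectrum on the carrier."""
    window = np.hanning(frame)
    n = min(len(driver), len(carrier)) - frame
    out = np.zeros(n + frame)
    for start in range(0, n, hop):
        d = np.fft.rfft(driver[start:start + frame] * window)
        c = np.fft.rfft(carrier[start:start + frame] * window)
        shaped = c * np.abs(d) / (np.abs(c) + 1e-12)   # driver magnitude, carrier phase
        out[start:start + frame] += np.fft.irfft(shaped) * window
    return out
```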

Both Guy and Manning were appropriately intense and virtuosic performers. Manning's ability to maintain intonation within complex polyphonic textures was especially remarkable.

Brian Ferneyhough's Time and Motion Study for electronically amplified cello is informed by a decidedly different set of concerns than either the Machover or the Chowning work. To begin with, the piece does not involve a computer or digital sound. As part of the stage setup, the cellist is virtually tied into position with a network of pickups and connecting wires.

According to the composer, Time and Motion Study was constructed around the interaction of two types of material: (1) seven rhythmic formulas and (2) six categories of "articulation." The sound material consists primarily of scraping and sliding sounds punctuated by occasional taps on the cello body and snaps of the strings. Utterances from the vocal tract, picked up by a contact microphone on the cellist's neck, are also used in scattered moments of the work. During the course of the work, the amplified sounds are picked up and recirculated by two delay systems monitored by the composer. As the piece unfolds, it is the cellist's task to respond to the ever-recurring delay systems by inserting new material into the loop; according to the composer's plan, the pressures of this task introduce errors that are a part of the performance process. In Ferneyhough's words, "The cellist begins a dialogue with the equipment and sounds that torture and frustrate him." I cannot speak for the cellist, but I know how I felt: the work was overly long. However, the audience was surprisingly receptive, particularly to the strenuous performance by Pierre Strauch, the cellist.

Public Debate: Bayle, Boulez, Chowning, Dufourt, Ferneyhough, Machover, Minsky

The hall emptied after the concert, and many of us who returned were somewhat surprised by the throngs lined up to get into this event. The house was packed and the audience appeared eager to hear the debaters. Tod Machover first presented an informal overview of the events of the past few days. He then led off the discussion by asking the panelists why they had originally become interested in computer music. H. Dufourt answered by saying he appreciates the feedback computers are able to provide; he said he is interested in using the computer as a kind of psychoanalyst off of which new compositional ideas can be bounced. B. Ferneyhough countered this by stating that he is more concerned with protecting his compositional identity from the barrage of new information provided by work with a computer. For F. Bayle, the computer is merely an extension of highly developed sound-manipulation techniques.


P. Boulez took a more pragmatic stance: he sees the computer as a powerful tool, able to do some things easily and others not so easily. His immediate interest in work with the computer is its ability to transform traditional instrumental sounds while preserving the nuances of the instrumentalist's gestures. The computer did not particularly change Boulez's way of thinking, he said, but it did change his method of working. J. Chowning gave a more historical response to Machover's question. He said he became interested in electronic music many years ago, but there were no facilities at Stanford for such work. He read an article by Max Mathews which said that computers could be used for sound synthesis, and decided to use the tools to which he had access: computers at the Stanford AI Laboratory. He was also pleased to learn that work with a computer would require no special electronics knowledge, only the willingness to learn to program.

A questioner from the audience then asked about the possible role of the computer 10 or 100 years from now. Will the computer not become one's master? What is the collective effort of workers in computer music research heading toward? Pierre Boulez responded by saying that no one is universal; collective effort is needed. Tod Machover asserted that composers are searching for tools, not masters, although he does assume that machines will become more intelligent.

Another comment from the audience was concerned with the communications gap in modern music. Composers cannot predict the effect of their music on listeners who do not know their grammar. In the view of this person, the relationship of modern music to social life and to society is grave. The questioner wondered whether there was any future in a music of "pure sound." H. Dufourt took up this challenge by saying that the questioner was obviously ignorant of computer music's history. Although it is not very interesting, one can formalize and computerize traditional music writing. Since the Enlightenment it has been recognized that reasoning is an important and distinctive human faculty, and the computer is merely a reasoning machine. Dufourt concluded by asserting that computer music is not theater; the computer does not have a body.

It is indeed nonhuman, and he implied that this was not necessarily bad and should be accepted. F. Bayle downplayed the importance of the computer in musical life. For him, the computer is simply a new piece of technology for accomplishing established goals. He chided those who work with formal musical systems, criticizing them for sometimes being more interested in the formal system than in the resultant music. Composers of the school of musique concrète, he said, already have many problems of form built into their sounds; he implied that no recourse to an "external" system of description was necessary. Surprisingly, there was no comment on this position from the other panelists.

Someone from the audience then asked whether there was a sociologist at IRCAM to measure the effect of computers on the musical world. Marvin Minsky took on this question. He said he feared that putting social workers in with the musicians could lead to a situation in which banal decorative music would be stressed.

Stanley Haynes brought up the issue of large and small computer systems for music. In particular, he pointed out the criticism occasionally voiced that large systems are "elitist." Minsky replied that grand pianos are also very expensive, probably too expensive. In many situations, small electronic keyboard instruments may eventually replace them. He observed, however, that big computer systems will always be needed for the larger projects. Bayle then responded, saying that the creative artist needs to work daily and that this clearly makes small systems a more economically accessible alternative.

The next question was, What influence does the computer have on the composer? Ferneyhough replied that there is not much of an influence. He feels that, if anything, certain preconceived compositional ideas are harder to implement with computer technology at its present stage than with other means. Pierre Boulez described his interest in real-time work and in particular the task of preserving the "accidentalness" of the spontaneous instrumental gesture when it is converted to digital form. He reiterated his position that the computer has changed certain of his working methods, but not his essential compositional thought.


Since it was by then past 10 P.M., T. Machover guided the debate to a close and thanked all the participants.

The public debate was an anticlimax to an otherwise stimulating week. In many instances, the panelists simply reasserted positions that had been explained and debated earlier in the week. Of course, this was the fifth straight day of conference activities, and participants had been in sessions since 10 A.M. As the debate wore on, it was understandable that neither the participants nor the audience was eager for an extended exercise in confrontation. Still, seasoned Parisians remarked that the docility of the audience was quite surprising. Perhaps the absence of a heated argument was appropriate. As Gérard Condé of Le Monde put it: "The composer and the computer: is it necessary to debate it?" (Le Monde, 4 March 1981).

Summary

The seminars and concerts went a long way toward answering the questions posed at the outset of the conference. The computer was seen as a unique device by nearly all present, but to some it is unique only by virtue of being faster or more capable of accomplishing a set task. To others, its uniqueness is precisely in the way it introduces new tasks and possibilities. These others obviously view the present musical world as being more open.

As to how much the computer affects compositional thought, there was a wide range of opinion. The gist of the responses could be summarized as: "It influences me to the degree I let it." This degree varied from composer to composer. It is obvious that computer science training should be considered when evaluating a composer's response to this question. For the composer without computer literacy, the machine is seen (with good reason) mainly as an alien mechanism rife with quirky protocol. There is a clear relation between composers' ability to influence the machine and their willingness to allow "it" (usually "it" means their own programs) to influence them. For the composer with some programming inclination or skills, the computer appears to open up possibilities. It is seen as a creative medium, with the thousands of ideas it already embodies extended and controlled by the composer's programmed ideas.

Despite the variety of opinion about how the computer affects compositional thought, there was universal agreement that a computer system is a unique task environment. Whether it changes one's ideas or not, it does change one's work habits, at least by replacing calligraphy with typing and pointing. This is not all for the better. Interacting with a terminal is not the same as interacting with other musicians. This is one motivation for developing systems that lend themselves to ensemble performance.

Several research goals were repeatedly mentioned as desirable. One of these is real-time pitch detection and processing for live performance work: better ears for computer systems. Another goal is a more intelligent interactive environment for computer music composition, akin to the environment created by knowledge-based intelligent assistants that are being developed for such applications as automated programming, circuit design, and medical diagnosis. The development of deeper models of musical cognition (better knowledge about ourselves) was the goal of Marvin Minsky and others. A subtheme of the conference revolved around people's reactions to the notion of intelligent machines. The presence of the AI and Music Group from M.I.T. helped bring this to the fore.
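"Better ears" of the kind called for here usually begin with something like the following: a plain autocorrelation pitch estimator, shown as an illustrative baseline rather than as any of the systems discussed at the conference (the frame size and search range are arbitrary).

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=60.0, fmax=1000.0):
    """Estimate the fundamental frequency of one signal frame by autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])          # best period within the search range
    return sr / lag

sr = 16000
t = np.arange(1024) / sr
print(estimate_pitch(np.sin(2 * np.pi * 440.0 * t), sr))   # prints a value near 440
```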

The role and process of the listener was repeatedly discussed. Does it make sense to build in structure the listener cannot hear? Although they approached the matter from different angles, it was a point stressed by both John Chowning and Marvin Minsky throughout the week. For Chowning, it is a mistake to ignore psychoacoustic principles. Musical acoustic effects intended by a composer will not come off, and attempts to simulate natural effects such as reverberation and moving sound sources cannot succeed, unless these principles are taken into account. Minsky was concerned more with the perception of formal structure, and he chastised those composers who seemingly become preoccupied with compositional issues having negligible effect on the listening mind.


All the conference participants benefited from the efficient and professional organization of this conference by the IRCAM staff. Preprints of papers by conference participants were distributed in the form of a softbound volume at the beginning of the conference (Battier et al. 1981). This volume also contained the concert notes for the entire week. Speakers were given ample time to establish a historical context for their work, to make their point and, sometimes, to get themselves into trouble with an actual opinion. Discussion time was built into the sessions. Tod Machover, acting as moderator, often sparked discussion with carefully worded questions. He also handled the language problem with aplomb throughout the week. He translated from French to English and back whenever necessary, no mean feat. In the debates among Minsky, Dufourt, and Ferneyhough, the linguistic and cultural confrontation was jarring, and Machover did an admirable job of ameliorating Whorfian effects. (Whorf was a linguist who believed that some concepts do not translate and that languages profoundly shape the thoughts they express.) In short, Machover was indispensable.

As at the UNESCO gathering, the focus of this conference was on musical and compositional issues (Roads 1978). The social aspect was not neglected either: concerts were scheduled early (most were at 6:30 P.M.), which left evenings free for dinners with new and old friends. The opportunity for meeting fellow composers, exchanging views through discussions, and hearing pieces was extremely valuable. The diversity of the views was refreshing.

Still, attitudes will continue to evolve and there are always some voices missed, so I hope that more conferences of this type and quality can be organized in the future.

Acknowledgments

I am deeply indebted to Tod Machover and Kevin Jones for sharing their notes on selected events at the conference with me. I would also like to thank Curtis Abbott, William Kornfeld, David Wessel, Clarence Barlow, and John Chowning for their comments on and corrections to initial versions of this report. Grateful thanks must also be extended to the CADRs and to their minions.

References

Asta, V. et al. 1980. "The Real-time Digital Synthesis System 4X." Automazione e Strumentazione 28(2):119-133.

Barlow, C. 1980. "Bus Journey to Parametron." Feedback Papers 21/23:1-124.

Battier, M. et al. 1981. "Le compositeur et l'ordinateur." Paris: IRCAM.

Boulez, P. 1977. "Invention/Recherche." In Passage du XXe siècle, 1re partie. Paris: Centre Georges Pompidou.

Chowning, J. 1980. "The Synthesis of the Singing Voice." In Sound Generation in Winds, Strings, and Computers, ed. J. Sundberg and E. Jansson. Stockholm: Royal Swedish Academy of Music, pp. 4-14.

Dashow, J. 1980. "Spectra as Chords." Computer Music Journal 4(1):43-52.

Gardner, M. 1977. "Mathematical Games: White and Brown Music, Fractal Curves, and One-over-f Fluctuations." Scientific American 38(4):16-31.

Hiller, L., in press. "Composing with Computers: A Progress Report." Computer Music Journal 5(4).

Lansky, P. 1981. "Imagination and Linear Prediction." In Proceedings of the 1981 International Computer Music Conference, ed. H. S. Howe, Jr. San Francisco: Computer Music Association, pp. 379-381.

Laske, O. 1981. "Subscore Manipulation as a Tool for Compositional and Sonic Design." In Proceedings of the 1981 International Computer Music Conference, ed. H. S. Howe, Jr. San Francisco: Computer Music Association, pp. 2-21.

Machover, T., forthcoming. "Thoughts on Computer Music Composition." In Computer Music, ed. C. Roads and J. Strawn. Cambridge, Massachusetts: MIT Press.

McAdams, S., and A. Bregman. 1979. "Hearing Musical Streams." Computer Music Journal 3(4):26-43.

Minsky, M. 1981. "Music, Mind, and Meaning." Computer Music Journal 5(3).

Moorer, J. A., et al. 1979. "The 4C Machine." Computer Music Journal 3(3):16-24.

Plomp, R., and W. Levelt. 1965. "Tonal Consonance and Critical Bandwidth." Journal of the Acoustical Society of America 38:548.

Roads, C. 1978. "The UNESCO Workshop on Computer Music at Aarhus, Denmark." Computer Music Journal 2(3):30-32. Reprinted in M. Battier and B. Truax, eds. 1980. Computer Music/Composition Musicale par Ordinateur. Ottawa: Canadian Commission for UNESCO, pp. xx-xxvii.
