Computer-Aided Music Composition With LSTM Neural Network and Chaotic Inspiration


Computer-Aided Music Composition with LSTM Neural Network and Chaotic Inspiration

    Andres E. Coca1, Debora C. Correa2 and Liang Zhao1

Abstract: In this paper, a new neural network system for the composition of melodies is proposed. The Long Short-Term Memory (LSTM) neural network is adopted as the neural network model. We include an independent melody as an additional input in order to provide an inspiration source to the network. This melody is given by a chaotic composition algorithm and works as an inspiration to the network, enhancing the subjective measure of the composed melodies. As the chaotic system we use the Henon map with two variables, which are mapped to pitch and rhythm. We adopt a measure of the degree of melodiousness (Euler's gradus suavitatis) of the output melody, which is compared with a reference value. By varying a specific parameter of the chaotic system, we can control the complexity of the chaotic melody. The system runs until the degree of melodiousness falls within a predetermined range.

    I. INTRODUCTION

The reproduction and generation of music has attracted researchers for a long time. Music composition by computers, especially, dates back to the 1950s, with the use of Markov chains to generate melodies. Since then, several studies have been conducted in order to achieve a computational music composition system that could, as much as possible, capture the human mind, skills, and creativity [1].

For musical computation, artificial neural networks (ANNs), as well as other systems that involve machine learning, are able to learn patterns and features present in the melodies of the training set and to generalize them to compose new melodies. In this context, we can find many approaches that use ANNs in music learning and composition [2]. Among the different ANNs, recurrent neural networks (RNNs) are particularly considered for music computation, since they contain feedback units and delay operators, which allow the incorporation of nonlinearity and dynamical aspects into the model. This makes RNNs especially interesting mechanisms for the analysis of temporal series. RNNs are usually trained with the Back-Propagation-Through-Time (BPTT) algorithm, which implements a gradient-descent-based learning method. However, these gradient methods share a problem: over time, as gradient information is passed backward to update weights whose values affect later outputs, the error/gradient information is continually decreased or increased by weight-update scalar values [3]. This means that the temporal evolution of the path integral over all error signals flowing back in time exponentially

1 Andres E. Coca and Liang Zhao are with the Institute of Mathematics and Computer Science, University of Sao Paulo, Sao Carlos - SP, Brazil, email: {aecocas,zhao}@icmc.usp.br.

2 Debora C. Correa is with the Institute of Physics, University of Sao Paulo, Sao Carlos - SP, Brazil, email: debcris.cor@gmail.com.

depends on the magnitude of the weights. For this reason, standard recurrent neural networks fail to learn long time lags between relevant input and target events [4].

To overcome this drawback, the Long Short-Term Memory (LSTM) network was designed [5]. The LSTM network is an alternative recurrent neural network architecture inspired by human memory systems. The principal motivation is to solve the vanishing-gradient problem by enforcing constant error flow through constant error carrousels (CECs) within special units, permitting non-decaying error flow back in time [5]. The CECs are the units responsible for keeping the error signal. This enables the network to learn important data and store them without degradation over long periods of time.

Studies on musical composition using LSTM found in the literature start in 2002 with the paper of Eck and Schmidhuber [6]. The authors used the LSTM to learn musical forms of the blues. In a second stage, the network is conducted to compose new melodies in the blues style without losing the relevant structures over long time spans. Two additional experiments were further realized: in the first one, the network learns a sequence of chords and, in the second, it learns to play the chords along with the melody [7]. In [8], a computer program named ALICE used the LSTM network to compose music based on a selection of pieces.

Recently, the LSTM network was used for music composition in [9], where the authors compare the performance of MLPs trained with the BPTT algorithm and LSTM networks in the task of composing new melodies. They used a nature-based inspiration to complement the application stage, in order to enhance the variability of the composed melodies. However, this method has some disadvantages: it requires the creation and storage of images in a dataset, and it requires the preprocessing of each image, which is time-consuming. A similar work is [10], where an RNN is trained with the BPTT algorithm and the inspiration is provided by a chaotic dynamical system.

In the context of music composition, it is usual and important to determine the quality of the final melody. In other words, it might be interesting to tune the characteristics of the composed melody with respect to some set of criteria that evaluate the melody as good. However, previous works do not allow the subjective characteristics of the created melodies to be specified beforehand. In this manner, the aim of this paper is to propose an LSTM neural network based system for computer-aided musical composition in which the quality of the composed melody is evaluated. The inspiration source comes from a chaotic system. Due to the

    Proceedings of International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013

    978-1-4673-6129-3/13/$31.00 2013 IEEE 270

characteristics of chaotic systems, it is possible to get infinite variations through arbitrarily small changes in the parameters of the system. As a consequence, we can get many melodies with different characteristics and complexity. For establishing the chaotic melody, we use the algorithm proposed by Coca et al. [11], and the Henon map, which has two variables: the first variable is used for determining the pitch and the second the rhythm. A specific parameter of the chaotic system is varied, making it possible to control the complexity of the chaotic melody. The musical composition system is able to compose new melodies with a predefined degree of melodiousness. This subjective property is obtained for each output melody and compared with a reference value. The difference between the output and the reference value is used to modify a parameter of the dynamical system. By doing so, we can define a strategy to control the pitch-related complexity of the inspiration melody. The system runs until the value of the characteristic falls within a range of the reference value. In this way, the characteristic of the output melody is controlled by the chaotic inspiration source.

The overall organization of the paper is as follows: Section II describes the LSTM network; Section III presents the algorithm of musical composition; Sections IV and V show the full compositional system and the melodic measures used; simulation results are shown in Section VI. Finally, Section VII presents the principal conclusions and possibilities for future work.

II. THE TRADITIONAL LSTM NETWORK

Traditional RNNs are usually characterized as networks in which the outputs of the hidden units at time $t$ are fed back to the same units at time $t+1$ (Fig. 1(a)). However, as mentioned in the Introduction, they can suffer from the vanishing-gradient problem [4]. The LSTM network was developed to minimize this drawback by offering the memory block: the basic unit of the hidden layer (Fig. 1(b)).

A memory block includes one or more memory cells, a pair of gating units, and output units (according to the number of memory cells). Figure 2 details a memory block with one memory cell. We can also observe a self-connected linear unit called the Constant Error Carrousel (CEC) in each memory cell. This unit is fundamental for passing the error signal through the network, minimizing the vanishing-error problem. More details about the relationship between the CECs and the error/gradient signals can be found in [5].

A. Forward Pass

The forward pass of the LSTM network is quite similar to the equivalent step in traditional RNNs, except that now the hidden unit comprises sub-units through which the signal is propagated. Due to space limitations, we only briefly describe the forward pass. The reader is referred to references [4] and [5] for more details about the LSTM forward and backward algorithms.

Consider again Figure 2. Basically, the cell state is updated according to its current state and three connections: $net_{c_j}$, which reflects the signals from the input layer; and


Fig. 1. ANN architectures. (a) The RNN network with one recurrent hidden layer. (b) The LSTM network with memory blocks in the hidden layer (only one is shown).

    Fig. 2. The LSTM memory block with one memory cell.


$net_{in_j}$ and $net_{out_j}$, which represent the connections from the input and output gates, respectively.

As usual, a time step involves updating all units and computing the error signals in order to adjust the weights. The activation of the input gate ($y^{in_j}$) and the output gate ($y^{out_j}$) is generally defined by a sigmoid function applied to a weighted sum of the inputs $net_{in_j}(t)$ and $net_{out_j}(t)$, both received via recurrent links from the memory blocks and from the external inputs to the network:

$$net_{in_j}(t) = \sum_m w_{in_j m}\, y^m(t-1) \qquad (1)$$

and

$$y^{in_j}(t) = f_{in_j}(net_{in_j}(t)) \qquad (2)$$

Accordingly:

$$net_{out_j}(t) = \sum_m w_{out_j m}\, y^m(t-1) \qquad (3)$$

and

$$y^{out_j}(t) = f_{out_j}(net_{out_j}(t)) \qquad (4)$$

where $w_{in_j m}$ and $w_{out_j m}$ are the unit weights for the gates. The memory blocks are indexed by $j$; memory cells in block $j$ by $v$; and $w_{lm}$ is the weight of the connection from unit $m$ to unit $l$.

The gates also use a logistic sigmoid function $f(x)$ (generally with range $[0,1]$) as follows:

$$f(x) = \frac{1}{1+e^{-x}} \qquad (5)$$

The inputs of cell $c_j^v$ are multiplied by weights $w_{c_j^v m}$ (from input $m$ to cell $c_j^v$) as follows:

$$net_{c_j^v}(t) = \sum_m w_{c_j^v m}\, y^m(t-1) \qquad (6)$$

Following the sequence, a sigmoid function $g(x)$ ranging from $-2$ to $2$ is applied to $net_{c_j^v}(t)$:

$$g(x) = \frac{4}{1+e^{-x}} - 2 \qquad (7)$$

Consequently, the internal state of memory cell $c_j^v$ is:

$$s_{c_j^v}(0) = 0; \quad s_{c_j^v}(t) = s_{c_j^v}(t-1) + y^{in_j}(t)\, g(net_{c_j^v}(t)) \qquad (8)$$

for $t > 0$. Finally, the cell output is:

$$y^{c_j^v}(t) = y^{out_j}(t)\, h(s_{c_j^v}(t)) \qquad (9)$$

where $h(x)$ is usually a sigmoid function ranging over $[-1, 1]$:

$$h(x) = \frac{2}{1+e^{-x}} - 1 \qquad (10)$$

Now consider a network with a standard input layer, a hidden layer consisting of memory blocks, and a standard

Fig. 3. The LSTM network with chaotic inspiration.

output layer. The typical network output $y^k(t)$ is the weighted sum of $y^m(t)$ propagated through a sigmoid function $f_k$ (Eq. (5)) as follows:

$$net_k(t) = \sum_m w_{km}\, y^m(t-1) \qquad (11)$$

$$y^k(t) = f_k(net_k(t)) \qquad (12)$$

where $m$ ranges over all units feeding the output units (generally, all the cells in the hidden layer and the input units).
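As a concrete illustration, the forward pass of Eqs. (1)-(10) for a single memory block with one cell can be sketched in a few lines of Python. The weight names, the squashing-function names, and the random toy inputs below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def f(x):                       # Eq. (5): logistic sigmoid in [0, 1]
    return 1.0 / (1.0 + np.exp(-x))

def g(x):                       # Eq. (7): sigmoid scaled to [-2, 2]
    return 4.0 / (1.0 + np.exp(-x)) - 2.0

def h(x):                       # Eq. (10): sigmoid scaled to [-1, 1]
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def memory_block_step(y_prev, s_prev, w_in, w_out, w_c):
    """One time step for a block with a single cell.

    y_prev : activations of all units feeding the block at t-1
    s_prev : internal cell state s(t-1)
    """
    y_in  = f(w_in  @ y_prev)        # Eqs. (1)-(2): input gate
    y_out = f(w_out @ y_prev)        # Eqs. (3)-(4): output gate
    net_c = w_c @ y_prev             # Eq. (6): cell input
    s = s_prev + y_in * g(net_c)     # Eq. (8): CEC accumulates the state
    y_c = y_out * h(s)               # Eq. (9): cell output
    return y_c, s

# Tiny usage example with made-up weights
rng = np.random.default_rng(0)
y_prev = rng.standard_normal(4)
w_in, w_out, w_c = rng.standard_normal((3, 4))
y_c, s = memory_block_step(y_prev, 0.0, w_in, w_out, w_c)
```

Note that the cell output is necessarily in $[-1, 1]$, since the output gate lies in $[0, 1]$ and $h$ in $[-1, 1]$.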

B. The LSTM network with chaotic inspiration

We can adapt the input layer of the LSTM network by dividing the input units into two groups: the melodic units and the chaotic units. The melodic units represent the melody to be learned, and the chaotic units represent the inspiration given by the chaotic system.

Figure 3 shows the resulting LSTM recurrent network with two memory blocks. One group of input units receives the input melody, and the other receives the chaotic inspiration generated by the composition algorithm described in Section III; the remaining units are the network outputs. The cycles-of-thirds representation, in the form of seven bits, is used as the input data [12]. Each hidden unit represents a memory block with one memory cell, as illustrated in Figure 2.

III. CHAOTIC INSPIRATION ALGORITHM

The algorithm developed in [11] uses the numerical solution of a nonlinear dynamical system. In this paper, a dynamical system with two variables is used. The first variable $x(t)$ is associated with the extraction of musical pitches and the second variable $y(t)$ with the rhythmic durations for the input inspiration. The data transformations of these variables are described in the following subsections.


A. Extraction of Frequencies and Musical Notes

The extraction of frequencies and musical notes is divided into three steps as follows:

1) Part 1: Generation of the Musical Scale: With the tuning factor $f_a$ (the inverse of the number of tone divisions, $f_a = 1/d$) and the number of octaves $n_o$, a vector s of dimension $(n_T + 1)$ can be constructed. It contains the frequency ratios of an equal temperament scale and is generated by the following geometric sequence:

$$s_i = 2^{(i-1) f_a / 6}, \quad 0 < i \le n_T + 1 \qquad (13)$$

where $n_T = 6 n_o / f_a$ is the total number of notes of a chromatic scale with tuning factor $f_a$ and $n_o$ octaves. With the numbers of semitones, tones, and tone-and-a-half intervals that form the architecture of the desired musical scale, the components of the binary membership vector v of the desired scale are determined, in which an element 1 or 0 indicates, respectively, whether or not a note of the desired scale is contained in the chromatic scale. The membership vector v acts as a filter on the vector s, keeping only the intervals that belong to the scale. The result of this operation is the vector $e = v \odot s$, with $0 < i \le (n_T + 1)$. The elements of e equal to zero are eliminated. Thus, we obtain a vector g with $n_g$ elements.
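A minimal sketch of this scale-generation step, assuming a semitone tuning factor ($f_a = 0.5$, i.e. 12 notes per octave) and a hypothetical major-scale membership pattern; the function name is ours:

```python
import numpy as np

def scale_ratios(fa=0.5, octaves=1, membership=None):
    """Eq. (13): chromatic frequency ratios, optionally filtered by a
    binary membership vector to obtain the scale vector g."""
    n_t = int(6 * octaves / fa)             # total chromatic notes
    i = np.arange(1, n_t + 2)               # 0 < i <= n_t + 1
    s = 2.0 ** ((i - 1) * fa / 6.0)         # geometric sequence
    if membership is None:
        return s
    v = np.tile(membership, octaves)        # repeat pattern per octave
    v = np.append(v, 1)                     # keep the octave endpoint
    e = v * s                               # filter: e = v (.) s
    return e[e != 0]                        # drop zeros -> vector g

# One octave, C-major-style membership pattern (an assumption for the demo)
major = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
g = scale_ratios(0.5, 1, major)
```

With $f_a = 0.5$ and one octave, the full chromatic vector has 13 entries running from ratio 1 to ratio 2, and the filtered major scale keeps 8 of them.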

2) Part 2: Variable Normalization: The variable $x(t)$ should be normalized with respect to the vector g, because the range of $x(t)$ is different from the range of the data in g. The normalization process, defined as $\bar{x} = \eta(x(t), g)$, consists of scaling and translating the numerical solution of the variable $x(t)$ in order to adapt it to the data vector g. In this way, we obtain the normalized variable $\bar{x}(t) = q\,x(t) + r$, where $q$ is a scaling factor, calculated by the following formula:

$$q = \frac{2^{n_o} - 1}{\max(x(t)) - \min(x(t))} \qquad (14)$$

where $n_o$ indicates the extent of the scale in octaves (note that $\max(g) = 2^{n_o}$ and $\min(g) = 1$). Similarly, the variable $r$ is a translation factor, determined by the following equation:

$$r = -q\,\min(x(t)) + \min(g) = -q\,\min(x(t)) + 1 \qquad (15)$$

In this way, we get a variable in the range $1 \le \bar{x}(t) \le 2^{n_o}$.

3) Part 3: Mapping to the Closest Value: Once the variable is normalized, we determine the closest value in the vector g for each value $\bar{x}(t_i)$, obtaining a match between the data vector and the notes of the musical scale specified in g. Then a matrix D of dimension $n_s \times n_g$ is built with Eq. (16), such that $n_s$ represents the number of elements of a piece of the numerical solution of $\bar{x}$ and $n_g$ is the number of musical notes; the indexes $i$ and $j$ are in the ranges $0 < i \le n_s$ and $0 < j \le n_g$, respectively.

$$D_{i,j} = \begin{cases} 0, & \text{if } |\bar{x}(t_i) - g_j| \le u \\ |\bar{x}(t_i) - g_j|, & \text{if } |\bar{x}(t_i) - g_j| > u \end{cases} \qquad (16)$$

where the threshold value $u$, calculated by $u = 2^{f_a/6} - 1$ (with a tuning factor $f_a = 0.5$), is equal to the minimal interval generator. Then we generate a new vector h of size $n_s \times 1$ that holds the position of the minimum value in each row of the matrix D. Knowledge of the tonic is required for the conversion of the variable $\bar{x}(t)$ to the musical space. The tonic frequency $f_{T,O}$, for the musical tone $T$ in the octave $O$, can be obtained as follows:

$$f_{T,O} = 55 \cdot 2^{\,O + \frac{T-10}{12}} \qquad (17)$$

With the indexes stored in h and the tonic frequency $f_{T,O}$, we can calculate the frequencies of the musical notes corresponding to the variable $\bar{x}(t)$ as $f_i = g_{h_i}\, f_{T,O}$, $0 < i \le n_s$. In order to view the score of the melody, these frequencies must be converted to the values of musical notes in the standard MIDI format (Musical Instrument Digital Interface). For this purpose, we use Eq. (18), where $f$ is the frequency in Hz and $m$ is the corresponding MIDI value.

$$m = 69 + \frac{12}{\log 2} \log\left(\frac{f}{440}\right) \qquad (18)$$
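Equation (18) is straightforward to verify numerically; a small sketch (the function name is ours, not the paper's):

```python
import math

def freq_to_midi(f_hz):
    """Eq. (18): frequency in Hz to (possibly fractional) MIDI value,
    with A4 = 440 Hz mapped to MIDI note 69."""
    return 69 + 12 * math.log(f_hz / 440.0) / math.log(2)
```

For example, 440 Hz maps to 69 (A4), 880 Hz to 81 (one octave above), and middle C (about 261.63 Hz) rounds to MIDI note 60.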

B. Variable for Rhythm

Here we wish to relate the variable $y(t)$ of the nonlinear dynamical system to rhythmic values in units of time by using the normalization process $\bar{y} = \eta(y(t), d)$. This normalization is made with respect to a vector d of size $1 \times 7$ that contains the numeric values representing the rhythmic notes (an integer between 6 and 0 is used to represent the interval from the whole note to the sixty-fourth note). In this step, the mapping of Part 3 is unnecessary: given the direct relationship between the normalized $\bar{y}$ and the rhythmic values, applying the round function suffices. The vectors y and x are used to generate the matrix of notes M and, with it, the .mid file.

The whole process of generating a melody with a chaotic system is summarized in Fig. 4. It should be noted that only the first and second parts of the diagram are used in this paper, which includes all the steps on the left and center of the diagram, but not the steps "Generate Matrix of Notes" and "Write to the MIDI file".
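The rhythm normalization can be sketched as follows. The integer range [0, 6] for the seven duration classes and the helper name are our reading of the text, used here for illustration only:

```python
import numpy as np

def normalize_rhythm(y, d_min=0, d_max=6):
    """Scale y(t) into the integer duration range [d_min, d_max]
    (whole note ... sixty-fourth note) and round to the nearest value."""
    y = np.asarray(y, dtype=float)
    q = (d_max - d_min) / (y.max() - y.min())   # scaling factor
    r = -q * y.min() + d_min                    # translation factor
    return np.rint(q * y + r).astype(int)       # round to nearest class

# Toy chaotic samples; min maps to 0, max maps to 6
durations = normalize_rhythm([0.1, -0.3, 0.25, 0.0])
```

This mirrors Eqs. (14)-(15) applied to the rhythm vector, with rounding replacing the nearest-note search of Part 3.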

IV. MUSICAL COMPOSITION SYSTEM WITH LSTM, CHAOTIC INSPIRATION AND CONTROL STRATEGY

The framework proposed here for music composition has the flexibility to adjust a subjective characteristic of the melody (in this case, the degree of melodiousness) according to a predefined reference value. It is based on the idea that a neural network can learn, in general terms, the characteristics of a melody, and that these can be complemented by an additional melody named the inspiration source. In [10] it is shown that, by controlling the number of notes of the inspiration melody, it is possible to vary the pitch-related complexity, the similarity of pitch and rhythm, and the degree of originality of the output melody.


Fig. 4. Diagram of the chaotic algorithm [11].

Figure 5 shows the block diagram of the compositional system: $r$ is the reference value of the degree of melodiousness; $w$ is the melodic measure of the output melody; $u$ is the output of the proportional controller, which adjusts the parameter of the chaotic system; C is the matrix of solution variables of the chaotic attractor; $M^{(1)}$ is the melodic matrix of the melody composed by the chaotic musical composition algorithm; $M^{(2)}$ is the melodic matrix of the input melody; and $M^{(3)}$ is the melodic matrix of the melody composed by the LSTM recurrent neural network.

    Fig. 5. Diagram of melodic generation system.

The error variable is $e = r - w$. This difference is used in the proportional controller in order to obtain an action $u$, such that $u = K_p\,e = K_p(r - w)$, which changes the parameter of the chaotic system. Therefore the input of the chaotic system is:

$$C = S(x_0, y_0, a + u) \qquad (19)$$

where $x_0$ and $y_0$ are the initial conditions of the chaotic system and $a$ its parameter, $S$ denoting the numerical solution. The solution of the system is stored in the matrix $C_{N \times 2}$ ($N$ is the number of iterations). The solution variables are separated, i.e., $x(t) = C_1$ and $y(t) = C_2$. Each solution variable of the chaotic system is mapped to a musical aspect, namely the pitch and rhythm of the resulting melody. In this case, $x(t)$ is used to create the pitches and $y(t)$ the rhythmic values. Let $A(\cdot)$ be the chaotic composition algorithm. Then, taking as inputs the chaotic vectors $x(t)$ and $y(t)$ and the set of musical properties (like scale, number of octaves, initial octave, and so on) represented by $P$, the output of the chaotic composition block is the melodic matrix $M^{(1)}$:

$$M^{(1)} = A(x(t), y(t), P) \qquad (20)$$

With the melodic matrix of the chaotic melody $M^{(1)}$, the vectors $x^{(1)} = M^{(1)}_1$ and $y^{(1)} = M^{(1)}_2$ are obtained. These are used as chaotic inspiration inputs which, together with the pitch and rhythm vectors of the input melody, $x^{(2)}$ and $y^{(2)}$, are used as inputs to the LSTM recurrent neural network, yielding the melodic matrix of the output melody $M^{(3)}$, as shown in expression (21):

$$M^{(3)} = R(x^{(1)}, y^{(1)}, x^{(2)}, y^{(2)}, \Theta) \qquad (21)$$

where $R$ represents the LSTM network and $\Theta$ represents the set of its input parameters. Finally, let $\mu(\cdot)$ be a function that returns the value of the degree of melodiousness of the output melody. Thus $w = \mu(M^{(3)})$, which is the value that will be fed back to the input.
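The closed loop of Fig. 5 reduces to a simple proportional-control iteration. The sketch below uses a stand-in function for the whole composition-plus-measurement chain (chaotic algorithm, LSTM, and melodiousness measure); only the control logic mirrors the paper, everything else is a placeholder:

```python
def run_loop(r, a0, kp, compose_and_measure, tol=0.05, max_iter=1000):
    """Adjust parameter a until the measured melodiousness w lies
    within a relative tolerance of the reference r."""
    a = a0
    for _ in range(max_iter):
        w = compose_and_measure(a)   # chaotic melody -> LSTM -> measure
        e = r - w                    # error e = r - w
        if abs(e) <= tol * r:        # stop inside the tolerance band
            return a, w
        a = a + kp * e               # proportional action u = Kp * e
    return a, w

# Toy stand-in: pretend melodiousness grows linearly with a
a, w = run_loop(r=5.0, a0=0.1, kp=0.01,
                compose_and_measure=lambda a: 20 * a)
```

With the linear stand-in, the loop settles near $a = 0.25$, where the simulated measure reaches the reference value 5 within the 5% band.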

V. MELODIC MEASURES

In this section, the subjective measures used to quantify the characteristics of the melodies composed by the network and by the chaotic system are described. We adopted the melodic complexity in order to find the relation between one of the parameters of the chaotic system and the generated melody. To quantify the melody generated by the neural network, we use the degree of melodiousness as the target measure.

A. Expectancy-based model of melodic complexity

This model is based on melodic expectancy, since the components of the model are derived from melodic expectancy theory. The melodic complexity can be tonal (pitch-related), rhythmic (rhythm-related) or joint (a combination of pitch- and rhythm-related complexity), where high values indicate high complexity [13].

B. Degree of melodiousness

According to L. Euler (1707-1783), the degree of melodiousness is related to the complexity of the mental calculation performed by a listener, which is inversely proportional to the pleasantness of his experience [14]. For each interval $i$ ($i \in [1, I]$) of the melody ($I$ is the total number of intervals), a new value is calculated as $n_i = p(i)\,q(i)$, where $p(i)/q(i)$ is the frequency ratio of the current interval. In other words, $p(i)$ and $q(i)$ are the numerator and the denominator of the frequency ratio of the $i$th interval, respectively. Thus, $n_i$ can be decomposed into a product of powers of distinct prime numbers: $n_i = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$, where $p_j$ represents the $j$th prime number, $a_j$ is the number of appearances of the $j$th prime number, and $p_k$ is the largest


prime number in $n_i$. The degree of melodiousness $\Gamma(n_i)$ of the interval value $n_i$ and the degree of melodiousness $G$ of the given melody, which is defined as the mean value of $\Gamma(n_i)$, are defined by Eqs. (22) and (23), respectively.

$$\Gamma(n_i) = \sum_{j=1}^{k} a_j\,(p_j - 1) + 1 \qquad (22)$$

$$G = \frac{1}{I} \sum_{i=1}^{I} \Gamma(n_i) \qquad (23)$$

The degree of melodiousness is low if the decomposition contains prime numbers with low values. On the other hand, it is high if it contains prime numbers with high values and/or many prime numbers.
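Euler's measure is easy to compute once the interval ratios are known as reduced fractions; a sketch of Eqs. (22)-(23) follows (the trial-division factorization and the function names are our choices for illustration):

```python
from fractions import Fraction

def prime_factors(n):
    """Return {prime: exponent} for n >= 1 by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def gradus(ratio):
    """Eq. (22) for one interval given as a reduced Fraction p/q:
    Gamma(n) = 1 + sum_j a_j * (p_j - 1), with n = p * q."""
    n = ratio.numerator * ratio.denominator
    return 1 + sum(a * (p - 1) for p, a in prime_factors(n).items())

def melodiousness(ratios):
    """Eq. (23): mean gradus over the melody's intervals."""
    return sum(gradus(r) for r in ratios) / len(ratios)

# Octave 2/1 has gradus 2; the perfect fifth 3/2 has gradus 4
G = melodiousness([Fraction(2, 1), Fraction(3, 2)])
```

These two values match Euler's classical table: the octave ($n = 2$) scores 2 and the fifth ($n = 6 = 2 \cdot 3$) scores $1 + 1 + 2 = 4$, so the toy melody gets $G = 3$.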

VI. EXPERIMENTAL RESULTS

In this section, we describe the experiments and the results. Our main objective is to automatically compose a melody with a predefined degree of melodiousness. Two LSTM networks, each containing two memory blocks with one memory cell, are used to train pitch and rhythm individually. The network for pitch has 16 inputs and 9 outputs. The first 9 input units correspond to the notes of the input melody, which are represented with 7 bits for pitch and 2 bits for the octave. The remaining 7 input units correspond to the chaotic notes, in the 7-bit representation as well. The network for rhythm has 14 inputs, 7 units for the note of each melody. Both networks were trained with a learning rate of 0.1. A discrete chaotic system with two variables, the Henon map, was used; it is described by the system of equations (24).

$$x_{t+1} = 1 + y_t - a\,x_t^2, \qquad y_{t+1} = b\,x_t \qquad (24)$$

where $x$ and $y$ are the variables of the system, $a$ and $b$ the parameters, and $t$ the iteration index. Figure 6 illustrates the bifurcation diagram of the Henon map when varying the parameter $a$ between 0 and 1.4 and keeping $b$ fixed at 0.3.

    Fig. 6. Bifurcation diagram of Henon map.
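The Henon map of Eq. (24) can be iterated directly. The sketch below discards an initial transient before collecting the orbit, a common convention that the paper does not specify; the parameter values follow the experiment ($b = 0.3$, $a$ inside the ascendant region):

```python
def henon(a, b=0.3, x0=0.0, y0=0.0, n=1000, transient=100):
    """Return n points of the (x, y) orbit of the Henon map,
    after discarding an initial transient."""
    x, y = x0, y0
    orbit = []
    for t in range(n + transient):
        x, y = 1 + y - a * x * x, b * x      # Eq. (24)
        if t >= transient:
            orbit.append((x, y))
    return orbit

orbit = henon(a=0.2)   # a regular (non-chaotic) regime inside [0, 0.34]
```

At $a = 0.2$ the map settles onto a stable fixed point near $x \approx 1.089$; larger values of $a$ (towards 1.4) produce the period-doubling route to chaos seen in Fig. 6.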

Initially, it is necessary to determine the regions where the pitch-related complexity of the melodies generated by the Henon map is ascendant. Figure 7 shows the pitch-related complexity for chaotic melodies generated by the Henon map, varying the parameter $a$ between 0 and 1.4 with $b$ fixed at 0.3. Figure 8 presents the ascendant region (between 0 and 0.34) to be used in the full system.

    Fig. 7. Pitch-related complexity for melodies generated with Henon map.

    Fig. 8. Ascendant region of pitch-related complexity.

Subsequently, the input melody is chosen. Figure 9 shows the first measures of the Piano Sonata No. 16 in C major by Mozart. This melody is used to train the LSTM network.

    Fig. 9. Measures of the Sonata No. 16 in C major K. 545 by W.A. Mozart.

Figure 10 shows the degree of melodiousness of the melodies composed by the LSTM network when varying the parameter $a$ of the Henon map between 0 and 0.4 with parameter $b$ fixed at 0.3. The dotted line is the degree of melodiousness of the input melody and the solid line is the degree of melodiousness of each of the composed melodies for different values of the parameter $a$. We can observe small regions where the parameter is proportional to the melodiousness.

The region between 0.1 and 0.2 covers almost the entire range of melodiousness, i.e., a range of 8 degrees (from 1 through 9). The proportional controller is then applied with a gain of $K_p = 0.01$, and the initial condition for the parameter $a$ is set to 0.1. The compositional system is executed


Fig. 10. Degree of melodiousness versus the parameter $a$ of the Henon map.

to compose a melody with a degree of melodiousness equal to 5, with a margin of error of 5%. Figure 11 shows the evolution of the parameter $a$; the target melodiousness of 5 is reached after 672 iterations.

Fig. 11. Evolution of the parameter $a$ until the melody reaches the predetermined degree of melodiousness (5).

The melody shown in Fig. 12 was composed using the full compositional system; it has a degree of melodiousness equal to 5.

Fig. 12. Composed melody with a degree of melodiousness equal to 5.

VII. CONCLUSIONS

In this paper, an LSTM recurrent neural network is used in the context of melody generation with a predefined degree of melodiousness. The LSTM network was modified with an additional input that receives a chaotic inspiration melody. The chaotic inspiration is generated by a chaotic dynamical system through a special composition algorithm. Taking advantage of the sensitivity of chaotic dynamical systems to initial conditions and to parameter configuration, several melodies with different degrees of complexity can be used as the inspiration source. It was verified that the complexity of the chaotic melody can be modified by small changes in a parameter of the chaotic system. In this work we have explored the two-dimensional Henon map, where the first variable was used to generate the pitches and the second the rhythmic durations. The melody composed by the network was quantified using a measure that reflects the degree of melodiousness. We found small regions where there is some degree of proportional relationship between the chaotic parameter and the melodiousness. With this information, we can use a control strategy that modifies the parameter, aiming at obtaining a melody with a degree of melodiousness close to the predefined reference value. These are first efforts to show that it is possible to automatically control the subjective characteristics of a melody. However, the system requires several empirical settings and the results are approximate. As future work, we intend to use chaotic systems with three dimensions, with the third variable being capable of generating musical dynamics. It is also proposed to use different types of melodies and different types of chaotic dynamical systems, such as fractals and oscillators. In addition, we would like to use other control strategies, such as the proportional-integral-derivative (PID) controller or, due to the behavior of chaotic systems, a nonlinear control strategy such as the OGY method, Fixed Point Induction Control (FPIC), or Time-Delayed Feedback (TDAS). Finally, we will also compare the results obtained with LSTM and BPTT, as well as other types of networks.

REFERENCES

[1] R. Moore, Elements of Computer Music. Prentice Hall, 1998.

[2] C. Chen and R. Miikkulainen, "Creating Melodies with Evolving Recurrent Neural Networks," Proceedings of the International Joint Conference on Neural Networks (IJCNN'01), pp. 2241-2246, 2001.

[3] B. Pearlmutter, "Gradient calculations for dynamic recurrent neural networks: A survey," IEEE Transactions on Neural Networks, vol. 6, no. 5, pp. 1212-1228, 1995.

[4] S. Hochreiter, Y. Bengio, P. Frasconi and J. Schmidhuber, "Gradient flow in recurrent nets: The difficulty of learning long-term dependencies," in A Field Guide to Dynamical Recurrent Networks, IEEE Press, 2001.

[5] F. Gers, Long Short-Term Memory in Recurrent Neural Networks, Ph.D. thesis, Ecole Polytechnique Federale de Lausanne (EPFL), 2001.

[6] D. Eck and J. Schmidhuber, "Finding Temporal Structure in Music: Blues Improvisation with LSTM Recurrent Networks," Proceedings of the 2002 IEEE Workshop on Neural Networks for Signal Processing XII, pp. 747-756, 2002.

[7] D. Eck and J. Schmidhuber, "Learning the Long-Term Structure of the Blues," Proceedings of the International Conference on Artificial Neural Networks, pp. 284-289, 2002.

[8] A. Brandmaier, ALICE: An LSTM-Inspired Composition Experiment, Diplomarbeit, Computer Science programme, Technical University of Munich, 2002.

[9] D. Correa, J. Saito and S. Abib, "Composing Music with BPTT and LSTM Networks: Comparing Learning and Generalization Aspects," Proceedings of the 2008 11th IEEE International Conference on Computational Science and Engineering Workshops (CSEW'08), pp. 95-100, 2008.

[10] A. Coca, F. Roseli and L. Zhao, "Generation of composed musical structures through recurrent neural networks based on chaotic inspiration," Proceedings of the 2011 International Joint Conference on Neural Networks (IJCNN'11), pp. 3220-3226, 2011.

[11] A. Coca, G. Olivar and L. Zhao, "Characterizing Chaotic Melodies in Automatic Music Composition," CHAOS - An Interdisciplinary Journal of Nonlinear Science, vol. 20, no. 3, p. 033125, 2010.

[12] J. Franklin, "Recurrent Neural Networks for Music Computation," INFORMS Journal on Computing, vol. 18, no. 3, pp. 321-338, 2006.

[13] T. Eerola and A. C. North, "Expectancy-based model of melodic complexity," Proceedings of the Sixth International Conference on Music Perception and Cognition (ICMPC), 2000.

[14] L. Euler, Tentamen Novae Theoriae Musicae Ex Certissimis Harmoniae Principiis Dilucide Expositae. Saint Petersburg Academy, 1739.

