Real-time Human Interaction with Supervised Learning Algorithms for Music Composition and Performance

Rebecca Fiebrink
Princeton University / University of Washington


Page 1: Rebecca Fiebrink Princeton University / University of Washington

Real-time Human Interaction with Supervised Learning Algorithms for Music Composition and Performance
Rebecca Fiebrink
Princeton University / University of Washington

Page 2: Rebecca Fiebrink Princeton University / University of Washington

2

Page 3: Rebecca Fiebrink Princeton University / University of Washington

3

Page 4: Rebecca Fiebrink Princeton University / University of Washington

4

Page 5: Rebecca Fiebrink Princeton University / University of Washington

5

function [x flag hist dt] = pagerank(A,optionsu)
[m n] = size(A);
if (m ~= n)
    error('pagerank:invalidParameter', 'the matrix A must be square');
end;
options = struct('tol', 1e-7, 'maxiter', 500, 'v', ones(n,1)./n, ...
    'c', 0.85, 'verbose', 0, 'alg', 'arnoldi', ...
    'linsys_solver', @(f,v,tol,its) bicgstab(f,v,tol,its), ...
    'arnoldi_k', 8, 'approx_bp', 1e-3, 'approx_boundary', inf, ...
    'approx_subiter', 5);
if (nargin > 1)
    options = merge_structs(optionsu, options);
end;
if (size(options.v) ~= size(A,1))
    error('pagerank:invalidParameter', ...
        'the vector v must have the same size as A');
end;
if (~issparse(A))
    A = sparse(A);
end;
% normalize the matrix
P = normout(A);
switch (options.alg)
    case 'dense'
        [x flag hist dt] = pagerank_dense(P, options);
    case 'linsys'
        [x flag hist dt] = pagerank_linsys(P, options);
    case 'gs'
        [x flag hist dt] = pagerank_gs(P, options);
    case 'power'
        [x flag hist dt] = pagerank_power(P, options);
    case 'arnoldi'
        [x flag hist dt] = pagerank_arnoldi(P, options);
    case 'approx'
        [x flag hist dt] = pagerank_approx(P, options);
    case 'eval'
        [x flag hist dt] = pagerank_eval(P, options);
    otherwise
        error('pagerank:invalidParameter', ...
            'invalid computation mode specified.');
end;

Page 6: Rebecca Fiebrink Princeton University / University of Washington

6

Page 7: Rebecca Fiebrink Princeton University / University of Washington

7

Page 8: Rebecca Fiebrink Princeton University / University of Washington

8

useful algorithms

usable interfaces and appropriate interactions

Page 9: Rebecca Fiebrink Princeton University / University of Washington

9

Machine learning algorithms?

Page 10: Rebecca Fiebrink Princeton University / University of Washington

10

Outline

• Overviews of interactive computer music and machine learning

• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions

Page 11: Rebecca Fiebrink Princeton University / University of Washington

interactive computer music

Page 12: Rebecca Fiebrink Princeton University / University of Washington

12

Interactive computer music

Diagram: a human (with microphone, sensors, control interface, etc.) produces a sensed action; the computer interprets it and generates a response (music, visuals, etc.) through audio synthesis or processing, visuals, etc.

Page 13: Rebecca Fiebrink Princeton University / University of Washington

13

Example 1: Gesture recognition

Diagram: the computer identifies the sensed action as “Gesture 1” and responds with a bass drum.

Page 14: Rebecca Fiebrink Princeton University / University of Washington

14

Example 1: Gesture recognition

Diagram: the computer identifies a different sensed action as “Gesture 2” and responds with a hi-hat.

Page 15: Rebecca Fiebrink Princeton University / University of Washington

15

Model of sensed action to meaning

Diagram: the computer uses a model to map the sensed action to a meaning, which determines the response.

Page 16: Rebecca Fiebrink Princeton University / University of Washington

16

Example 2: Continuous gesture-to-sound mappings

Page 17: Rebecca Fiebrink Princeton University / University of Washington

17

Example 2: Continuous gesture-to-sound mappings

Diagram: a human with a control interface produces a sensed action; the computer interprets it through a mapping to drive sound generation continuously.

Page 18: Rebecca Fiebrink Princeton University / University of Washington

18

A composed system

Diagram: sensed action → mapping/model/interpretation → response.

Page 19: Rebecca Fiebrink Princeton University / University of Washington

supervised learning

Page 20: Rebecca Fiebrink Princeton University / University of Washington

20

Supervised learning

Diagram: in training, training data is fed to an algorithm, which produces a model; in running, the model maps inputs to outputs.

Page 21: Rebecca Fiebrink Princeton University / University of Washington

21

Supervised learning

Diagram: training data labeled “Gesture 1”, “Gesture 2”, and “Gesture 3” is fed to the algorithm to train a model; when run, the model maps a new input to the output “Gesture 1”.
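A minimal train-then-run sketch of this pattern (not the Wekinator’s own code), assuming Python with scikit-learn and made-up two-number feature vectors:

# Training: labeled examples in, model out. Running: new inputs in, labels out.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[0.9, 0.1], [0.8, 0.2],   # examples demonstrated as "Gesture 1"
           [0.1, 0.9], [0.2, 0.8]]   # examples demonstrated as "Gesture 2"
y_train = ["Gesture 1", "Gesture 1", "Gesture 2", "Gesture 2"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)          # training

new_input = [[0.85, 0.15]]           # running: a freshly sensed feature vector
print(model.predict(new_input))      # -> ['Gesture 1']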

Page 22: Rebecca Fiebrink Princeton University / University of Washington

22

Supervised learning is useful

• Models capture complex relationships from the data. (feasible)
• Models can generalize to new inputs. (accurate)
• Supervised learning circumvents the need to explicitly define mapping functions or models. (efficient)

Supervised learning has been demonstrated to be useful in musical applications, but no usable, general-purpose tools exist for composers to apply these algorithms in their work.

Page 23: Rebecca Fiebrink Princeton University / University of Washington

23

Weka: A model tool

• General-purpose
• GUI-based
• Cited 11,705 times!

Page 24: Rebecca Fiebrink Princeton University / University of Washington

24

Criteria for a supervised learning tool for composers

1. General-purpose
2. GUI-based
3. Runs in real time
4. Supports appropriate end-user interactions with the supervised learning process

Page 25: Rebecca Fiebrink Princeton University / University of Washington

25

Appropriate interactions

Diagram: the supervised learning pipeline again (training data → algorithm → model; inputs → outputs, here classifying gestures).

Page 26: Rebecca Fiebrink Princeton University / University of Washington

26

Appropriate interactions

Diagram: the same pipeline, highlighting the first interaction: creating training data.

Page 27: Rebecca Fiebrink Princeton University / University of Washington

27

Appropriate interactions

Diagram: the same pipeline, highlighting: creating training data… evaluating the trained model.

Page 28: Rebecca Fiebrink Princeton University / University of Washington

28

Appropriate interactions

Diagram: the same pipeline, highlighting: creating training data, evaluating the trained model… modifying training data (and repeating).

Page 29: Rebecca Fiebrink Princeton University / University of Washington

29

Interactive machine learning (IML)

• Training set editing for computer vision systems: Fails and Olsen 2003
• Application to other domains, e.g., Shilman et al. 2006; Fogarty et al. 2008; Amershi et al. 2009; Baker et al. 2009
• Other types of interactions, e.g., Talbot et al. 2009; Kapoor et al. 2010

Page 30: Rebecca Fiebrink Princeton University / University of Washington

30

Research questions for end-user IML

• Which interactions are possible and useful?
• What are the practical benefits and challenges of incorporating end-user interaction in applied machine learning?
• How can IML be useful in real-time and creative contexts?

Page 31: Rebecca Fiebrink Princeton University / University of Washington

31

Outline

• Overviews of interactive computer music and machine learning

• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions

Page 32: Rebecca Fiebrink Princeton University / University of Washington

32

The Wekinator

• Built on the Weka API
• Downloadable: http://code.google.com/p/wekinator/

1. General-purpose
2. GUI-based
3. Runs in real time
4. Supports appropriate end-user interactions with the supervised learning process

Page 33: Rebecca Fiebrink Princeton University / University of Washington

33

Running models in real-time

Diagram: feature extractor(s) produce a stream of feature vectors over time (e.g., 5, .01, 22.7, …); the model(s) map each incoming vector to a vector of output values (e.g., .01, .59, .03, …), which drives a parameterizable process in real time.
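A rough sketch of this real-time flow (an illustration only, not the tool’s actual code), assuming Python and two hypothetical stand-ins, get_features() for the feature extractor and set_synth_params() for the parameterizable process:

import random
import time

def get_features():
    # stand-in feature extractor: real audio/gesture features would arrive here
    return [random.random(), random.random()]

def set_synth_params(params):
    # stand-in for the parameterizable process (synthesis, visuals, ...)
    print(params)

def run_realtime(model, rate_hz=10, steps=50):
    # usage: run_realtime(fitted_model), where fitted_model is any trained model with .predict()
    period = 1.0 / rate_hz
    for _ in range(steps):
        features = get_features()              # one feature vector per time step
        params = model.predict([features])[0]  # trained model maps features -> output values
        set_synth_params(params)               # drive the sound/visual process
        time.sleep(period)                     # repeat many times per second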

Page 34: Rebecca Fiebrink Princeton University / University of Washington

34

Interactive data creation and model evaluation

Diagram: the user records examples of “Gesture 1”, “Gesture 2”, and “Gesture 3” into the training data, trains a model, and then runs the model to check that a new input comes out as “Gesture 1”.

Page 35: Rebecca Fiebrink Princeton University / University of Washington

35

Real-time, iterative design

Page 36: Rebecca Fiebrink Princeton University / University of Washington

36

Under the hood

Diagram: input features (Feature1, Feature2, … FeatureN; e.g., joystick_x, joystick_y, pitch, volume, webcam_1) feed a bank of models (Model1, Model2, … ModelM); each model produces one output parameter (Parameter1 … ParameterM), such as a continuous value (3.3098) or a class (Class24).

Page 37: Rebecca Fiebrink Princeton University / University of Washington

37

Under the hood

Diagram: the same features-to-models-to-parameters architecture, annotated with the learning algorithms available.

Learning algorithms:
Classification: AdaBoost.M1, J48 decision tree, support vector machine, k-nearest neighbor
Regression: MultilayerPerceptron
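A sketch of the “one model per output parameter” idea, assuming Python with scikit-learn; the sklearn learners here are rough stand-ins for the Weka classifiers and MultilayerPerceptron named above, not the Wekinator’s actual implementation:

# Each output parameter gets its own learner over the same input features:
# a classifier for a discrete parameter, a regressor for a continuous one.
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPRegressor

X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]   # shared input features
y_class = [0, 1, 0, 1]                   # discrete parameter (e.g., which sample to trigger)
y_cont = [0.11, 0.92, 0.15, 0.88]        # continuous parameter (e.g., a filter setting)

models = [DecisionTreeClassifier().fit(X, y_class),
          MLPRegressor(max_iter=2000).fit(X, y_cont)]

features = [[0.15, 0.18]]                # one new input frame
print([m.predict(features)[0] for m in models])   # one output value per model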

Page 38: Rebecca Fiebrink Princeton University / University of Washington

38

Tailored but not limited to music

The Wekinator:
• Built-in feature extractors for music & gesture
• ChucK API for feature extractor and synthesis classes

Diagram: the Wekinator communicates over Open Sound Control (UDP) with other feature extraction modules and with other modules for sound synthesis, animation, …
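A sketch of the Open Sound Control (UDP) plumbing, assuming Python with the python-osc package; the addresses (/features, /outputs) and ports (6448, 12000) are illustrative placeholders, not the tool’s documented defaults:

# Sending one frame of extracted features to the learning tool over OSC/UDP.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)
client.send_message("/features", [0.42, 118.0, 0.7])

# Receiving output parameters (e.g., to drive a synthesis module).
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_outputs(address, *values):
    print(address, values)      # parameter values arriving from the learner

dispatcher = Dispatcher()
dispatcher.map("/outputs", on_outputs)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()  # blocks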

Page 39: Rebecca Fiebrink Princeton University / University of Washington

39

Outline

• Overviews of interactive computer music and machine learning

• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions

Page 40: Rebecca Fiebrink Princeton University / University of Washington

40

Outline

• Overviews of interactive computer music and machine learning

• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions

Page 41: Rebecca Fiebrink Princeton University / University of Washington

41

Study 1: Participatory design process with composers

• Process:
– 7 composers
– 10 weeks, 3 hours / week
– Discussion and brainstorming at each meeting
– Final questionnaire

• Outcomes:
– Focus on instrument-building
– 2 publicly-performed compositions
– Much-improved software and lots of feedback

Page 42: Rebecca Fiebrink Princeton University / University of Washington

42

Study 2: Teaching interactive systems building in an undergraduate course

• Princeton Laptop Orchestra (PLOrk)
• Midterm assignment
– Students built 1 continuous + 1 discrete system
– Logging + short answer questions
• Outcomes:
– Successful project completion
– Used in midterm and final performances
– Logs from 21 students

Page 43: Rebecca Fiebrink Princeton University / University of Washington

43

Study 3: Bow gesture recognition

• Case study with a composer/cellist
• Task: Classify 8 types of standard bow gestures, e.g., up/down bow (2 classes), articulation (7 classes)

Page 44: Rebecca Fiebrink Princeton University / University of Washington

44

Study 3: Bow gesture recognition

• Case study with a composer/cellist
• Task: Classify 8 types of standard bow gestures, e.g., up/down bow (2 classes), articulation (7 classes)
• Method:
– Tasks defined and directed by cellist
– Logging, observations, final questionnaire
– Cellist assigned each iteration’s classifier a quality rating (1 to 10)
• Successful classifiers created for all 8 tasks (rated “9” or “10”)

Page 45: Rebecca Fiebrink Princeton University / University of Washington

45

Study 4: Composer case studies

• Clapping Music Machine Variations (CMMV) by Dan Trueman, faculty

Page 46: Rebecca Fiebrink Princeton University / University of Washington

46

Study 4: Composer case studies

• CMMV by Dan Trueman, faculty
• The Gentle Senses / MARtLET by Michelle Nagai, graduate student

Page 47: Rebecca Fiebrink Princeton University / University of Washington

47

Study 4: Composer case studies

• CMMV by Dan Trueman, faculty
• The Gentle Senses / MARtLET by Michelle Nagai, graduate student
• G by Raymond Weitekamp, undergraduate

Page 48: Rebecca Fiebrink Princeton University / University of Washington

48

Outline

• Overviews of interactive computer music and machine learning

• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions

Page 49: Rebecca Fiebrink Princeton University / University of Washington

Discussion of Findings

1. Users took advantage of interaction in their work with the Wekinator.

2. Users employed a variety of model evaluation criteria, and subjective evaluation did not always correlate with cross-validation accuracy.

3. Feedback from the Wekinator influenced users’ actions and goals.

4. The Wekinator was a useful and usable tool.
5. Interactive supervised learning can be a tool for supporting creativity and embodiment.

Page 50: Rebecca Fiebrink Princeton University / University of Washington

50

An iterative approach to model-building

Page 51: Rebecca Fiebrink Princeton University / University of Washington

51

An iterative approach to model-building

Chart: mean number of trainings (y-axis: Mean # Trainings, 0–6) for the PLOrk continuous and discrete tasks (mean per student for each task) and for the KBow 1st and 2nd sessions (mean per classification task).

Page 52: Rebecca Fiebrink Princeton University / University of Washington

52

Frequent modifications to training data in-between re-trainings

Chart: mean number of each action taken between re-trainings (Add Data, Edit Data, Delete Data, Clear All Data, Change Learner, Change Learner Params, Change Features), shown as a KBow per-task average and as PLOrk per-student averages for the continuous and discrete tasks.

Page 53: Rebecca Fiebrink Princeton University / University of Washington

53

Interaction and the training dataset

• Training data is an interface for key tasks:
– defining the learning problem
– clarifying the learning problem to fix errors
– communicating changes in the problem over time
– providing a “sketch” that the computer fills in

Page 54: Rebecca Fiebrink Princeton University / University of Washington

54

Interaction and the training dataset

• Training data is the most appropriate interface for:
– defining the learning problem
– clarifying the learning problem by fixing errors
– communicating changes in the problem over time
– providing a “sketch” that the computer fills in

Page 55: Rebecca Fiebrink Princeton University / University of Washington

55

Playalong data recording

• Allowed training data to represent more fine-grained information

• Enabled composers to engage their musical and physical expertise
– Allowed practice and attention to “feel”
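A sketch of the play-along idea, assuming Python and a hypothetical get_features() sensor stub: the desired output parameters play back over time while the performer moves along, and each sensed feature frame is paired with the parameter values sounding at that instant to form a training example.

import random
import time

def get_features():
    return [random.random(), random.random()]   # stand-in sensor/audio features

def record_playalong(score, frames_per_step=10, period=0.05):
    # 'score' is the desired output trajectory, e.g. [[0.1], [0.5], [0.9]]
    X, y = [], []
    for params in score:
        for _ in range(frames_per_step):
            X.append(get_features())   # what the performer is doing right now
            y.append(params)           # what the system should output right now
            time.sleep(period)
    return X, y                        # fine-grained training data, ready for fitting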

Page 56: Rebecca Fiebrink Princeton University / University of Washington

56

“Conventional” model evaluation

Diagram: the available data is partitioned into a training set, used to train the model, and an evaluation set, used to evaluate it. Cross-validation: repeat with different data partitions.

Page 57: Rebecca Fiebrink Princeton University / University of Washington

57

“Direct” evaluation in Wekinator

Diagram: the entire training set is used to train the model, and the model is then evaluated directly by running it on new, user-supplied inputs.
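A sketch contrasting the two evaluation styles, assuming Python with scikit-learn and toy data: conventional cross-validation holds data out, while “direct” evaluation trains on everything and judges the model by running it on fresh, user-performed inputs.

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]] * 5   # toy training set
y = ["Gesture 1", "Gesture 1", "Gesture 2", "Gesture 2"] * 5

# Conventional: estimate generalization by repeatedly holding out part of the data.
print(cross_val_score(KNeighborsClassifier(1), X, y, cv=5).mean())

# Direct: train on the full set, then let the user perform new inputs and judge the behavior.
model = KNeighborsClassifier(1).fit(X, y)
print(model.predict([[0.15, 0.25]]))   # user tries a gesture and listens/looks at the result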

Page 58: Rebecca Fiebrink Princeton University / University of Washington

58

Direct evaluation used most frequently

• Composers in participatory design and case studies: only direct evaluation

• KBow and PLOrk:

Chart: mean number of times each evaluation action was taken (cross-validation accuracy, training accuracy, direct evaluation; y-axis: mean # times action taken, 0–6), shown separately for PLOrk continuous, PLOrk discrete, KBow 1, and KBow 2.

Page 59: Rebecca Fiebrink Princeton University / University of Washington

59

Roles of cross-validation and training accuracy

• K-bow: Cross-validation used to quickly and objectively compare different feature selections and learning algorithms

• PLOrk:
– Treated as reliable evidence a model was performing well
– Used to validate the user’s own ability

Page 60: Rebecca Fiebrink Princeton University / University of Washington

60

Roles of direct evaluation

• Used to assess behavior of the model against subjective criteria

• Used to obtain feedback that shapes the users’ future interactions with the system

Page 61: Rebecca Fiebrink Princeton University / University of Washington

Discussion of Findings

1. Users took advantage of interaction in their work with the Wekinator.

2. Users employed a variety of model evaluation criteria, and subjective evaluation did not always correlate with cross-validation accuracy.

3. Feedback from the Wekinator influenced users’ actions and goals.

4. The Wekinator was a useful and usable tool.
5. Interactive supervised learning can be a tool for supporting creativity and embodiment.

Page 62: Rebecca Fiebrink Princeton University / University of Washington

62

Subjective assessment of accuracy

• Important for gesture classifiers
– Accuracy = model outputs are correct according to the learning concept definition
• Still important for open-ended instrument-building (continuous) tasks
– Accuracy = matching a user’s expectations, especially on inputs like the training examples

Page 63: Rebecca Fiebrink Princeton University / University of Washington

63

Other evaluation criteria

• Discrete classifiers:
– Cost: consequences and locations of model errors
– Decision boundary smoothness

Page 64: Rebecca Fiebrink Princeton University / University of Washington

64

Other evaluation criteria

• Discrete classifiers:
– Cost: consequences and locations of model errors
– Decision boundary smoothness
• Continuous mappings:
– Complexity, difficulty: these are good!
– Unexpectedness and surprise
– “Feel”

Page 65: Rebecca Fiebrink Princeton University / University of Washington

65

Subjective evaluation criteria & CV

Page 66: Rebecca Fiebrink Princeton University / University of Washington

66

Subjective evaluation criteria & CV

• K-Bow:
– Cross-validation sometimes correlates with subjective quality, but sometimes it doesn’t!

Pearson’s correlation for tasks with > 3 iterations:

Task | R
Horizontal Position | -0.59
Vertical Position | -0.44
Bow Direction | -0.74
On/Off String | -0.50
Speed | 0.65
Articulation | 0.93
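A sketch of the statistic used here, assuming Python with SciPy; the numbers below are placeholders, not the study’s data. For each task, Pearson’s r is computed between per-iteration cross-validation accuracy and the cellist’s subjective quality rating.

from scipy.stats import pearsonr

cv_accuracy = [0.81, 0.85, 0.88, 0.90]   # hypothetical CV accuracy per iteration
rating      = [7,    6,    6,    5]      # hypothetical subjective ratings (1 to 10)
r, p = pearsonr(cv_accuracy, rating)
print(r)    # a negative r means higher CV accuracy did not mean a better-feeling model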


Page 69: Rebecca Fiebrink Princeton University / University of Washington

69

Thoughts: Is generalization accuracy important?

• Yes!
– Human and environmental variations are inevitable
• …BUT it may not be the only or most important factor
• Generalization estimated from the training set (e.g., using cross-validation) is not always informative
• Implies that models designed for human use should be evaluated by human use.

Page 70: Rebecca Fiebrink Princeton University / University of Washington

70

What should be the goal of the learning algorithm?

• Most algorithms’ training process aims for a model with good generalization (sometimes appropriate)
• BUT the user is also employing the training data as an interface (not representative of future inputs)
• Better algorithms might:
– Optimize other criteria important to the user
– Privilege training accuracy (e.g., k-nearest neighbor)
– Provide parameters for interactive improvement against other subjective criteria (e.g., using a regularization parameter for boundary smoothness)
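A sketch of the last two ideas, assuming Python with scikit-learn (not a specific algorithm from the talk): a 1-nearest-neighbor model reproduces its training examples exactly, privileging training accuracy, and an SVM’s C parameter is one example of a knob a user could turn to trade boundary smoothness against fit.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = [[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]]
y = [0, 0, 1, 0, 1, 1]                       # deliberately "messy" labels

knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(knn.score(X, y))                       # 1.0: every training example is honored

smooth = SVC(C=0.1).fit(X, y)                # small C -> smoother, simpler boundary
exact = SVC(C=1000.0).fit(X, y)              # large C -> tries harder to fit every point
print(smooth.score(X, y), exact.score(X, y))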

Page 71: Rebecca Fiebrink Princeton University / University of Washington

Discussion of Findings

1. Users took advantage of interaction in their work with the Wekinator.

2. Users employed a variety of model evaluation criteria, and subjective evaluation did not always correlate with cross-validation accuracy.

3. Feedback from the Wekinator influenced users’ actions and goals.

4. The Wekinator was a useful and usable tool.
5. Interactive supervised learning can be a tool for supporting creativity and embodiment.

Page 72: Rebecca Fiebrink Princeton University / University of Washington

72

Interaction involves control and feedback

Diagram: the user and the machine learning algorithms are connected by control and by feedback; feedback comes from running the models and from cross-validation and training accuracy.

Page 73: Rebecca Fiebrink Princeton University / University of Washington

73

Running models informs future actions

• For example:
– locate errors → add correctly-labeled examples
– detect total failure → delete all the data

Diagram: the model outputs a wrong label for an input; the user supplies the correct label for that input, creating a new training example.
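A sketch of this error-correction loop, assuming Python with scikit-learn: when the running model mislabels an input, the user adds that input with its correct label and retrains.

from sklearn.neighbors import KNeighborsClassifier

X_train = [[0.1, 0.2], [0.9, 0.8]]
y_train = ["Gesture 1", "Gesture 2"]
model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

observed = [0.5, 0.65]                             # the input that came out wrong
if model.predict([observed])[0] != "Gesture 1":    # user spots the wrong label
    X_train.append(observed)                       # add a correctly-labeled example...
    y_train.append("Gesture 1")
    model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)   # ...and retrain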

Page 74: Rebecca Fiebrink Princeton University / University of Washington

74

Running models trains users to be more effective supervised learning practitioners

• Users especially learned to create better training datasets
– Minimize noise
– Balance the number of examples in each class
– Vary examples along all the dimensions that might vary in performance
• Important for novice users

Page 75: Rebecca Fiebrink Princeton University / University of Washington

75

Running models informs users’ goals for machine learning

• Users liked being inspired by surprising behaviors of neural networks
• Users learned what was most easily accomplished
… and exploited flexibilities in the learning concept definition to create a model that most easily met their most important goals
• IML allows users to discover how goals might change
– and to communicate changes via the training set

Page 76: Rebecca Fiebrink Princeton University / University of Washington

76

Running models teaches users about themselves and their work

K-Bow cellist: the model’s confusion of spiccato and ricochet → realization that her spiccato was too much like ricochet → improved technique → improved classifiers

Page 77: Rebecca Fiebrink Princeton University / University of Washington

Discussion of Findings

1. Users took advantage of interaction in their work with the Wekinator.

2. Users employed a variety of model evaluation criteria, and subjective evaluation did not always correlate with cross-validation accuracy.

3. Feedback from the Wekinator influenced users’ actions and goals.

4. The Wekinator was a useful and usable tool.
5. Interactive supervised learning can be a tool for supporting creativity and embodiment.

Page 78: Rebecca Fiebrink Princeton University / University of Washington

78

Barriers to usability

• Long training time
• Algorithms’ inability to model the desired concept [easily]
• Difficulty in debugging
– No guidance on choosing a better algorithm or algorithm parameters
– Especially difficult for ML novices

Page 79: Rebecca Fiebrink Princeton University / University of Washington

79

Usability and usefulness: Study 1 composers

Statement | 5-point Likert mean (std. dev.)
“The Wekinator allows me to create more expressive mappings than other techniques.” | 4.5 (.8)
“The Wekinator allows me to create mappings more easily than other techniques.” | 4.7 (.5)

Page 80: Rebecca Fiebrink Princeton University / University of Washington

80

Usability and usefulness: PLOrk students

Statement | 5-point Likert mean (std. dev.)
“I can reliably predict what sound my model will make for a given input gesture.” | 4.5 (.7)
“Wekinator eventually learned what I wanted it to.” | 4.3 (.9)
“My model provides reliable gesture classifications” (discrete task) | 4.9 (.2)
“My model is musically expressive” (continuous task) | 4.1 (.7)

Page 81: Rebecca Fiebrink Princeton University / University of Washington

81

Usability and usefulness: PLOrk students

• Building working interactive systems was fast
– 27.1 minutes for continuous mapping
– 16.1 minutes for discrete classifier
• Students enjoyed the Wekinator
– “Learning by experimentation was a lot of fun!”
– “It’s so cool, the Wekinator rocks.”

Page 82: Rebecca Fiebrink Princeton University / University of Washington

82

Usability and usefulness: K-Bow

• Models successfully created for all 8 tasks:

Task | Rating (1 to 10) | CV Accuracy (%)
Direction | 10 | 87.3
On/Off String | 10 | 83.5
Grip | 10 | 100.0
Roll | 10 | 98.2
Horizontal Position | 10 | 89.3
Vertical Position | 10 | 90.0
Speed | 9 | 87.5
Articulation | 9 | 98.8

Page 83: Rebecca Fiebrink Princeton University / University of Washington

83

Usability and usefulness: K-Bow

Statement | 5-point Likert response
“The Wekinator was able to create accurate bow stroke classifiers in our work so far” | 4
“The Wekinator was able to create bow stroke classifiers more easily than other approaches” | “10 (so 5)”

Page 84: Rebecca Fiebrink Princeton University / University of Washington

84

Usability and usefulness: Case studies

Chart: agreement (1–5) from Trueman, Nagai, and Weitekamp with each statement. “The Wekinator allowed me to: create mappings more easily; create mappings that were more expressive; create a kind of music that isn’t possible or that is hard to create using other techniques; approach the process of composition in a new way.”

Page 85: Rebecca Fiebrink Princeton University / University of Washington

Discussion of Findings

1. Users took advantage of interaction in their work with the Wekinator.

2. Users employed a variety of model evaluation criteria, and subjective evaluation did not always correlate with cross-validation accuracy.

3. Feedback from the Wekinator influenced users’ actions and goals.

4. The Wekinator was a useful and usable tool.
5. Interactive supervised learning can be a tool for supporting creativity and embodiment.

Page 86: Rebecca Fiebrink Princeton University / University of Washington

86

“There is simply no way I would be able to manually create the mappings that the Wekinator comes up with; being able to playfully explore a space that I've roughly mapped out, but that the Wekinator has provided the detail for, is inspiring.”

Page 87: Rebecca Fiebrink Princeton University / University of Washington

87

“The ability to map sound and gesture, in a very immediate and intuitive (yet unpredictable) way is really the most inspiring and useful aspect of the wekinator for me right now. I can see the possibility of building interfaces or instruments as needed, flexibly, on the fly, for different kinds of projects, and being able to quickly map them out to existing sound sets with only minor programming changes.”

Page 88: Rebecca Fiebrink Princeton University / University of Washington

88

Supporting qualities important to composers

• Speed and ease of creating and exploring mappings (especially complex mappings)
– Demonstration can be faster and more efficient than coding.
• Access to surprise and discovery
– Neural networks fill in the details of the training data sketch.
• Balancing surprise and complexity with predictability and control
– Users can reliably steer model behavior using the training data.

Page 89: Rebecca Fiebrink Princeton University / University of Washington

89

Creativity support in HCI

• “Creativity support tool” guidelines proposed by Shneiderman (2000, 2007) and Resnick et al. (2005):
– Support exploration, discovery, and sketching
– Support diverse users (e.g., novices and experts) and applications
– Operate seamlessly with other [composition] tools

• IML is integral to Wekinator’s realization of these guidelines

Page 90: Rebecca Fiebrink Princeton University / University of Washington

90

Embodiment is important

“I have never before been able to work with a musical interface … that allowed me to really ‘feel’ the music as I was playing it and developing it. The Wekinator allowed me to approach composing with electronics and the computer more in the way I might if I was writing a piece for cello, where I would actually sit down with a cello and try things out.”

Page 91: Rebecca Fiebrink Princeton University / University of Washington

91

Embodiment is important

• The Wekinator engaged users’ physical expertise as musicians
– And allowed them to create instruments that “felt right”
– Users were physically engaged in the creation of the data and the evaluation of the models
– The playalong interface further supported embodied design

Page 92: Rebecca Fiebrink Princeton University / University of Washington

92

Outline

• Overviews of interactive computer music and machine learning

• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions

Page 93: Rebecca Fiebrink Princeton University / University of Washington

93

IML is feasible and useful in music composition and performance.

“Well, I had basically lost interest in the whole process of digital controller-based instrument building, so the Wekinator's very existence has enabled and inspired me to get back into the game... The Wekinator enables you to focus on what your primary sonic and physical concerns are, and takes away the need to address so many details, and it does so in such a way that even if you DID spend all the time on building the mappings manually, you would *never* come up with what the Wekinator comes up with. So, the process becomes more focused, more musical, more creative, more playful. I actually *want* to do it.”

Page 94: Rebecca Fiebrink Princeton University / University of Washington

94

End-user IML poses distinct requirements and challenges

Supporting machine learning novices

Enabling fast training

Supporting debugging

Exposing meaningful parameters to users

Matching algorithms to users’ goals

Page 95: Rebecca Fiebrink Princeton University / University of Washington

95

Interaction can play many important roles

Interaction with the training data:
– Engages physical/embodied expertise
– Allows changes to the learning problem
– Fixes errors
– Allows sketching

Interaction with the trained models:
– Informs edits to the algorithm & data
– Teaches users what an algorithm can learn
– Teaches how to be a better data provider
– Teaches users about their own technique

Page 96: Rebecca Fiebrink Princeton University / University of Washington

96

IML can support creativity and embodiment

Supporting exploration, sketching, rapid prototyping

Providing access to surprise and discovery

Supporting diverse users

Supporting many applications

Engaging a high-level approach to design

Page 97: Rebecca Fiebrink Princeton University / University of Washington

97

Final Conclusions

• IML has the potential to significantly improve the usability and usefulness of conventional learning algorithms, and to enable application to new problems by new users.

• Applied machine learning is an HCI problem.

Page 98: Rebecca Fiebrink Princeton University / University of Washington

98

Thanks!
• Perry Cook
• Dan Trueman
• Dan Morris
• Ken Steiglitz
• Adam Finkelstein
• Szymon Rusinkiewicz
• Michelle Nagai
• Cameron Britt
• Konrad Kaczmarek
• Michael Early
• MR Daniel
• Anne Hege
• Raymond Weitekamp
• All the PLOrk students
• Meg Schedel
• Andrew McPherson
• Barry Threw
• Keith McMillen Instruments
• Ge Wang
• Jeff Snyder
• Xiaojuan Ma
• Sonya Nikolova
• Matt Hoffmann
• Merrie Morris
• Sumit Basu
• Ichiro Fujinaga

• National Science Foundation GRFP

• Francis Lathrop Upton Fellowship

• National Science Foundation grants 0101247 and 0509447

• The Kimberly and Frank H. Moss '71 Research Innovation Fund

• The David A. Gardner '69 Magic Project

• The John D. and Catherine T. MacArthur Foundation

• Everyone else I’m forgetting

Page 99: Rebecca Fiebrink Princeton University / University of Washington

99

Related publications

• Fiebrink, R. 2006. An exploration of feature selection as an optimization tool for musical genre classification. Master’s thesis, McGill University.
• Fiebrink, R., P. R. Cook, and D. Trueman. 2009. “Play-along mapping of musical controllers.” Proc. International Computer Music Conference.
• Fiebrink, R., M. Schedel, and B. Threw. 2010. “Constructing a personalizable gesture-recognizer infrastructure for the K-Bow.” International Conference on Music and Gesture (MG3).
• Fiebrink, R., D. Trueman, C. Britt, M. Nagai, K. Kaczmarek, M. Early, M.R. Daniel, A. Hege, and P. R. Cook. 2010. “Toward understanding human-computer interactions in composing the instrument.” Proc. International Computer Music Conference.
• Fiebrink, R., D. Trueman, and P. R. Cook. 2009. “A meta-instrument for interactive, on-the-fly learning.” Proc. New Interfaces for Musical Expression.
• Fiebrink, R., G. Wang, and P. R. Cook. 2007. “Don’t forget the laptop: Using native input capabilities for expressive musical control.” Proc. International Conference on New Interfaces for Musical Expression.
• Fiebrink, R., G. Wang, and P. R. Cook. 2008. “Support for MIR prototyping and real-time applications in the ChucK programming language.” Proc. International Conference on Music Information Retrieval.
• Wang, G., R. Fiebrink, and P. R. Cook. 2007. “Combining analysis and synthesis in the ChucK programming language.” Proc. International Computer Music Conference.

Page 100: Rebecca Fiebrink Princeton University / University of Washington

100

References

• Amershi, S., J. Fogarty, A. Kapoor, and D. Tan. 2010. “Examining Multiple Potential Models in End-User Interactive Concept Learning.” Proc. CHI 2010.
• Baker, K., A. Bhandari, and R. Thotakura. 2009. “Designing an Interactive Automatic Document Classification System.” Proc. HCIR 2009, pp. 30–33.
• Fails, J., and D. Olsen. 2003. “Interactive machine learning.” Proc. IUI, pp. 39–45.
• Fels, S. S., and G. E. Hinton. 1993. “Glove-Talk: A neural network interface between a data-glove and a speech synthesizer.” IEEE Trans. on Neural Networks, vol. 4.
• Lee, M., A. Freed, and D. Wessel. 1992. “Neural networks for simultaneous classification and parameter estimation in musical instrument control.” Adaptive and Learning Systems, vol. 1706, pp. 244-55.
• Raphael, C. 2001. “A probabilistic expert system for automatic musical accompaniment.” Journal of Computational and Graphical Statistics, vol. 10, no. 3, pp. 487-512.
• Shneiderman, B. 2000. “Creating Creativity: User interfaces for supporting innovation.” ACM Trans. CHI, vol. 7, no. 1, pp. 114–138.
• Shneiderman, B. 2007. “Creativity support tools: Accelerating discovery and innovation.” Comm. ACM, vol. 50, no. 12, Dec. 2007, pp. 20–32.
• Witten, I., and E. Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed. San Francisco: Morgan Kaufmann.

Page 101: Rebecca Fiebrink Princeton University / University of Washington

101

Training set size – why so small?

• Learning concepts were “easier”? (i.e., lower sample complexity)
• Users learned to provide the most useful training examples for representing the problem?
– like active learning, but the user is in charge
• Users defined the learning concept in order to negotiate the tradeoffs between what they wanted and what was possible in a given amount of time to create training data and train the algorithms?

Page 102: Rebecca Fiebrink Princeton University / University of Washington

102

Running models enables users to practice employing them more effectively

• Through practice, they learn to use models more effectively
• Users accepted or expected the need to adapt their behaviors

Page 103: Rebecca Fiebrink Princeton University / University of Washington

103

An HCI view on algorithms

• Algorithms afford certain possible interactions, control, and feedback
– i.e., they have an innate potential to be useful
• User interfaces can hide or expose these affordances
– And can expose them in more or less usable ways
• The Wekinator exploits the fact that supervised learning models can be manipulated through the training dataset
• Algorithms can be made more useful and usable
– through more appropriate interfaces
– through affording more appropriate interactions, control, and feedback