Sound Events and Emotions: Investigating the Relation of Rhythmic Characteristics and Arousal
K. Drossos¹, R. Kotsakis², G. Kalliris², A. Floros¹
¹Digital Audio Processing & Applications Group, Audiovisual Signal Processing Laboratory, Dept. of Audiovisual Arts, Ionian University, Corfu, Greece
²Laboratory of Electronic Media, Dept. of Journalism and Mass Communication, Aristotle University of Thessaloniki, Thessaloniki, Greece
Agenda

1. Introduction
   Everyday life
   Sound and emotions
2. Objectives
3. Experimental Sequence
   Experimental Procedure's Layout
   Sound corpus & emotional model
   Processing of Sounds
   Machine learning tests
4. Results
   Feature evaluation
   Classification
5. Discussion & Conclusions
6. Future work
Everyday life

Stimuli
Many stimuli, e.g.:
  Visual
  Aural
    Structured form of sound
    General sound

Interaction with stimuli → Reactions → Emotions felt
Sound and emotions

Music and emotions

Music
Structured form of sound
Primarily used to mimic and extend voice characteristics, enhancing the conveyance of emotion(s)
Related to emotions through (but not only) MIR and MER
  MIR: Music Information Retrieval
  MER: Music Emotion Recognition
Sound and emotions

Music and emotions (cont'd)

Research results & applications
Accuracy results up to 85% in large music databases
As opposed to the typical "Artist/Genre/Year" classification:
  Categorisation according to emotion
  Retrieval based on emotion
Preliminary applications for synthesis of music based on emotion
Sound and emotions

From music to sound events

Sound events
Non-structured/general form of sound
Carry additional information regarding:
  Source attributes
  Environment attributes
  Sound-producing mechanism attributes
Present at almost any time, if not always, in everyday life
Sound and emotions

From music to sound events (cont'd)

Sound events
Used in many applications:
  Audio interactions/interfaces
  Video games
  Soundscapes
  Artificial acoustic environments
Trigger reactions
Elicit emotions
Objectives of the current study

Premises
Aural stimuli can elicit emotions
Music is a structured form of sound; sound events are not
Music's rhythm is arousing

Research question
Is the rhythm of sound events arousing too?
Experimental Procedure's Layout

Layout
Emotional model selection
Annotated sound corpus collection
Processing of sounds
  Pre-processing
  Feature extraction
Machine learning tests
Sound corpus & emotional model

General characteristics

Sound corpus
International Affective Digital Sounds (IADS)
167 sounds
Duration: 6 s each
Variable sampling frequency

Emotional model
Continuous model
Sounds annotated on the Arousal-Valence-Dominance dimensions
Self-Assessment Manikin (SAM) used
Processing of Sounds

Pre-processing

General procedure
Normalisation
Segmentation / windowing
Clustering into two classes
  Class A: 24 members
  Class B: 143 members

Segmentation/windowing
Different window lengths, ranging from 0.8 to 2.0 seconds in 0.2-second increments
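The normalisation and segmentation step can be sketched as follows. This is a minimal illustration, not the authors' code: peak normalisation, non-overlapping windows, and the 44.1 kHz rate are assumptions the slides do not specify.

```python
import numpy as np

def preprocess(signal, sr, win_s):
    """Peak-normalise a clip and split it into consecutive windows of
    win_s seconds (overlap and normalisation type are assumed, not
    stated in the slides)."""
    signal = signal / np.max(np.abs(signal))      # peak normalisation
    win = int(round(win_s * sr))
    n = len(signal) // win                        # drop any trailing partial window
    return signal[:n * win].reshape(n, win)

sr = 44100                                        # assumed sampling rate
clip = np.random.randn(6 * sr)                    # a 6-second clip, the IADS duration
for win_s in [0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0]:
    segments = preprocess(clip, sr, win_s)        # shape: (n_windows, win)
```

Under these assumptions, a 1.0 s window on a 6 s clip yields six segments; shorter windows yield more segments and discard a shorter tail.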
Processing of Sounds

Feature Extraction

Extracted features
Only rhythm-related features
Computed for each segment
Statistical measures applied to each
Total set of 26 features
Extracted features    Statistical measures
Beat spectrum         Mean
Onsets                Standard deviation
Tempo                 Gradient
Fluctuation           Kurtosis
Event density         Skewness
Pulse clarity
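As a rough illustration of the table, the sketch below computes one rhythm descriptor (a crude onset-strength envelope standing in for the listed features, which were presumably obtained from a dedicated audio toolbox) and applies the five statistical measures to it. All function names and the frame size are hypothetical.

```python
import numpy as np

def five_measures(v):
    """The five statistical measures from the table, in plain NumPy."""
    m, s = v.mean(), v.std()
    grad = np.mean(np.diff(v))                  # mean slope as a simple 'gradient'
    z = (v - m) / s if s > 0 else np.zeros_like(v)
    return {"mean": m, "std": s, "gradient": grad,
            "skewness": np.mean(z ** 3),
            "kurtosis": np.mean(z ** 4) - 3.0}  # excess kurtosis

def onset_envelope(segment, frame=1024):
    """Crude onset-strength proxy: half-wave-rectified frame-energy difference."""
    n = len(segment) // frame
    energy = (segment[:n * frame].reshape(n, frame) ** 2).sum(axis=1)
    return np.maximum(np.diff(energy), 0.0)

segment = np.random.randn(44100)                # one 1.0 s segment at 44.1 kHz
features = five_measures(onset_envelope(segment))
```

Applying such measures across the six listed features would produce the 26-feature set mentioned above (evidently not every feature-measure pair was kept).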
Machine learning tests

Feature Evaluation

Objectives
Identify the most valuable features for arousal detection
Examine dependencies between the selected features and the different window lengths

Algorithms used
InfoGainAttributeEval
SVMAttributeEval
WEKA software
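WEKA's InfoGainAttributeEval ranks attributes by their information gain with respect to the class. A minimal NumPy analogue, on invented toy data (the binning choice and all names here are my own, not the study's):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(feature, labels, bins=10):
    """Information gain of one real-valued feature (after equal-width
    binning) with respect to the class labels."""
    binned = np.digitize(feature, np.histogram_bin_edges(feature, bins))
    gain = entropy(labels)
    for value in np.unique(binned):
        mask = binned == value
        gain -= mask.mean() * entropy(labels[mask])
    return gain

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                       # two arousal classes
informative = y + 0.3 * rng.standard_normal(200)  # tracks the class
noise = rng.standard_normal(200)                  # unrelated to the class
# Ranking the two features by info_gain puts 'informative' first.
```

Running such a ranking per window length is one way to check, as the slides do, whether the selected features depend on the window length.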
Machine learning tests

Classification

Objectives
Arousal classification based on rhythm
Examine dependencies between the different window lengths and the classification results

Algorithms used
Artificial neural networks
Logistic regression
K Nearest Neighbors
WEKA software
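The study ran these classifiers in WEKA; as an illustrative analogue of one of the three listed algorithms, here is a minimal k-nearest-neighbours classifier on toy stand-ins for the 26-dimensional rhythm feature vectors (all data, names, and the train/test split are invented for illustration):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """k-nearest neighbours: Euclidean distance, majority vote."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

rng = np.random.default_rng(1)
# Toy 26-dimensional 'feature vectors' for two separable arousal classes
# (the real study had 24 vs 143 sounds per class).
X = np.vstack([rng.normal(0.0, 1.0, (40, 26)), rng.normal(1.5, 1.0, (40, 26))])
y = np.array([0] * 40 + [1] * 40)
idx = rng.permutation(len(y))
train, test = idx[:60], idx[60:]
accuracy = (knn_predict(X[train], y[train], X[test]) == y[test]).mean()
```

Logistic regression and a neural network would be evaluated the same way, once per window length, which is how the per-window accuracy comparison in the Results section arises.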
Feature evaluation

General results
Two groups of features formed, 13 features in each: an upper and a lower group
Ranking within each group not always the same
Upper- and lower-group membership constant across all window lengths
Feature Evaluation

Upper group
Beat spectrum std
Event density std
Onsets gradient
Fluctuation kurtosis
Beat spectrum gradient
Pulse clarity std
Fluctuation mean
Fluctuation std
Fluctuation skewness
Onsets skewness
Pulse clarity kurtosis
Event density kurtosis
Onsets mean
Classification

General results
Features relatively uncorrelated
Variations across the different window lengths

Accuracy results
Highest accuracy: 88.37% (window length: 1.0 s; algorithm: logistic regression)
Lowest accuracy: 71.26% (window length: 1.4 s; algorithm: artificial neural network)
Feature evaluation

Features
Most informative features:
  Rhythm's periodicity in the auditory channels (fluctuation)
  Onsets
  Event density
  Beat spectrum
  Pulse clarity
Independent of semantic content
Classification results

Algorithm-related
LR's minimum score: 81.44%
KNN's minimum score: 82.05%

Results-related
Sound events' rhythm affects arousal
Thus, sound's rhythm in general affects arousal
Future work

Features & dimensions
Other features related to arousal
Connection of the features with valence
Thank you!