Visual sentiment topic model based microblog image sentiment analysis

Donglin Cao & Rongrong Ji & Dazhen Lin & Shaozi Li

Received: 5 May 2014 / Revised: 19 October 2014 / Accepted: 22 October 2014
© Springer Science+Business Media New York 2014

Abstract With a growing number of images being used to express opinions on Microblog, text-based sentiment analysis alone is not enough to understand users' sentiments. To capture the sentiments implied in Microblog images, we propose a Visual Sentiment Topic Model (VSTM), which gathers the images in the same Microblog topic to enhance visual sentiment analysis. First, we obtain visual sentiment features using the Visual Sentiment Ontology (VSO); then, we build a visual sentiment topic model from all images in the same topic; finally, we select better visual sentiment features according to the distribution of visual sentiment features within the topic. The main advantage of our approach is that discriminative visual sentiment ontology features are selected according to the sentiment topic model. Experimental results show that our approach outperforms the VSO-based model.

Keywords Visual sentiment topic model · Visual sentiment ontology · Sentiment analysis

1 Introduction

In recent years, multimedia content, especially images, has been widely used in social media. Hundreds of millions of people use Microblog to share their opinions on the Internet. Because of the 140-character limit and the intuitive nature of images, many Microblog users prefer to express opinions by posting images. For example, in Fig. 1a, a user posted a photo of friends on Microblog without writing a word, and the text 'I am here' was generated automatically by the Microblog system.

Multimedia Tools and Applications, DOI 10.1007/s11042-014-2337-z

D. Cao, R. Ji, D. Lin, S. Li
Cognitive Science Department, Xiamen University, Xiamen 361005, China

D. Cao, e-mail: [email protected]
D. Lin, e-mail: [email protected]
S. Li, e-mail: [email protected]

D. Cao, R. Ji (*), D. Lin, S. Li
Fujian Key Laboratory of the Brain-like Intelligent Systems, Xiamen 361005, China
e-mail: [email protected]


All the content of this post is hidden in the photo, which causes trouble for traditional text-based sentiment analysis. Therefore, visual sentiment analysis becomes increasingly important for product marketing analysis and government sentiment analysis on Microblog.

However, visual sentiment analysis is still in its early stages, and it is more difficult than text-based sentiment analysis. Firstly, unlike text semantics, visual semantics are hidden in images. Secondly, visual sentiment is a kind of high-level visual semantics, and there is no high-level visual semantic dictionary like WordNet (a famous semantic dictionary for text analysis). To obtain visual sentiments from images, we need to extract high-level visual semantic ontology features instead of low-level visual features. Fortunately, Borth and Ji [5] proposed an Adjective Noun Pair (ANP) based detector to construct a large-scale Visual Sentiment Ontology (VSO), which provides a powerful representation for visual sentiment analysis. Although this method achieves good performance, some problems remain in Microblog image analysis. Firstly, ANPs cannot indicate which ANP is highly related to the main sentiment of an image. Because each Microblog image relates to a topic, which carries the main semantic meaning expressed by the user, topic-irrelevant visual sentiments are noise. For example, many people take smiling photos at night. In this scene, the dark night and the smile are two different visual sentiments: in general, a smile carries positive sentiment and a dark night carries negative sentiment. The smile relates to the main topic of this kind of image, while the dark night is background that is far from the main topic of the scene. Secondly, Microblog images within the same topic are related, but the VSO-based model cannot fuse multi-image information, and there is no method for choosing better ANPs from multi-image sentiment information within the same topic. To solve these problems, we focus on visual sentiment topic modeling, which gathers the images in the same Microblog topic to enhance the visual sentiment analysis results.

Our contributions are twofold. Firstly, we propose the Visual Sentiment Topic Model to further understand the sentiment of an image through its topic. Secondly, we demonstrate that VSTM is useful for understanding the visual semantics of images.

The rest of the paper is organized as follows. Sec. 2 describes related work. Our Visual Sentiment Topic Model is elaborated in Sec. 3. Experiments and analysis are given in Sec. 4. Sec. 5 concludes the paper.

Fig. 1 Two examples of microblog images


2 Related work

With the development of the Internet, a growing number of people like to express sentiment online. Traditional sentiment research focuses on text-based sentiment analysis because typing words is the most common way of posting opinions. According to the granularity of analysis, text-based sentiment research can be divided into three levels: document level [21], sentence level [14, 30] and entity level [20, 27]. Sentiment analysis has been applied in many areas, including sales performance prediction [13, 15, 19], stock analysis [7], and poll analysis and prediction [4, 28].

In recent years, social network applications have become the most important Internet applications. Thus, a growing number of researchers focus on how to analyze sentiment in social network settings. Yano et al. [33] proposed a novel method to predict the comment volumes of political blogs. In social relation analysis, Groh et al. [11] studied social relations from a sentiment analysis point of view. Go et al. [1] classified messages as either positive or negative using tweets labeled with a distant supervision method. Jiang et al. [16] incorporated target-dependent features and took related tweets into consideration.

Besides text-based sentiment analysis, image-based sentiment analysis has become important because of the explosive growth of image use in social media. In this hot research field, classical research works fall into three areas: aesthetics [6, 12, 18], emotion detection [2, 3, 17, 22–29, 32, 34, 35, 37] and sentiment ontology [5]. Emotion detection uses low-level image features to detect the emotion appearing in an image. Machajdik et al. [17] extracted and combined low-level features to represent the emotional content of an image. Yanulevskaya et al. [34, 35] proposed an emotion categorization system based on low-level features. Zhao et al. [37] used affective analysis in video recommendation. Lu et al. [32] studied the relationship between shapes and emotions. Zhao et al. [22, 25, 26] proposed principles-of-art-based emotion features (PAEF), which achieved good performance in affective image classification and affective image retrieval. However, low-level image features are limited in large-scale image sentiment detection. To solve that problem, the Visual Sentiment Ontology (VSO) was proposed by Borth and Ji [5]. The contribution of Borth's work is the use of high-level ANPs, which are strongly relevant to sentiment, instead of low-level features. Furthermore, to achieve better sentiment results from texts and images, some useful features extracted from image retrieval methods [8–10, 36] can be used in ANP learning.

Although text-based sentiment analysis has achieved great success, image-based sentiment analysis is still in its early stages. Existing image-based sentiment analysis approaches focus on obtaining good visual sentiment features from a single image and ignore the relation between images in the same topic. However, images are connected by topics on Microblog and other similar social network applications. Thus, in this paper, we focus on how to find discriminative visual sentiment features from topics and propose a Visual Sentiment Topic Model based approach to solve that problem.

3 Visual sentiment topic model

In this section, we give the details of VSTM. The model is based on the observation that most images in the same topic are relevant to the semantics of that topic. Thus, we can use multi-image sentiment information to help extract the main visual sentiment information of an image. Firstly, we obtain visual sentiment features from images; secondly, we gather all images in the same topic to build a topic model which describes the distribution of visual sentiment features in that topic; finally, we compare each topic model with a background model to obtain the discriminative visual sentiment features.


3.1 Visual sentiment ontology based model

The Visual Sentiment Ontology (VSO) was proposed by Borth and Ji. The advantage of this approach is that VSO covers many categories from well-known visual ontologies, such as LSCOM and ImageNet, and its detectors can be applied to new data domains, such as Microblog. Our VSTM is built on this model.

Borth and Ji used VSO to construct SentiBank and predicted the sentiment of an image with SentiBank. SentiBank is a library of trained concept detectors that provides a mid-level visual representation. Therefore, SentiBank is more abstract than low-level features and is well suited for representing the sentiment of an image. The framework for building SentiBank is shown in Fig. 2. Firstly, ANPs with sentiment are extracted from image annotations. Secondly, the ANPs are used to train VSO detectors. Thirdly, 1,200 detectors are selected to construct SentiBank. Finally, SentiBank is used for sentiment prediction.

Furthermore, for an image, the sentiment prediction process proceeds as follows (Fig. 2; a minimal sketch of this pipeline is given after the steps):

Step 1 Use SentiBank to obtain a 1,200-dimensional visual sentiment feature response.
Step 2 Train a classifier to learn the weight of each visual sentiment feature for all sentiment categories.
Step 3 Use the trained classifier to predict the sentiment category.
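The sketch below illustrates this VSO/SentiBank pipeline, assuming the 1,200-dimensional SentiBank responses have already been extracted for each image. The array names and the linear SVM classifier are illustrative assumptions, not the authors' implementation (the experiments in this paper use WEKA classifiers).

import numpy as np
from sklearn.svm import LinearSVC

def train_vso_classifier(X, y):
    # Step 2: X is an (images x 1200) matrix of SentiBank ANP responses and
    # y holds the labelled sentiment categories; the classifier learns a
    # weight for every visual sentiment feature.
    clf = LinearSVC()
    clf.fit(X, y)
    return clf

def predict_sentiment(clf, response):
    # Step 3: predict the sentiment category from a 1,200-dim response vector.
    return clf.predict(np.asarray(response).reshape(1, -1))[0]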

In the above three steps, SentiBank is the key of the VSO model. However, the VSO model misses the relevance between images in the same topic. For different kinds of topics, the discriminative visual sentiment features are different. For example, Chinese people like chrysanthemums in images, while Italians dislike them.

3.2 Visual sentiment topic

In the previous section, we briefly introduced the VSO-based model. However, it has two main problems. Firstly, the VSO-based model cannot indicate which ANP is highly related to the main sentiment of an image. Secondly, Microblog images within the same topic are related, but the VSO-based model cannot fuse multi-image information in sentiment prediction.

To address these problems, we use multi-image information in the same topic to construct a Visual Sentiment Topic (VST). Here, a VST denotes the visual sentiment ontology information of a topic, and it is quite different from the traditional concept of a Visual Topic (VT) [31].

Fig. 2 Framework of VSO based sentiment prediction


A VT is defined as a semantically correlated set of keywords and their corresponding visual content. Thus, VT concerns the important visual objects in an image, while VST focuses on the important visual sentiment ontology in an image. The reason for proposing the concept of VST instead of using VT is that the important visual objects in a VT may have nothing to do with the main sentiment information of a topic. For example, Fig. 1b shows serious smog. However, the important objects in this figure (windows, buildings and cars) are irrelevant to the negative sentiment about smog, because the smog is treated as background in VT.

Because a VST expresses the visual sentiment ontology information of a topic, we use the VSO-based SentiBank to construct the VST. The construction process is as follows (Fig. 3; a short sketch follows the steps):

Step 1 Use VSO to train SentiBank detectors.
Step 2 Gather all images in a topic.
Step 3 Use the SentiBank detectors to obtain the basic visual sentiment features.
Step 4 Use all visual sentiment features to build a statistical model of the VST.
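The following sketch illustrates Steps 2–4 under the assumption that the SentiBank responses are already available per image; `responses_by_image` and `topic_members` are hypothetical names used only for illustration.

import numpy as np

def build_vst(responses_by_image, topic_members):
    # Gather the 1,200-dim SentiBank responses of every image in the topic
    # and summarize them into a single normalized feature distribution,
    # which serves as the statistical model of the VST.
    R = np.vstack([responses_by_image[i] for i in topic_members])
    totals = R.sum(axis=0)          # accumulated response of each ANP detector
    return totals / totals.sum()    # normalized visual sentiment topic model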

3.3 Topic model and background topic model

In the previous section, we gave the basic idea of VSTM. In this section, we detail the model. First of all, the basic notation is defined as follows:

For an image I, assume that I belongs to a topic T. Furthermore, we have m samples (or images) of topic T, denoted T = {I_1, I_2, ..., I_m}. After applying SentiBank, we obtain n visual sentiment features of I, denoted {vf_1, vf_2, ..., vf_n}, and the response of vf_i provided by SentiBank is denoted rf_i.

The goal of VSTM is to find the discriminative sentiment features. Thus, if we obtain the topic model θ_T, we can estimate the relevance of each visual sentiment feature by p(vf_i | θ_T).

Fig. 3 Visual sentiment topic construction


However, we only have m samples of topic T, so the question is how to estimate θ_T. Using Maximum Likelihood Estimation, the problem is formulated as follows:

\theta_T^{ML} = \arg\max_{\theta} L(\theta_T \mid T) = \arg\max_{\theta} p(T \mid \theta_T) \qquad (1)

Suppose that the visual sentiment features follow a multinomial distribution. Then Eq. 1 can be rewritten as follows:

\theta_T^{ML} = \arg\max_{\theta} \; n! \prod_{vf_i \in T} \frac{p(vf_i \mid \theta_T)^{c(vf_i,\theta_T)}}{c(vf_i,\theta_T)!} \qquad (2)

where c(vf_i, θ_T) denotes the sentiment response of vf_i in topic T, and n is the number of images included in topic T.

After solving for the extreme point of Eq. 2, p(vf_i | θ_T) is estimated as follows:

p(vf_i \mid \theta_T) = \frac{\sum_{I \in T} rf_i}{\sum_{I \in T} \sum_{j=1}^{n} rf_j} \qquad (3)

In a similar way, the probability of vf_i in image I can be estimated as follows:

p(vf_i \mid \theta_I) = \frac{rf_i}{\sum_{j=1}^{n} rf_j} \qquad (4)
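A small sketch of Eqs. 3 and 4 follows, assuming `topic_responses` is an m x n matrix holding the SentiBank responses rf of the m images in topic T and `image_response` is the n-dimensional response vector of a single image I; the variable names are illustrative.

import numpy as np

def p_vf_given_topic(topic_responses):
    # Eq. 3: sum each feature's response over all images in the topic and
    # normalize by the total response mass of the topic.
    summed = topic_responses.sum(axis=0)
    return summed / summed.sum()

def p_vf_given_image(image_response):
    # Eq. 4: normalize a single image's responses.
    resp = np.asarray(image_response, dtype=float)
    return resp / resp.sum()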

Although we can estimate each VST directly, there is still a problem in selecting the most discriminative sentiment features, because some non-discriminative sentiment features have high probability in most topics. Therefore, we need a background topic model to distinguish non-discriminative from discriminative sentiment features.

Like the topic model above, the background topic model θ_B is estimated as follows:

p(vf_i \mid \theta_B) = \frac{\sum_{I \in B} rf_i}{\sum_{I \in B} \sum_{j=1}^{n} rf_j} \qquad (5)

Based on topic T and background topic B, the relevance (or discriminative power) of each visual sentiment feature in topic T is estimated as follows:

rel(vf_i \mid T) = p(vf_i \mid \theta_I) \log\!\left(\frac{p(vf_i \mid \theta_T)}{p(vf_i \mid \theta_B)}\right), \quad I \in T \qquad (6)
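Continuing the sketch, Eqs. 5 and 6 can be estimated as follows. Here `background_responses` stacks the responses of all images used as the background B, and the small epsilon guarding against zero probabilities is an implementation detail assumed for illustration, not specified in the paper.

import numpy as np

def p_vf_given_background(background_responses):
    # Eq. 5: same form as Eq. 3, but estimated over the background set B.
    summed = background_responses.sum(axis=0)
    return summed / summed.sum()

def relevance(image_response, p_topic, p_background, eps=1e-12):
    # Eq. 6: rel(vf_i | T) = p(vf_i | theta_I) * log(p(vf_i | theta_T) / p(vf_i | theta_B))
    p_image = np.asarray(image_response, dtype=float)
    p_image = p_image / p_image.sum()
    return p_image * np.log((p_topic + eps) / (p_background + eps))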

Based on the above model, the sentiment prediction process of VSTM is as follows (Fig. 4; an end-to-end sketch follows the steps):

Step 1 Compute the sentiment features of an image.
Step 2 Compare the obtained sentiment features with the visual sentiment topic features to find topic-relevant sentiment features (discriminative sentiment features).
Step 3 Train a classifier to determine the visual sentiment of the image.
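An end-to-end sketch of these three steps is shown below. The paper does not state how many topic-relevant features are kept, so the top-k selection (k = 200) and the linear SVM are assumptions made only for illustration; the actual experiments use WEKA classifiers.

import numpy as np
from sklearn.svm import LinearSVC

def select_discriminative(image_response, p_topic, p_background, k=200, eps=1e-12):
    # Step 2: score every feature with Eq. 6 and keep only the k most
    # topic-relevant (discriminative) responses; the rest are zeroed out.
    resp = np.asarray(image_response, dtype=float)
    p_image = resp / resp.sum()
    rel = p_image * np.log((p_topic + eps) / (p_background + eps))
    keep = np.argsort(rel)[-k:]        # indices of the k most topic-relevant ANPs
    masked = np.zeros_like(resp)
    masked[keep] = resp[keep]          # keep only the discriminative responses
    return masked

def train_vstm_classifier(X_selected, y):
    # Step 3: train a sentiment classifier on the selected features.
    clf = LinearSVC()
    clf.fit(X_selected, y)
    return clf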


4 Experiment and analysis

In this section, we give the details of the experiments, including the experiment setup, the visual sentiment prediction results and several case studies.

4.1 Experiment setup

To test the performance of VSTM, we crawled 4,088 images from 44 hot topics (Table 1) on Sina Microblog, the largest Microblog platform in China. All images were labeled by five students according to their own judgments, and voting was used to resolve inconsistent labels.

All images are labelled with one of three sentiment categories: positive, negative and neutral. The distribution of the data is listed in Table 2. The evaluation measure is the precision of sentiment prediction:

\text{precision} = \frac{\text{number of true predictions}}{\text{number of true predictions} + \text{number of false predictions}}

All experiments are performed in WEKA with 10-fold cross-validation.

Fig. 4 VSTM based sentiment prediction

Table 1 Experiment dataset details

Topic category         Number of topics   Number of images
Television Programme   2                  352
Teleplay               9                  1,506
Movie                  5                  323
Technology             3                  203
Society                4                  191
Music                  7                  970
Game                   1                  293
Book                   1                  179
Entertainment          12                 71


4.2 Visual sentiment prediction

To show the performance of VSTM, we choose VSOM (the VSO-based model) as our baseline and use two classical classification methods, NaiveBayes and SVM, to test whether the selected sentiment features are useful across different classifiers. The experiment results are shown in Table 3.

The experiment results convey two encouraging messages. Firstly, VSTM outperforms VSOM: VSTM achieves 57.045 % and 58.9041 % precision, higher than VSOM with both classical classification methods. Secondly, the sentiment features selected by VSTM are useful for different classification methods. Both NaiveBayes and SVM achieve a significant improvement: the precision of NaiveBayes is improved from 48.7769 % to 57.045 %, and the precision of SVM is improved from 54.2808 % to 58.9041 %. The reason behind these improvements is that VSTM selects more discriminative visual sentiment ontology features.

Furthermore, we compare the performance on all 44 topics (Fig. 5). VSTM with SVM achieves the best performance across the 44 topics, and VSTM obtains improvements on 80 % and 100 % of the topics for NaiveBayes and SVM respectively. These results show that VSTM improves the visual sentiment prediction performance for most topics.

4.3 Case study

To show the advantage of VSTM, we present two cases. In the first case, we show the connection between a VST and its images. In the second case, we show some examples of sentiment prediction.

4.3.1 Case 1

In this case, we select four images (Fig. 6) from Sina Microblog to explain why visual sentiment topic information is useful for determining the discriminative visual sentiment ontology features.

In Fig. 6, images 1 and 2 belong to the same topic, "typhoon 'TianTu'", while images 3 and 4 are not relevant to this topic. Image 1 shows the track of the typhoon in a satellite cloud picture; image 2 shows a withered flower during the typhoon; image 3 shows a photo of trees; image 4 shows a photo of a sunset. Statistics of all four images and the visual sentiment topic "typhoon 'TianTu'" are shown in Fig. 6e. In this figure, the visual sentiment features are numbered in ascending order of their KL divergence rank in the topic.

Table 2 Sentiment distribution

Category   Number
Positive   2,220
Negative   897
Neutral    971

Table 3 Visual sentiment prediction performance (bold font indicates the best performance)

Method            Precision
VSOM+NaiveBayes   48.7769 %
VSOM+SVM          54.2808 %
VSTM+NaiveBayes   57.045 %
VSTM+SVM          58.9041 %


Since each image contains 1,200 visual sentiment features, a line chart with all data points is too cluttered to read. We therefore compute the trend line of the visual sentiment features of each image by sixth-order polynomial fitting and compare the trend lines with the visual sentiment topic features in ascending order. The results show that the curves of images 1 and 2 match the ascending trend of the topic, while images 3 and 4 deviate completely from it.
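As a hypothetical illustration of how such trend lines can be computed (the exact plotting procedure is not given in the paper), a sixth-order polynomial can be fitted to an image's responses ordered by the topic's feature ranking:

import numpy as np

def feature_trend(responses_in_topic_order, order=6):
    # Fit a sixth-order polynomial to the rank-ordered responses and return
    # the smoothed trend curve, as used for the comparison in Fig. 6e.
    y = np.asarray(responses_in_topic_order, dtype=float)
    x = np.arange(len(y))
    coeffs = np.polyfit(x, y, order)
    return np.polyval(coeffs, x)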

To give a direct comparison of the four images with the topic, we compute the KL divergence between each image and the topic (Table 4). The results also show that images 1 and 2 are closer to the visual sentiment topic than images 3 and 4.
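The KL divergence reported in Table 4 compares an image's normalized feature distribution with the topic distribution; a minimal sketch, assuming both inputs are probability vectors of equal length, is given below.

import numpy as np

def kl_divergence(p_image, p_topic, eps=1e-12):
    # KL(image || topic): smaller values mean the image is closer to the topic.
    p = np.asarray(p_image, dtype=float) + eps
    q = np.asarray(p_topic, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))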

Fig. 5 Visual sentiment prediction performance in 44 topics

Fig. 6 Four image samples ((a)–(d): images 1–4) and (e) their visual sentiment feature distributions compared with the visual sentiment topic (images 1 and 2 belong to the topic)


4.3.2 Case 2

In the following, we use six images (Table 5) to illustrate the sentiment prediction performance. In Table 5, we compare VSTM with VSOM. The results show that both VSTM and VSOM give the correct sentiment for images 2, 3 and 4, while only VSTM predicts the correct sentiment for images 1, 5 and 6.

Table 4 KL divergence comparison between four images and a visual sentiment topic (the smaller the better)

                   KL divergence
Image 1 vs topic   0.067397449
Image 2 vs topic   0.056228995
Image 3 vs topic   0.128471536
Image 4 vs topic   0.155120469

Table 5 Examples of sentiment prediction


After carefully examining all six images, we find that the sentiments of images 1, 5 and 6 require deeper semantic understanding. In image 1, the negative sentiment about the damage caused by the typhoon is hidden in the satellite cloud picture. In image 5, all the colors of the game suggest a positive sentiment, but the image actually shows a game failure and carries a negative sentiment. In image 6, the rude behavior is not easily detected by classical computer vision techniques.

The above cases show that VSTM helps find the discriminative visual sentiment ontology features of an image and improves the sentiment prediction results.

5 Conclusion

Detecting the sentiment of social media images is a challenging research problem, as it requires understanding the hidden semantic meaning of images. The state-of-the-art VSO method has trouble finding the discriminative visual sentiment ontology features of images within the same topic. To solve that problem, we propose a Visual Sentiment Topic Model based approach which gathers the images in the same Microblog topic to enhance the visual sentiment analysis results. The main advantage of our approach is that the discriminative visual sentiment ontology features are selected according to the visual sentiment topic information. The experiment results show that our approach performs better than the VSO-based model.

Acknowledgments This work was supported by the National Nature Science Foundation of China (No. 61402386, No. 61305061 and No. 61202143), the Nature Science Foundation of Fujian Province (No. 2014J01249 and No. 2011J01367), the Doctoral Program Foundation of Institutions of Higher Education of China (No. 20090121110032), the Shenzhen Science and Technology Research Foundation (No. JC200903180630A) and the Special Fund for Developing Shenzhen's Strategic Emerging Industries (No. JCYJ20120614164600201).

References

1. Alec Go, Richa Bhayani, Lei Huang (2009) Twitter sentiment classification using distant supervision
2. Bing Li, Songhe Feng, Weihua Xiong, Weiming Hu (2012) Scaring or pleasing: exploit emotional impact of an image. Proceedings of the 20th ACM International Conference on Multimedia (MM), pages 1365–1366
3. Bing Li, Weihua Xiong, Weiming Hu, Xinmiao Ding (2012) Context-aware affective images classification based on bilayer sparse representation. ACM MM, pages 721–724
4. Bi Chen, Leilei Zhu, Daniel Kifer, Dongwon Lee (2010) What is a sentiment about? Exploring political standpoints using sentiment scoring model. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-2010)
5. Damian Borth, Rongrong Ji, Tao Chen, Thomas Breuel, Shih-Fu Chang (2013) Large-scale visual sentiment ontology and detectors using adjective noun pairs. Proceedings of the 21st ACM International Conference on Multimedia (MM), pages 223–232
6. Datta R, Joshi D, Li J, Wang J (2006) Studying aesthetics in photographic images using a computational approach. ECCV, pages 288–301
7. Ronen Feldman, Benjamin Rosenfeld, Roy Bar-Haim, Moshe Fresko (2011) The Stock Sonar - sentiment analysis of stocks based on a hybrid approach. Proceedings of the 23rd IAAI Conference on Artificial Intelligence (IAAI-2011)
8. Gao Y, Tang J, Hong R, Yan S, Dai Q, Zhang N, Chua T-S (2012) Camera constraint-free view-based 3D object retrieval. IEEE Trans Image Process 21(4):2269–2281
9. Gao Y, Wang M, Zha Z, Shen J, Li X, Wu X (2013) Visual-textual joint relevance learning for tag-based social image search. IEEE Trans Image Process 22(1):363–376
10. Gao Y, Wang M, Zha Z, Tian Q, Dai Q, Zhang N (2011) Less is more: efficient 3D object retrieval with query view selection. IEEE Trans Multimedia 11(5):1007–1018
11. Georg Groh, Jan Hauffa (2011) Characterizing social relations via NLP-based sentiment analysis. Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (ICWSM-2011)
12. Jia Jia, Sen Wu, Xiaohui Wang, Peiyun Hu, Lianhong Cai, Jie Tang (2012) Can we understand van Gogh's mood? Learning to infer affects from images in social networks. Proceedings of the 20th ACM International Conference on Multimedia (MM), pages 857–860
13. Mahesh Joshi, Dipanjan Das, Kevin Gimpel, Noah A. Smith (2010) Movie reviews and revenues: an experiment in text regression. Proceedings of the North American Chapter of the Association for Computational Linguistics Human Language Technologies Conference (NAACL 2010)
14. Lanjun Zhou, Binyang Li, Wei Gao, Zhongyu Wei, Kam-Fai Wong (2011) Unsupervised discovery of discourse relations for eliminating intra-sentence polarity ambiguities. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 162–171
15. Jingjing Liu, Yunbo Cao, Chin-Yew Lin, Yalou Huang, Ming Zhou (2007) Low-quality product review detection in sentiment summarization. Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL-2007)
16. Long Jiang, Mo Yu (2011) Target-dependent Twitter sentiment classification. ACL, pages 151–160
17. Machajdik J, Hanbury A (2010) Affective image classification using features inspired by psychology and art theory. ACM Multimedia, pages 83–92
18. Marchesotti L, Perronnin F, Larlus D, Csurka G (2011) Assessing the aesthetic quality of photographs using generic image descriptors. ICCV, pages 1784–1791
19. Mary McGlohon, Natalie Glance, Zach Reiter (2010) Star quality: aggregating reviews to rank products and merchants. Proceedings of the International Conference on Weblogs and Social Media (ICWSM-2010)
20. Minqing Hu, Bing Liu (2004) Mining and summarizing customer reviews. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 168–177
21. Bo Pang, Lillian Lee, Shivakumar Vaithyanathan (2002) Thumbs up? Sentiment classification using machine learning techniques. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 79–86
22. Sicheng Zhao, Hongxun Yao, Fanglin Wang, Xiaolei Jiang, Wei Zhang (2014) Emotion based image musicalization. IEEE International Conference on Multimedia & Expo Workshops
23. Sicheng Zhao, Hongxun Yao, Xiaoshuai Sun, Pengfei Xu, Xianming Liu, Rongrong Ji (2011) Video indexing and recommendation based on affective analysis of viewers. ACM MM, pages 1473–1476
24. Sicheng Zhao, Hongxun Yao, Xiaoshuai Sun, Xiaolei Jiang, Pengfei Xu (2013) Flexible presentation of videos based on affective content analysis. International Conference on Multimedia Modeling
25. Sicheng Zhao, Hongxun Yao, You Yang, Yanhao Zhang (2014) Affective image retrieval via multi-graph learning. ACM International Conference on Multimedia
26. Sicheng Zhao, Yue Gao, Xiaolei Jiang, Hongxun Yao, Tat-Seng Chua, Xiaoshuai Sun (2014) Exploring principles-of-art features for image emotion recognition. ACM International Conference on Multimedia
27. Theresa Wilson, Janyce Wiebe, Paul Hoffmann (2005) Recognizing contextual polarity in phrase-level sentiment analysis. Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT), pages 347–354
28. Andranik Tumasjan, Timm O. Sprenger, Philipp G. Sandner, Isabell M. Welpe (2010) Predicting elections with Twitter: what 140 characters reveal about political sentiment. Proceedings of the International Conference on Weblogs and Social Media (ICWSM-2010)
29. Vassilios Vonikakis, Stefan Winkler (2012) Emotion-based sequence of family photos. Proceedings of the 20th ACM International Conference on Multimedia (MM), pages 1371–1372
30. Theresa Wilson, Janyce Wiebe, Rebecca Hwa (2004) Just how mad are you? Finding strong and weak sentiment clauses. Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 761–767
31. Xianming Liu, Hongxun Yao, Rongrong Ji, Pengfei Xu, Xiaoshuai Sun, Qi Tian (2010) Visual topic model for web image annotation. ICIMCS '10
32. Xin Lu, Poonam Suryanarayan, Reginald B. Adams Jr., Jia Li, Michelle G. Newman, James Z. Wang (2012) On shape and the computability of emotions. ACM MM, pages 229–238
33. Tae Yano, Noah A. Smith (2010) What's worthy of comment? Content and comment volume in political blogs. Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM 2010)
34. Yanulevskaya V, et al. (2008) Emotional valence categorization using holistic image features. ICIP, pages 101–104
35. Yanulevskaya V, et al. (2012) In the eye of the beholder: employing statistical analysis and eye tracking for analyzing abstract paintings. ACM MM, pages 349–358
36. Yue Gao, Fanglin Wang, Huabo Luan, Tat-Seng Chua (2014) Brand data gathering from live social media streams. ACM International Conference on Multimedia Retrieval
37. Zhao S, Yao H, Sun X (2013) Video classification and recommendation based on affective analysis of viewers. Neurocomputing 119:101–110


Donglin Cao is currently an assistant professor at the Department of Cognitive Science, School of Information Science and Engineering, Xiamen University. His research interests are computer vision and multimedia analysis.

Rongrong Ji is currently a professor at the Department of Cognitive Science, School of Information Science and Engineering, Xiamen University. His research interests are multimedia computing, content analytics and their applications.

Dazhen Lin is currently an assistant professor at the Department of Cognitive Science, School of Information Science and Engineering, Xiamen University. His research interest is information retrieval.


Shaozi Li is currently a professor at the Department of Cognitive Science, School of Information Science and Engineering, Xiamen University. His research interest is computer vision.
