
PerTrust: leveraging personality and trust for group recommendations



COPYRIGHT AND CITATION CONSIDERATIONS FOR THIS THESIS/ DISSERTATION

o Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

o NonCommercial — You may not use the material for commercial purposes.

o ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.

How to cite this thesis

Surname, Initial(s). (2012) Title of the thesis or dissertation. PhD. (Chemistry)/ M.Sc. (Physics)/ M.A. (Philosophy)/M.Com. (Finance) etc. [Unpublished]: University of Johannesburg. Retrieved from: https://ujdigispace.uj.ac.za (Accessed: Date).


PerTrust: Leveraging personality and trust for group recommendations

by

Justin Sean Leonard

Dissertation submitted in fulfilment of the requirements for the degree

Magister Scientiae

in the subject of

Information Technology

in the

Faculty of Science

at the

University of Johannesburg

Supervisor

Professor Marijke Coetzee

January 2014


Declaration

I, Justin Sean Leonard, hereby declare that:

The work in this dissertation is my own work;

All sources used and referred to have been documented and recognised;

This document has not previously been submitted in full or partial fulfilment of the requirements for an equivalent or higher qualification at any other recognised educational institution.

___________________________________

Justin Sean Leonard


Acknowledgements

It is my desire to acknowledge the following people for their contribution and assistance with this dissertation:

My supervisor, Professor Marijke Coetzee, for her patience, input, guidance, and willingness to always help. Without her assistance, this dissertation would not be.

My wife, for her love, encouragement and support along the way. Again, this dissertation would not have been without her by my side through it all.

My family and friends, for their constant help, understanding, and support.

Christ, whose grace is all sufficient in every season of life and who is the faithful enabler of strength and ability in all things.


Abstract

Recommender systems assist a system user to identify relevant content within a specific context. This is typically performed through an analysis of a system user's rating habits and personal preferences, leveraging these to return one or a number of relevant recommendations. There are numerous contexts in which recommender systems can be applied, such as movies, tourism, books, and music.

The need for recommender systems has become increasingly relevant, particularly on the Internet. This is mainly due to the exponential amount of content that is published online on a daily basis. It has thus become more time consuming and difficult to find pertinent information online, leading to information overload. The relevance of a recommender system, therefore, is to assist a system user to overcome the information overload problem by identifying pertinent information on their behalf.

Much research has been done within the recommender system field on how such systems can best recommend items to an individual user. However, a growing and more recent research area is how recommender systems can be extended to recommend items to groups, known as group recommendation. The relevance of group recommendation is that many contexts of recommendation apply to both individuals and groups. For example, people often watch movies or visit tourist attractions as part of a group.

Group recommendation is an inherently more complex form of recommendation than individual recommendation for a number of reasons. The first reason is that the rating habits and personal preferences of each system user within the group need to be considered. Additionally, these rating habits and personal preferences can be quite heterogeneous in nature. Group recommendation therefore becomes complex because a satisfactory recommendation needs to be one which meets the preferences of every group member, not just a single group member.

The second reason why group recommendation is considered to be more complex than individual recommendation is that a group not only includes multiple personal preferences, but also multiple personality types. This means that a group is more complex from a social perspective. Therefore, a satisfactory group recommendation needs to be one which considers the varying personality types and behaviours of the group.

The purpose of this research is to present PerTrust, a generic framework for group recommendation that provides a possible solution to the issues noted above. The primary focus of PerTrust is how to leverage both personality and trust in overcoming these issues.


Contents

Chapter 1: Introduction

1.1 Introduction .......................................................................................................................... 2

1.2 Motivation ............................................................................................................................ 4

1.3 Research methodology ........................................................................................................ 5

1.4 Problem statement ............................................................................................................... 5

1.5 Important terms .................................................................................................................... 6

1.6 Layout of dissertation ........................................................................................................... 7

Chapter 2: Group recommender systems

2.1 Introduction ........................................................................................................................ 12

2.2 Defining a group recommender system .............................................................................. 13

2.2.1 Recommender system ........................................................................................... 13

2.2.2 Group recommender system.................................................................................. 13

2.3 Types of recommender systems ......................................................................................... 14

2.3.1 Content-based recommender systems ................................................................... 14

2.3.2 Collaborative filtering-based recommender systems .............................................. 14

2.3.3 Trust-based recommender systems ....................................................................... 15

2.3.4 Motivating a type of recommender system ............................................................. 15

2.4 Group recommendation ...................................................................................................... 19

2.4.1 Group formation .................................................................................................... 20

2.4.2 Preference elicitation ............................................................................................. 22

2.4.3 Recommendation aggregation ............................................................................... 22

2.5 Related work ...................................................................................................................... 24

2.5.1 Group recommender systems ................................................................................ 25

2.5.2 Trust-based recommender systems ....................................................................... 26

2.6 Requirements for a trust-based group recommender system .............................................. 28

2.7 Conclusion ......................................................................................................................... 29

Chapter 3: Trust network concepts

3.1 Introduction ........................................................................................................................ 32

3.2 Definition ............................................................................................................................ 32

3.3 Visually representing trust networks ................................................................................... 33

3.3.1 Sociogram ............................................................................................................. 33

3.3.2 Activity matrix ........................................................................................................ 33

3.3.3 Motivating a visual representation of trust networks ............................................... 34

3.4 Measuring and representing trust relationships ................................................................... 34

3.4.1 Relationship flow ................................................................................................... 34

3.4.2 Measuring relationship flows .................................................................................. 36


3.5 Types of trust networks ...................................................................................................... 37

3.5.1 Egocentric trust network ........................................................................................ 37

3.5.2 Sociocentric trust network ...................................................................................... 38

3.6 Motivating an egocentric, directed, and interval-measured trust network .............................. 39

3.7 Conclusion ......................................................................................................................... 40

Chapter 4: A background to trust in recommender systems

4.1 Introduction ........................................................................................................................ 43

4.2 Defining trust ...................................................................................................................... 44

4.3 Properties of trust ............................................................................................................... 45

4.3.1 Transitivity ............................................................................................................. 45

4.3.2 Composability ........................................................................................................ 47

4.3.3 Personalisation ...................................................................................................... 48

4.3.4 Asymmetry ............................................................................................................ 48

4.4 The implementation of trust in recommender systems ........................................................ 49

4.4.1 Calculating trust at a high level .............................................................................. 49

4.4.2 Calculating trust with a trust metric ........................................................................ 50

4.5 Conclusion ......................................................................................................................... 52

Chapter 5: Trust in recommender systems

5.1 Introduction ........................................................................................................................ 55

5.2 Requirements for a trust algorithm ...................................................................................... 55

5.3 Reference scenario ............................................................................................................ 57

5.4 State-of-the-art trust-based recommendation algorithms ..................................................... 59

5.4.1 Trust-based weighted mean with TidalTrust ........................................................... 60

5.4.2 Trust-based collaborative filtering with MoleTrust ................................................... 65

5.4.3 Trust-based filtering with profile and item level trust ............................................... 70

5.4.4 Structural trust inference algorithm ........................................................................ 77

5.4.5 EnsembleTrustCF.................................................................................................. 83

5.5 Analysis of results .............................................................................................................. 87

5.6 Conclusion ......................................................................................................................... 89

Chapter 6: Empirical evaluation of trust-based algorithms

6.1 Introduction ........................................................................................................................ 91

6.2 Datasets ............................................................................................................................ 91

6.2.1 Background ........................................................................................................... 92

6.2.2 Epinions dataset evaluation ................................................................................... 93

6.3 Measurements ................................................................................................................... 94

6.3.1 Accuracy ............................................................................................................... 94

6.3.2 Coverage .............................................................................................................. 95


6.4 Baseline algorithms ............................................................................................................. 96

6.5 Evaluation ........................................................................................................................... 97

6.6 Evaluation of the results .................................................................................................... 101

6.6.1 Observations on Victor’s (2010) evaluation of the Epinions reviews dataset .......... 101

6.6.2 Observations on Victor’s (2010) evaluation of the Epinions products dataset ........ 102

6.6.3 Observations on O’Doherty’s (2012) evaluation of the Epinions products dataset .. 104

6.6.4 Motivation for a trust-based algorithm ................................................................... 105

6.7 Conclusion ........................................................................................................................ 105

Chapter 7: Group recommendation: preference elicitation

7.1 Introduction ....................................................................................................................... 109

7.2 Scenario ........................................................................................................................... 109

7.3 Background....................................................................................................................... 110

7.4 Adapting the EnsembleTrustCF algorithm ......................................................................... 111

7.4.1 Define prerequisites .............................................................................................. 111

7.4.2 Stepwise preference elicitation process ................................................................ 112

7.5 Example application .......................................................................................................... 113

7.5.1 Defining prerequisites for the algorithm ................................................................. 113

7.5.2 Stepwise preference elicitation process ................................................................ 114

7.6 Conclusion ........................................................................................................................ 117

Chapter 8: Group recommendation: rating prediction

8.1 Introduction ....................................................................................................................... 120

8.2 Scenario ........................................................................................................................... 121

8.3 Personality ........................................................................................................................ 122

8.3.1 Approaches to catering for social influences ......................................................... 123

8.3.2 Motivating the Thomas-Kilmann Instrument (TKI) approach .................................. 124

8.3.3 Determining personality: The TKI test ................................................................... 125

8.3.4 Applying the TKI results ........................................................................................ 127

8.4 Trust ................................................................................................................................. 128

8.5 Rating prediction algorithms combining personality and trust ............................................ 129

8.5.1 Personality-based rating prediction ....................................................................... 129

8.5.2 Delegation-based rating prediction ........................................................................ 130

8.5.3 Influence-based rating prediction .......................................................................... 131

8.6 Empirical evaluation .......................................................................................................... 131

8.6.1 Background to evaluation of rating prediction algorithms ..................... 132

8.6.2 Evaluation metrics ................................................................................................ 133

8.6.3 Test cases ............................................................................................................ 133

8.6.4 Results ................................................................................................................. 134

8.7 Example application .......................................................................................................... 138


8.7.1 Personality ........................................................................................................... 138

8.7.2 Trust ..................................................................................................................... 140

8.7.3 Determining the group recommendation with the TPDBR algorithm ...................... 141

8.8 Conclusion ........................................................................................................................ 144

Chapter 9: Group recommendation: aggregation

9.1 Introduction ....................................................................................................................... 147

9.2 Scenario ........................................................................................................................... 147

9.3 Aggregation models .......................................................................................................... 148

9.3.1 Additive utilitarian model ..................................................................................... 148

9.3.2 Multiplicative utilitarian model ............................................................................ 149

9.3.3 Average model .................................................................................................. 150

9.3.4 Average without misery model ........................................................................... 150

9.3.5 Least misery model ............................................................................................ 151

9.3.6 Most pleasure model ......................................................................................... 152

9.3.7 Fairness model ................................................................................................... 153

9.3.8 Plurality voting model.......................................................................................... 154

9.3.9 Approval voting model ....................................................................................... 155

9.3.10 Borda count model ............................................................................................ 155

9.3.11 Copeland rule model .......................................................................................... 156

9.3.12 Summary ............................................................................................................ 157

9.4 Evaluation of aggregation models...................................................................................... 158

9.4.1 Evaluation results from Masthoff (2011) .............................................................. 159

9.4.2 Evaluation results from Baltrunas et al. (2010) .................................................... 160

9.4.3 Evaluation results from Gartrell et al. (2010) ....................................................... 160

9.4.4 Motivating an aggregation model ........................................................................ 160

9.5 Conclusion ........................................................................................................................ 161

Chapter 10: Group recommendation: satisfaction

10.1 Introduction ..................................................................................................................... 163

10.2 Individual satisfaction ...................................................................................................... 163

10.2.1 Expected search length (ESL) measure (Quijano-Sanchez et al., 2013) .......... 163

10.2.2 Satisfaction measure by Carvalho and Macedo (2013) .................................... 164

10.2.3 Mean absolute error (MAE) measure by Garcia et al. (2012) ........................... 165

10.2.4 Masthoff’s (2004) individual satisfaction function ............................................. 165

10.2.5 Motivating an individual satisfaction function ................................................... 166

10.3 Group satisfaction ........................................................................................................... 167

10.3.1 Measuring group satisfaction .......................................................................... 167

10.4 Example application ........................................................................................................ 169

10.4.1 Individual satisfaction ...................................................................................... 169


10.4.2 Group satisfaction ........................................................................................... 171

10.5 Conclusion ..................................................................................................................... 173

Chapter 11: Introducing PerTrust – a personality and trust-based group recommender model

11.1 Introduction ..................................................................................................................... 176

11.2 PerTrust architecture ....................................................................................................... 177

11.3 PerTrust system components .......................................................................................... 178

11.3.1 The group component ..................................................................................... 178

11.3.2 The client component...................................................................................... 179

11.3.3 The group recommendation component .......................................................... 179

11.3.4 The database component ............................................................................... 181

11.4 Conclusion ...................................................................................................................... 182

Chapter 12: The PerTrust model

12.1 Introduction ..................................................................................................................... 184

12.2 Registration components ................................................................................................. 184

12.2.1 Basic information component .......................................................................... 185

12.2.2 Personality information component ................................................................. 185

12.2.3 Social relations information component ........................................................... 188

12.2.4 Rating history information component ............................................................. 191

12.3 Preference elicitation components ................................................................................... 195

12.3.1 Registered user retrieval component ............................................................... 196

12.3.2 Similar and trusted user identification component ............................................ 196

12.3.3 Recommendation retrieval component ............................................................ 199

12.3.4 Top-N recommendation component ................................................................ 201

12.4 Aggregation components................................................................................................. 201

12.4.1 Rating matrix formation component ................................................................. 201

12.4.2 Personality and trust influence component ...................................................... 202

12.4.3 Aggregation model component ....................................................................... 205

12.4.4 Satisfaction component ................................................................................... 205

12.5 Conclusion ...................................................................................................................... 208

Chapter 13: PerTrust evaluation

13.1 Introduction ..................................................................................................................... 210

13.2 Dataset ........................................................................................................................... 211

13.2.1 Selecting a dataset for evaluation ................................................................... 211

13.2.2 Method of capturing data ................................................................................ 212

13.2.3 Limitations of the dataset ................................................................................ 213

13.3 Evaluation considerations and models ............................................................................. 214

13.3.1 Base experiments ........................................................................................... 214


13.3.2 Evaluation models .......................................................................................... 215

13.4 Evaluation results of the PerTrust model using the dataset .............................................. 217

13.4.1 Experiment 1: Overall model performance ...................................................... 217

13.4.2 Experiment 2: Personality and trust ................................................................. 221

13.4.3 Experiment 3: Satisfaction .............................................................................. 224

13.4.4 Summary of results ......................................................................................... 226

13.5 Proposing a configuration for the PerTrust model ............................................................ 226

13.5.1 Top-N and similarity configuration ................................................................... 226

13.5.2 Aggregation configuration ............................................................................... 227

13.6 Conclusion ...................................................................................................................... 228

Chapter 14: PerTrust model prototype

14.1 Introduction ..................................................................................................................... 230

14.2 Background ..................................................................................................................... 230

14.3 Class implementation ...................................................................................................... 233

14.3.1 Registration components ................................................................................ 233

14.3.2 Group recommendation .................................................................................. 240

14.4 Conclusion ...................................................................................................................... 249

Chapter 15: Conclusion

15.1 Introduction ..................................................................................................................... 251

15.2 Reviewing the research objectives .................................................................................. 251

15.3 Research contributions .................................................................................................... 257

15.4 Limitations of the research .............................................................................................. 258

15.5 Further research.............................................................................................................. 260

References ................................................................................................................................... 263

Paper published........................................................................................................................... 271


List of figures

Figure 1.1 Dissertation layout .................................................................................................. 10

Figure 2.1 Group recommendation process at a high level....................................................... 20

Figure 3.1 Basic sociogram representing a relationship between Adam and Ben ..................... 33

Figure 3.2 Sociogram representing a one-way relationship flow between Adam and Ben ......... 35

Figure 3.3 Sociogram representing a mutual relationship flow between Adam and Ben............ 35

Figure 3.4 Sociogram representing an existence relationship flow between Adam and Ben ..... 35

Figure 3.5 Sociogram representing a binary relationship measure between Adam and Ben ..... 36

Figure 3.6 Sociogram representing a signed measure between Adam and Ben ....................... 36

Figure 3.7 Sociogram representing a ranked measure between Adam and Ben ....................... 37

Figure 3.8 Sociogram representing an interval measure between Adam and Ben .................... 37

Figure 3.9 A sociocentric trust network about Adam and his friends ......................................... 38

Figure 3.10 A sociocentric trust network about Adam and his friends ......................................... 39

Figure 3.11 An extended egocentric trust network about Adam and his friends .......................... 40

Figure 4.1 Transitivity property of trust ..................................................................................... 46

Figure 4.2 Trust transitivity - Functional and referral trust ......................................................... 46

Figure 4.3 Composability property of trust ............................................................................... 47

Figure 4.4 Personalisation property of trust.............................................................................. 48

Figure 4.5 Asymmetry property of trust .................................................................................... 49

Figure 4.6 Determining trust between Adam and Franck .......................................................... 50

Figure 5.1 Reference scenario for trust algorithms ................................................................... 58

Figure 5.2 Structural trust inference algorithm - User and item sets ......................................... 79

Figure 6.1 Epinions products dataset rating distribution ........................................................... 94

Figure 7.1 Group recommendation scenario .......................................................................... 110

Figure 7.2 Example application: Generation of Adam’s top-4 preference list .......................... 117

Figure 8.1 TKI personality profiles ......................................................................................... 126

Figure 8.2 Precision test ........................................................................................................ 134

Figure 8.3 Group size test ..................................................................................................... 135

Figure 8.4 Homogeneity test.................................................................................................. 136

Figure 8.5 Trust strength test................................................................................................. 137

Figure 11.1 PerTrust architecture for group recommendation .................................................. 177

Figure 12.1 PerTrust model for group recommendation ........................................................... 184

Figure 13.1 Average one hit match percentage ....................................................................... 218

Figure 13.2 Number of hits per one hit match percentage ........................................................ 218

Figure 13.3 Average two hit match percentage ........................................................................ 220

Figure 13.4 Number of hits per two hit match percentage ........................................................ 220

Figure 13.5 Collaborative filtering-based trust and personality performance ............................. 222

Figure 13.6 Trust-based trust and personality performance ..................................................... 223

Figure 13.7 Average individual user satisfaction ...................................................................... 225


Figure 13.8 Average group satisfaction ................................................................................... 225

Figure 13.9 Trust-PerTrust top-N and similarity configuration ................................................... 227

Figure 13.10 Trust-PerTrust aggregation configuration .............................................................. 228

Figure 14.1 PerTrust responsive web design layout ................................................................. 231

Figure 14.2 PerTrust database – Entity framework .................................................................. 234

Figure 14.3 PerTrust prototype – Basic details component ...................................................... 234

Figure 14.4 Class diagram – Basic details component functionality.......................................... 234

Figure 14.5 PerTrust prototype – Personality information component - TKI test ....................... 235

Figure 14.6 PerTrust prototype – Personality information component - Personality type .......... 236

Figure 14.7 Class Diagram – Personality information component functionality ......................... 236

Figure 14.8 PerTrust prototype – Social relations information component ................................ 237

Figure 14.9 Class Diagram – Social relations component functionality ..................................... 237

Figure 14.10 PerTrust prototype – Social relations component functionality – JSON Notation .... 238

Figure 14.11 PerTrust prototype – Rating history information component .................................. 239

Figure 14.12 Class diagram – Rating history information component functionality ...................... 239

Figure 14.13 PerTrust prototype – Group recommendation – Adding group members ............... 240

Figure 14.14 Class diagram – Generate group recommendation ............................................... 241

Figure 14.15 Class diagram – Preference elicitation component ................................................ 242

Figure 14.16 Class diagram – Aggregation ................................................................................ 246

Figure 14.17 PerTrust prototype – Group recommendation – Final group recommendation ....... 248


List of tables

Table 2.1 Advantages and disadvantages of recommender system types .................................... 16

Table 3.1 Activity matrix representing a trust relationship between Adam and Ben ....................... 34

Table 5.1 Trust-based weighted mean with TidalTrust evaluation ................................................. 65

Table 5.2 Trust-based collaborative filtering with MoleTrust evaluation ......................................... 70

Table 5.3 Trust-based filtering with profile and item level trust evaluation ..................................... 76

Table 5.4 Structural trust inference evaluation .............................................................. 83

Table 5.5 EnsembleTrustCF evaluation ....................................................................................... 87

Table 5.6 Summary of trust-based algorithm evaluation results .................................................... 88

Table 6.1 Epinions products and reviews dataset ........................................................................ 93

Table 6.2 Baseline algorithms used for evaluation on Epinions dataset ........................................ 97

Table 6.3 Victor’s (2010) results on the Epinions reviews dataset ................................................. 98

Table 6.4 Victor’s (2010) results on the Epinions products dataset ............................................... 99

Table 6.5 O’Doherty’s (2012) results on the Epinions products dataset ...................................... 100

Table 7.1 Preference elicitation: Variables for example application ............................................. 114

Table 7.2 Example application: Rating and trust profiles for Ben, Franck, and Greg.................... 115

Table 7.3 Example application: Ben’s top-4 recommendation list ................................................ 115

Table 7.4 Example application: Ben’s top-4 trust amended recommendation list......................... 115

Table 7.5 Example application: Filtered top-4 recommendations for Ben, Franck, and Greg ....... 116

Table 8.1 Scenario – Adam’s top-4 recommendation list ............................................................ 121

Table 8.2 Scenario - Top-4 recommendation list for each group member ................................... 121

Table 8.3 Scenario – Rating matrix ............................................................................................ 122

Table 8.4 Assertiveness and cooperativeness personality type mappings .................................. 127

Table 8.5 Example application: Dominant and least dominant personalities for Adam’s group .... 138

Table 8.6 Example application: Assertiveness and cooperativeness personality type mappings . 139

Table 8.7 Example application: CMW values for Adam’s group .................................................. 140

Table 8.8 Example application: Group trust scores ..................................................................... 141

Table 8.9 Example application: Group TPDBR scores ................................................................ 143

Table 8.10 Example application: Aggregated TPDBR scores ....................................................... 143

Table 8.11 Example application: Final group recommendation ..................................................... 144

Table 9.1 General aggregation models: Scenario ....................................................................... 147

Table 9.2 General aggregation models: Additive utilitarian model ............................................... 149

Table 9.3 General aggregation models: Multiplicative utilitarian model ....................................... 150

Table 9.4 General aggregation models: Average model ............................................................. 150

Table 9.5 General aggregation models: Average without misery model ...................................... 151

Table 9.6 General aggregation models: Least misery model ...................................................... 152

Table 9.7 General aggregation models: Most pleasure model .................................................... 152

Table 9.8 General aggregation models: Most pleasure model .................................................... 154

Table 9.9 General aggregation models: Approval voting model .................................. 155


Table 9.10 General aggregation models: Borda count model ..................................................... 156

Table 9.11 General aggregation models: Copeland rule model ................................................... 157

Table 9.12 General aggregation models: Summary of results ...................................................... 158

Table 10.1 Individual satisfaction: Rating valuation mapping table ............................................... 166

Table 10.2 Example application: Adam’s group recommendation ................................................ 169

Table 10.3 Example application: Adam’s top-3 group recommendation ....................................... 169

Table 10.4 Example application: Adam’s personal recommendation list ...................................... 170

Table 10.5 Example application: Adam’s group rating valuation mapping table ........................... 170

Table 12.1 Assertiveness and cooperativeness personality type mappings ................................. 203

Table 12.2 Satisfaction – Numeric rating scale definition ............................................................. 207

Table 13.1 Dataset summary ...................................................................................................... 212


Chapter 1 Introduction


1.1 Introduction

Since the inception of the Internet, there has been an exponential increase in the amount of content available online. This point is noted by Frank Gens (2012) of the International Data Corporation (IDC) in his report focusing on the top 10 IT predictions for the year 2013. In this report, 1000 IDC analysts were queried about their IT predictions for the year 2013, with the top 10 outcomes presented (Gens, 2012). One of the predictions of this report is that the total amount of data created and replicated online will reach an estimated 4 trillion gigabytes (Gens, 2012). This represents a near doubling of the total data created and replicated online in the year 2012 and almost four times that of the year 2010 (Gens, 2012). There are two main reasons for this exponential rise in online content.

The first reason is that there has been a dramatic increase in the number of online users. This is highlighted in a recent report produced by the International Telecommunication Union (ITU), the information and communication technologies (ICT) agency of the United Nations. In this report, it is estimated that a total of 2.7 billion people will be making use of the Internet by the year 2013 (International Telecommunication Union, 2013). In the year 2006, this figure stood at 1.17 billion people, less than half of the estimated number of users in 2013 (International Telecommunication Union, 2011).

A second reason is the widespread use of social media to upload content online. In their annual presentation on the Internet trends for 2013, Meeker and Wu (2013) revealed that more than 500 million photographs are uploaded and shared daily via social media such as Facebook and Instagram, 100 hours of video content are uploaded per minute via YouTube, and 11 hours of sound content are uploaded per minute via SoundCloud.

While the availability of and access to this content has its advantages, it has resulted in a problem commonly known as the information overload problem (Bhuiyan, 2011; Massa & Avesani, 2007b; Massa & Avesani, 2009; Quan & Hinze, 2008; Ricci et al., 2011). The problem is that the large amount of content online has made it difficult for online users to find content that is both pertinent and relevant to their preferences and needs (Al Falahi et al., 2012; Bhuiyan, 2011; Massa & Avesani, 2007b; Quan & Hinze, 2008; Ricci et al., 2011). For example, if an online user searches for a tourist destination to visit, where would they start and whose recommendations would be reliable and trustworthy? How would an online user know if a particular tourist destination meets their personal needs and preferences? (Ricci et al., 2011; Victor, 2010)

A solution to the information overload problem is recommender systems (Bhuiyan, 2011; Massa & Avesani, 2007b). Recommender systems consider the personal preferences of the system user in order to provide them with a list of personalised recommendations (Al Falahi et al., 2012; Lops et al., 2011; Ricci et al., 2011). The system user can then use this list of recommendations as the basis for a decision or selection (Ricci et al., 2011). An example of a recommender system is the one used by Amazon.com, which recommends product items to a system user based upon the purchase history of other system users who have bought the same product (Al Falahi et al., 2012; Lops et al., 2011).

Typically, recommender systems are split into two main categories (Chen et al., 2008). The first category is individual recommender systems, which recommend items to individuals based upon the preferences of a specific individual (Chen et al., 2008; Popescu & Pu, 2011). This is a well-researched area of recommender systems, with numerous individual recommender system applications developed (Baltrunas et al., 2010; Herr et al., 2012).

A second category of recommender systems is group recommender systems (Chen et al., 2008). These recommender systems recommend items to groups of system users based upon the preferences of each system user individually as well as the entire group collectively (Garcia et al., 2012; Herr et al., 2012; Kim et al., 2010; Popescu & Pu, 2011). This is a more recent research area and has evolved out of the consideration that many contexts are relevant to both individual and group recommendation (Masthoff, 2011; Salamó et al., 2012). An example of such a context is the recommendation of tourist attractions, which are often visited with other people. Because this is a more recent area of research, there are fewer group recommender systems in comparison to individual recommender systems (Cantador & Castells, 2012).

In evaluating both categories of recommender systems, it is noted that group recommender systems are the more challenging of the two. There are two main reasons for this.

Complexity. It is more complex for a recommender system to determine a satisfactory list of recommendations for a group than it is for an individual (Cantador & Castells, 2012; Chen et al., 2008; Masthoff, 2011; Popescu & Pu, 2011). This is because the preferences of each system user in the group have to be considered, as opposed to the preferences of a single system user (Amer-Yahia et al., 2009; Berkovsky & Freyne, 2010; Carvalho & Macedo, 2013; Gartrell et al., 2010; Garcia et al., 2012; Jameson & Smyth, 2007; Pera & Ng, 2012; Popescu & Pu, 2011).

Social influences. Group recommender systems have to consider the social influences of the group, whereas individual recommender systems do not (Chen et al., 2008; Gartrell et al., 2010; Jameson & Smyth, 2007; Quijano-Sanchez et al., 2013). An example of a social influence is the relationship strength between the system users in the group. These social influences contribute to whether a group recommendation is satisfactory or not (Baltrunas et al., 2010).

The purpose of this research is to propose a generic model for a group recommender system which meets both of the aforementioned considerations through the application of trust and personality. The application of personality caters for the complexity and social influences within the group. By defining the individual personality of each system user in the group, a group recommender system can cater for these by determining recommendations which are considerate of each user's personality. For example, assume that one system user has a demanding personality type and that another system user has an easy-going personality type. In this scenario, a satisfactory group recommendation is one that satisfies the needs of the system user with the demanding personality type. The system user with the easy-going personality type is more likely to be satisfied with this recommendation, as long as everyone in the group is happy.

The application of trust caters for the complexity within the group. If a system user trusts the recommendations given by another system user in the group, then that system user is more likely to agree with the recommendation. Therefore, trust is a measure of similarity (Golbeck, 2005). Moreover, trust is also an indicator of reliability (Golbeck, 2005; Quijano-Sanchez et al., 2013): because another system user is trusted, their recommendations can be relied upon. As a result, if trust can be determined between the system users in the group, then the highest rated recommendations attributed by the most trusted system users can be considered as potential group recommendations.
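To make this idea concrete, the sketch below shows, in Python, one minimal way of surfacing candidate group recommendations by weighting each rating by the trust the group places in the rater. It is purely illustrative: the function, data structures, and values are hypothetical, and the actual preference elicitation and aggregation algorithms used by PerTrust are developed in Chapters 7 to 9.

from collections import defaultdict

def trust_weighted_candidates(ratings, trust, top_n=3):
    # Rank items by their trust-weighted average rating.
    # ratings: dict mapping (rater, item) -> rating on a numeric scale.
    # trust:   dict mapping (member, rater) -> trust score in [0, 1].
    weighted_sum = defaultdict(float)   # sum of trust * rating per item
    weight_total = defaultdict(float)   # sum of trust weights per item
    for (rater, item), rating in ratings.items():
        # Weight the rating by the average trust the group places in this rater.
        trust_in_rater = [t for (member, target), t in trust.items() if target == rater]
        avg_trust = sum(trust_in_rater) / len(trust_in_rater) if trust_in_rater else 0.0
        weighted_sum[item] += avg_trust * rating
        weight_total[item] += avg_trust
    scores = {item: weighted_sum[item] / weight_total[item]
              for item in weighted_sum if weight_total[item] > 0}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Ben is trusted more than Greg, so the item Ben rates highly (the Zoo) ranks first.
ratings = {("Ben", "Zoo"): 5, ("Greg", "Zoo"): 2, ("Ben", "Museum"): 2, ("Greg", "Museum"): 5}
trust = {("Adam", "Ben"): 0.9, ("Adam", "Greg"): 0.3}
print(trust_weighted_candidates(ratings, trust, top_n=2))   # ['Zoo', 'Museum']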

1.2 Motivation

The motivation behind this research and the proposed group recommender model is set out below.

Open research area

Group recommendation is a more recent area of research (Cantador & Castells, 2012; Masthoff, 2011). Therefore, there are many open challenges and considerations (Cantador & Castells, 2012). As a result, it is the motivation of this research to propose a group recommender model which seeks to contribute to this research area by providing a plausible and reasonable solution to some of these open challenges and considerations.

Application of personality

Most group recommender systems cater for the social influences of a group through aggregation. Aggregation is the process whereby the individual preferences or recommendations of each system user within the group are merged together to form a single list of preferences or recommendations for the group (Jameson & Smyth, 2007). However, this approach often does not consider the specific social influence of each individual system user (Chen et al., 2008; Quijano-Sanchez et al., 2013). Therefore, it is the motivation of this research to propose a group recommender model which does cater for these social influences through the implementation of personality.
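As a brief, purely illustrative aside, the Python sketch below shows two of the classic aggregation models surveyed in Chapter 9, the average model and the least misery model, applied to a small hypothetical rating matrix. It is not the aggregation strategy PerTrust ultimately adopts; that choice is motivated in Chapter 9.

def average_aggregation(rating_matrix):
    # Score each item by the mean of the group members' predicted ratings.
    return {item: sum(ratings) / len(ratings) for item, ratings in rating_matrix.items()}

def least_misery_aggregation(rating_matrix):
    # Score each item by the lowest rating in the group, so that no member is left miserable.
    return {item: min(ratings) for item, ratings in rating_matrix.items()}

# Hypothetical predicted ratings (scale 1-5) of three group members for three attractions.
rating_matrix = {"Zoo": [4, 5, 2], "Museum": [3, 3, 4], "Aquarium": [5, 2, 5]}

print(average_aggregation(rating_matrix))       # Zoo ~3.67, Museum ~3.33, Aquarium 4.0
print(least_misery_aggregation(rating_matrix))  # Zoo 2, Museum 3, Aquarium 2

Under the average model the Aquarium wins, while under least misery the Museum wins, which illustrates how the choice of aggregation model changes whose preferences dominate the final group recommendation.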

Application of trust

Another limited area of research is the application of trust within group recommender systems. Trust has been widely used and implemented within individual recommender systems. However, for group recommender systems, there are few systems which implement trust (Quijano-Sanchez et al., 2013). Therefore, it is the motivation of this research to propose an implementation of trust within the processes of group recommendation.

1.3 Research methodology

The research methodology applied in this dissertation is a hybrid of the experimental research method and the model research method (Elio et al., n.d.; Freitas, 2009; Kothari, 2004).

The experimental research method is defined as a method consisting of two distinct phases (Elio et al., n.d.). In the first phase, an exploratory approach is taken whereby a list of research questions is defined for the given research at hand (Elio et al., n.d.). In the second phase, an experimental approach is taken whereby the research questions are answered through an implementation or experiment of some sort (Elio et al., n.d.; Kothari, 2004). The model research method is defined as a method of research with the purpose of defining a model or prototype (Elio et al., n.d.). This model or prototype can then be used as a basis for evaluating the proposed research (Elio et al., n.d.).

Therefore, in this dissertation, the research methodology adopted is to initially define a problem statement with a list of research questions. The purpose of this problem statement is to justify the proposed research. This follows the exploratory, first phase of the experimental research method. Thereafter, the research questions are evaluated throughout the dissertation with the purpose of deriving a model for implementation. This follows the second phase of the experimental research method, the experimental approach, as well as the model research method.

1.4 Problem statement

The main purpose of this research is to propose a generic model for a group recommender system.

The proposed model is entitled PerTrust since this research evaluates the impact of personality and

trust in ensuring group recommendations which are both satisfactory to and considerate of the social

influences within a group. It is the aim of the proposed group recommender model to address this

problem area by considering the following research questions.

1. What are the specific requirements of group recommender systems which have to be

considered?

In order to propose a group recommender model, a list of requirements is derived. This list of

requirements is identified through a literature overview of other proposed group recommender

system models. The identified set of requirements is then used as a basis for evaluating the

proposed group recommender model.

2. How do current group recommendation models aim to meet these requirements?

In order to propose a group recommender model, an overview is needed of how other researchers have met the requirements for group recommender

systems. The results of this research are used as a basis for the proposed group

recommender model.

3. How does the PerTrust prototype meet the specific requirements of group

recommendation?

The proposed group recommender model meets the requirements of group recommendation

by considering the implementation of both personality and trust. However, this raises a

number of research questions to investigate.

a. How can trust and personality be implemented within the process of group

recommendation?

Trust and personality are relatively intangible concepts. Therefore, the

implementation for trust and personality within a group recommender system is

defined for the proposed model.

b. How can trust be determined between two system users who do not know one

another?

In trust-based recommender systems, there are often cases where a trust valuation

needs to be determined between two strangers. Consequently, an implementation

needs to be defined so that trust can be inferred.

c. What is the effect of trust and personality in the group recommendation

process?

In the proposed group recommender model, the impact of trust and personality is

individually and collectively determined to evaluate the effect of both in the final group

recommendation output by the model.

d. How can the satisfaction with a group recommendation be determined both

individually and collectively as a group?

In the proposed group recommender model, satisfaction is an important consideration

as a group recommendation needs to meet the preferences of each system user.

This research details how satisfaction can be calculated and implemented.

1.5 Important terms

In order to ensure a common understanding, the following terms used in this dissertation are now defined: personality, similarity, trust, and trust network.

Personality

For this research, the personality of a system user follows the application of the Thomas-Kilman

Instrument (TKI) test, as implemented by Quijano-Sanchez et al. (2013). The TKI test is a multiple

choice, A or B, personality test used to determine a person’s personality type in a conflict scenario

(Quijano-Sanchez et al., 2013; Recio-García et al., 2009; Schaubhut, 2007). The result of this test reflects one of five personality types: competing, collaborating, compromising, accommodating, and avoiding (CPP, Inc., 2009; Quijano-Sanchez et al., 2013; Recio-García et al., 2009; Schaubhut, 2007). Quijano-Sanchez et al. (2013) and Recio-García et al. (2009) extend these personality types such that each personality type is represented by a numeric value.
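To make this numeric representation concrete, a minimal sketch is given below in Python. The scale and the specific values used are illustrative assumptions only; they are not the mapping defined by Quijano-Sanchez et al. (2013) or Recio-García et al. (2009).

```python
# A minimal sketch of representing the five TKI conflict-handling types numerically.
# The 0.0-1.0 scale and the values below are illustrative assumptions only; they are
# not the mapping defined by Quijano-Sanchez et al. (2013) or Recio-Garcia et al. (2009).

# Hypothetical weights: higher values indicate a more assertive conflict-handling style.
TKI_PERSONALITY_VALUES = {
    "avoiding":      0.1,
    "accommodating": 0.3,
    "compromising":  0.5,
    "collaborating": 0.7,
    "competing":     0.9,
}

def personality_value(tki_type: str) -> float:
    """Return the assumed numeric value for a TKI conflict-handling type."""
    return TKI_PERSONALITY_VALUES[tki_type.lower()]

if __name__ == "__main__":
    print(personality_value("competing"))  # 0.9 under the assumed scale
```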

Similarity

Similarity is defined as a measure of how similar two system users are. In this research, similarity is determined by how similar two users' rating histories are; that is, by how closely the ratings attributed by both users to the same items match (Ricci et al., 2011).
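One commonly used measure of rating-history similarity in the recommender systems literature is the Pearson correlation computed over the items that both users have rated. The sketch below illustrates this measure in Python; the data layout and function names are illustrative assumptions and not necessarily the formulation adopted later in this research.

```python
# A minimal sketch of rating-history similarity using Pearson correlation over
# co-rated items. The {item: rating} data layout and the names are illustrative
# assumptions for this sketch.
from math import sqrt

def pearson_similarity(ratings_a: dict, ratings_b: dict) -> float:
    """Similarity of two users, each given as an {item_id: rating} dictionary."""
    common = set(ratings_a) & set(ratings_b)          # items rated by both users
    if len(common) < 2:
        return 0.0                                    # not enough overlap to compare
    mean_a = sum(ratings_a[i] for i in common) / len(common)
    mean_b = sum(ratings_b[i] for i in common) / len(common)
    cov = sum((ratings_a[i] - mean_a) * (ratings_b[i] - mean_b) for i in common)
    var_a = sum((ratings_a[i] - mean_a) ** 2 for i in common)
    var_b = sum((ratings_b[i] - mean_b) ** 2 for i in common)
    if var_a == 0 or var_b == 0:
        return 0.0                                    # no variance, correlation undefined
    return cov / (sqrt(var_a) * sqrt(var_b))          # value in [-1, 1]

# Example: two users whose ratings for the same items closely match.
alice = {"museum": 4, "zoo": 2, "theatre": 5}
bob = {"museum": 5, "zoo": 1, "theatre": 4}
print(round(pearson_similarity(alice, bob), 2))       # approximately 0.84 for this toy data
```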

Trust

The researcher defines trust as the commitment by a user A to a specific action, within a specific context, in the subjective belief that the future actions undertaken by another user B will result in a good outcome, while accepting that negative consequences may nevertheless occur (Golbeck, 2005).

Trust Network

A trust network is a social network structure in which the trust valuations linking system users are

visually presented (Victor, 2010).

1.6 Layout of dissertation

The layout of this dissertation consists of two parts, Part I and Part II. Part I is split into a further two

sections, Part I(a) and Part I(b). However, before the main parts of the dissertation are defined, a

background chapter on group recommender systems is presented in Chapter 2.

Chapter 2 presents a chapter on group recommender systems. The purpose of this chapter is

to provide a background to the topic of group recommender systems with the intention of

identifying a set of requirements for implementation in the proposed group recommender

model.

Part I(a) of this dissertation defines a trust implementation framework for the purposes of this

research. The defined trust implementation framework is used in the proposed model as the means

for representing and calculating trust. This is covered in Chapters 3 to 6.

Chapter 3 presents a background discussion on trust networks and trust networking

concepts. The purpose of this chapter is to assist with an understanding of trust networks.

The importance of this knowledge is that these concepts are used as a foundation in the

chapters on trust and group recommendation.

Chapter 4 presents a background discussion on trust. This chapter and the two that follow

constitute the definition of a trust implementation framework for the proposed group

recommender model. This trust implementation framework is used as the basis for all trust-

based calculations. Therefore, the emphasis of this chapter is to begin the definition of this

framework by defining trust in recommender systems and identifying, from a high level

perspective, how trust can be calculated between strangers by leveraging trust networks.

Chapter 5 continues with the definition of a trust implementation framework by detailing the

numerous methods of implementing trust within recommender systems. Specifically, this

chapter evaluates a number of state-of-the-art algorithms for calculating trust as well as how

these algorithms can be applied in predicting a rating for a user for a specific item. The output

of this chapter is a suitable trust algorithm for use in the trust implementation framework.

However, this proposed algorithm is still to be evaluated empirically. This empirical evaluation

occurs in chapter 6.

Chapter 6 concludes the definition of a trust implementation framework for this research by

performing an empirical evaluation of all the state-of-the-art trust algorithms defined in chapter

5. The purpose of this evaluation is to determine a candidate trust and trust-based rating

prediction algorithm for the trust implementation framework to be adopted in the proposed

recommender system model.

Part I(b) of this dissertation describes a number of group recommendation processes to implement

the defined trust implementation framework. These group recommendation processes are defined in

Chapters 7 to 10.

Chapter 7 presents the first process of group recommendation: preference elicitation.

Preference elicitation is the group recommendation process used to determine a top-N list of

recommendations for each individual group member. Therefore, this chapter details a trust-

based methodology of deriving a list of top-N recommendations for each member of the

group.

Chapter 8 presents the second process of group recommendation, namely group-based

rating prediction. This chapter defines a personality implementation framework and details

how both the personality and trust implementation frameworks combine to cater for the social

influences and preferences of each system user in the group. Specifically, this chapter details

how the individual top-N lists of recommendations, formed in chapter 7, are affected with both

personality and trust to cater for the social influences of the group. The result of this chapter is

a list of top-N group recommendations affected with the personality and trust considerations

of the group.

Chapter 9 presents the third process of group recommendation: aggregation. This chapter

details how a group recommendation is derived from the individual top-N recommendation

lists defined in chapter 8. As a result, a number of aggregation models are presented and

detailed in this chapter. While there is no conclusive aggregation model which fits all

scenarios, a number of observations are made.

Chapter 10 presents the fourth and last process of group recommendation: satisfaction. This

chapter reviews a number of algorithms to determine both individual and group satisfaction.

Both of these elements are important as they are indicators as to whether the system

generated group recommendation meets the needs of each individual user as well as the

group as a whole. The conclusion of this chapter is a nominated algorithm for measuring

individual satisfaction and a nominated algorithm for measuring group satisfaction.

Part II of this dissertation defines a personality and trust-based group recommender model, PerTrust.

This model incorporates the defined trust implementation framework and group recommendation

process results from Parts I(a) and I(b) of the dissertation. The model definition is covered in

Chapters 11 to 15.

Chapter 11 presents the PerTrust model architecture. This chapter defines each component

relevant for the PerTrust architecture in determining group recommendations.

Chapter 12 presents the PerTrust model. This chapter formally defines the PerTrust model

and its relevant components required for group recommendation. This model definition is

based on the PerTrust architecture as it formally defines the implementation of group

recommendation.

Chapter 13 presents an evaluation of the PerTrust model. This chapter presents an empirical

analysis of the PerTrust model in comparison to other group recommender models. The

intention of this chapter is to analyse the results of the evaluation and determine whether the

PerTrust model is a viable model for group recommendation.

Chapter 14 presents a prototype of the PerTrust model. This chapter details an online

prototype of the PerTrust model with all of the underlying technologies and practical

implementations presented.

Chapter 15 presents a conclusion to this dissertation. This chapter reviews the research

questions proposed and determines whether the proposed PerTrust model adequately

answers these. Thereafter, this research is summarised for the purposes of deriving final

conclusions, research contributions, research limitations, and identifying further potential

research areas.

The layout of this dissertation is given in Figure 1.1 below.

[Figure 1.1 shows the dissertation layout: Chapter 1 (Introduction) and Chapter 2 (Group recommender systems), followed by Part I(a) – Trust implementation framework (Chapters 3 to 6), Part I(b) – Group recommendation process definition (Chapters 7 to 10), and Part II – The PerTrust model (Chapters 11 to 15).]

Figure 1.1: Dissertation layout

Chapter 2 Group recommender systems

2.1 Introduction

Group recommender systems are a category of recommender system used to generate

recommendations to a group of system users. The other category of recommender system is

individual recommender systems. These systems generate recommendations to single system users

(Chen et al., 2008; Popescu & Pu, 2011). While much research has been done in the field of

individual recommender systems, group recommender systems present a more novel area of

research (Cantador & Castells, 2012; Herr et al., 2012; Masthoff, 2011).

Group recommender systems are relevant in situations where multiple system users come together

over a common activity or event (Cantador & Castells, 2012). Examples of where group recommendation could apply are deciding with friends which movie to watch, or agreeing with family on which tourist attraction to visit while on holiday. In such scenarios, it is the purpose of the group recommender system to

provide decision support by considering the preferences of each system user in the group and aiding

them in coming to a final decision.

While aiding a group of system users in coming to a consensus over a decision is the purpose of the

group recommender system, it is a complex task. There are two main reasons for this.

Each system user in the group has their own personality and personal preferences which

need to be collectively considered for a group recommendation (Amer-Yahia et al., 2009;

Berkovsky & Freyne, 2010; Carvalho & Madedo, 2013; Gartrell et al., 2010; Garcia et al.,

2012; Jameson & Smyth, 2007; Pera & Ng, 2012).

The group recommendation determined by the system needs to simultaneously satisfy the

needs of the individual as well as the group as a whole (Amer-Yahia et al., 2009; Baltrunas et

al., 2010; Carvalho & Madedo, 2013; Gartrell et al., 2010; Garcia et al., 2012; Jameson &

Smyth, 2007; Pera & Ng, 2012; Salamó et al., 2012).

The purpose of this chapter is to provide a background to the topic of group recommender systems

with the intention of identifying a set of requirements for the group recommender model proposed in

this research so that these two complexities can be catered for.

In order to identify a set of group recommender system requirements for this research, the chapter is

structured as follows. The first section provides a formal definition of both recommender and group

recommender systems. Thereafter, a recommender system type is motivated for the purposes of a

group recommender system. Next, the topic of group recommendation is defined with the various

processes of group recommendation discussed. Following this is a presentation of all work related to

the area of trust-based group recommender systems. This related work results in the definition of a

list of requirements for a group recommender system. In the last section, the chapter is concluded.

2.2 Defining a group recommender system

A formal definition of a group recommender system is now provided. However, since a group

recommender system is a category of recommender system, a definition of a recommender system is

required first. Both of these definitions are formally presented below.

2.2.1 Recommender system

Within literature, there is a commonly held understanding and definition of a recommender system, formally presented in Definition 2.1 below.

Definition 2.1: Recommender system

A recommender system is defined as a system which assists one or more system user(s) in making a final decision with regards to the selection of a specific recommendation item. This is done by providing a set of personalised and relevant recommendation item suggestions to the system user(s) (Massa & Avesani, 2009; Ricci et al., 2011; Victor, 2010).

An important consideration to note within Definition 2.1 is that recommendation item suggestions are personalised. A personalised recommendation item is one that meets the particular preferences and needs of one or more system users. The implementation of personalisation within a recommender system is detailed further in the sections that follow.

2.2.2 Group recommender system

Group recommender systems are defined as a type of recommender system which generates recommendations for a group of people by assimilating the preferences of each individual member within the group (Garcia et al., 2012; Herr et al., 2012; Kim et al., 2010; Popescu & Pu, 2011). The purpose of this is to ensure that the recommendation returned by the system satisfies each individual group member, as far as possible (Garcia et al., 2012; Kim et al., 2010; Popescu & Pu, 2011). This is formally defined in Definition 2.2 below.

Definition 2.2: Group recommender system

A group recommender system is defined as a category of recommender system which assists a group of system users in coming to a collective consensus as to the selection of a specific recommendation item. This is done by providing a set of satisfactory and relevant group recommendation item suggestions which consider the personal preferences and social influences of each system user in the group (Garcia et al., 2012; Kim et al., 2010; Popescu & Pu, 2011).

Definition 2.2 provides a number of important considerations with regards to group recommender

systems.

Consensus. A group recommender system is to assist a group of system users in coming to

a final agreement with regards to the selection of a specific item (Cantador & Castells, 2012;

Salamó et al., 2012).

Satisfaction. The group recommendations generated by the group recommender system

should satisfy the preferences and needs of each individual system user (Chen et al., 2008;

Garcia et al., 2012; Popescu & Pu, 2011).

Personal preferences and social influences. The group recommender system must cater

for the personal preferences of each system user as well as the social influences of each

system user in the group (Amer-Yahia et al., 2009; Carvalho & Madedo, 2013; Gartrell et al.,

2010; Garcia et al., 2012; Jameson & Smyth, 2007; Pera & Ng, 2012).

In the section following, the group recommender system definition is further detailed by defining and

motivating the type of recommender system to be implemented in the proposed recommender system

model.

2.3 Types of recommender systems

Within both individual and group recommender systems, there are three main types: collaborative

filtering-based, content-based, and trust-based recommender systems. These recommender system

types present a high level method and framework for deriving recommendations. Therefore, once

these types have been defined, a group recommender system type is motivated for this research.

2.3.1 Content-based recommender systems

Content-based recommender systems determine recommendations based upon the recommendation

items previously rated by a system user (Al Falahi et al., 2012; Bhuiyan, 2011; Chen et al., 2008;

Lops et al., 2011; O’Doherty, 2012; Quan & Hinze, 2008; Victor, 2010). Therefore, in order to

determine a recommendation, the system would analyse these previously rated recommendation

items and return a list of recommendation items similar to those already rated by the user (Al Falahi et

al., 2012; Bhuiyan, 2011; Chen et al., 2008; O’Doherty, 2012; Victor, 2010). For example, if a user

has visited a museum detailing South African history, then the recommendation determined by the

system for that user would return a list of all museums specialising in South African history (Victor,

2010).
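To illustrate the content-based approach at a high level, the sketch below scores candidate items by the overlap between their descriptive attributes and those of items the user has already rated highly. The attribute sets, the Jaccard overlap measure, and all names are illustrative assumptions rather than a specific content-based algorithm from the literature.

```python
# A minimal sketch of content-based recommendation: candidate items are scored by
# how much their descriptive attributes overlap (Jaccard similarity) with items the
# user has already rated highly. All names and data are illustrative assumptions.

def jaccard(a: set, b: set) -> float:
    """Overlap between two attribute sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def content_based_recommend(liked_items: list, candidates: dict, top_n: int = 3) -> list:
    """liked_items: attribute sets of items the user rated highly (non-empty);
    candidates: {item_id: attribute set} of items the user has not yet rated."""
    scores = {
        item: max(jaccard(attributes, liked) for liked in liked_items)
        for item, attributes in candidates.items()
    }
    return sorted(scores.items(), key=lambda pair: pair[1], reverse=True)[:top_n]

liked = [{"museum", "history", "south-africa"}]
candidates = {"war_museum": {"museum", "history"},
              "theme_park": {"rides", "family"}}
print(content_based_recommend(liked, candidates, top_n=1))  # [('war_museum', 0.666...)]
```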

2.3.2 Collaborative filtering-based recommender systems

The most commonly implemented recommender systems are collaborative filtering-based

recommender systems (Bhuiyan, 2011; Chen et al., 2008; Lops et al., 2011; O’Doherty, 2012; Victor,

2010). This recommender system works by returning those recommendations which have been

attributed high rating scores by similar system users (Al Falahi et al., 2012; Bhuiyan, 2011; Chen et

al., 2008; O’Doherty, 2012; Quan & Hinze, 2008; Victor, 2010).

In collaborative filtering-based recommender systems, similarity is often based upon how similarly

matched the rating histories of two system users are for the same set of recommendation items

(Bhuiyan, 2011; Quan & Hinze, 2008). If there is little to no difference, then it is deemed that two

users are similar. If there is a large difference, then the system determines that the two users are

dissimilar (Bhuiyan, 2011).

2.3.3 Trust-based recommender systems

Trust-based recommender systems work similarly to collaborative filtering-based recommender

systems from a high level perspective. The key difference between these types, however, is that trust

is used as the basis for a system recommendation instead of similarity (O'Doherty, 2012; Victor, 2010;

Victor et al., 2011). Therefore, the recommendation items returned by the system are based on how

much one system user trusts the rating valuations attributed to a set of recommendation items by

another system user (Victor, 2010; Victor et al., 2011).
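To illustrate this idea, a minimal sketch of trust-weighted rating prediction is given below: the predicted rating of an item for a target user is the average of other users' ratings for that item, weighted by how much the target user trusts each of them. The data layout and names are illustrative assumptions and not the specific trust-based algorithms evaluated later in this research (such as TidalTrust, MoleTrust, or EnsembleTrustCF).

```python
# A minimal sketch of trust-weighted rating prediction. The data layout and names
# are illustrative assumptions, not the specific algorithms evaluated later in this
# research (such as TidalTrust, MoleTrust, or EnsembleTrustCF).
from typing import Optional

def predict_rating(target_user: str, item: str, trust: dict, ratings: dict) -> Optional[float]:
    """trust[target_user][other] is a trust weight in [0, 1];
    ratings[other][item] is the rating that 'other' gave to 'item'."""
    weighted_sum, weight_total = 0.0, 0.0
    for other, trust_value in trust.get(target_user, {}).items():
        if trust_value > 0 and item in ratings.get(other, {}):
            weighted_sum += trust_value * ratings[other][item]
            weight_total += trust_value
    if weight_total == 0:
        return None  # no trusted user has rated this item
    return weighted_sum / weight_total

trust = {"alice": {"bob": 0.8, "carol": 0.2}}
ratings = {"bob": {"museum": 5}, "carol": {"museum": 2}}
print(predict_rating("alice", "museum", trust, ratings))  # 4.4 = (0.8*5 + 0.2*2) / 1.0
```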

2.3.4 Motivating a type of recommender system

A recommender system type is now motivated for this research based on an evaluation of the

advantages and disadvantages of each recommender system type. A table summarising each of

these recommender system types as well as their corresponding advantages and disadvantages is

presented in Table 2.1 below. Thereafter, each recommender system type is evaluated in greater

detail.

Content-based
Description: Determines recommendations based upon the recommendation items previously rated by a system user (Al Falahi et al., 2012; Bhuiyan, 2011; Chen et al., 2008; Lops et al., 2011; O'Doherty, 2012; Quan & Hinze, 2008; Victor, 2010).
Advantages: Recommendation items can be determined irrespective of whether they have been experienced and engaged with by other system users (Lops et al., 2011).
Disadvantages: Items are limited to those already rated by the system user (Al Falahi et al., 2012; Lops et al., 2011; O'Doherty, 2012; Quan & Hinze, 2008; Victor, 2010); cold start problem (Bhuiyan, 2011; Lops et al., 2011; Quan & Hinze, 2008).

Collaborative filtering-based
Description: Determines recommendations which have been attributed high rating scores by similar system users (Al Falahi et al., 2012; Bhuiyan, 2011; Chen et al., 2008; O'Doherty, 2012; Quan & Hinze, 2008; Victor, 2010).
Advantages: Greater diversity in the set of recommendation items returned (Victor, 2010); caters for the drawback of the content-based recommender system (Bhuiyan, 2011).
Disadvantages: Cold start problem (Massa & Avesani, 2009; Quan & Hinze, 2008); reliability of few rated items (Massa & Avesani, 2009).

Trust-based
Description: Determines recommendations based on how much one system user trusts the rating valuations attributed to a set of recommendation items by another system user (O'Doherty, 2012; Victor, 2010; Victor et al., 2011).
Advantages: Trust is a personal measure of relationship (Bhuiyan, 2011; Golbeck, 2004; Johnson et al., 2011; Massa & Avesani, 2007a); assists in overcoming the cold start problem (Massa & Avesani, 2007a; Massa & Avesani, 2009; Quan & Hinze, 2008); ensures a large number and diversity of recommendation items.
Disadvantages: Cold start issue.

Table 2.1: Advantages and disadvantages of recommender system types

a) Evaluating content-based recommender systems

In evaluating a content-based recommender system, the following advantages and

disadvantages are identified.

Advantage: Recommendation items can be determined irrespective of whether they

have been experienced and engaged with or not by other system users (Lops et al.,

2011). The reason for this is that the recommendation items returned by the system are

based on the attributes of the recommendation item itself and not on how that item has been evaluated by

other system users (Lops et al., 2011).

Disadvantage: Recommendation items are limited to those types of

recommendation items already rated by a system user (Al Falahi et al., 2012; Lops

et al., 2011; O’Doherty, 2012; Quan & Hinze, 2008; Victor, 2010). Therefore,

recommendation items which may potentially interest and meet the personal preferences

of the system user will never be returned (Lops et al., 2011; Victor, 2010).

Disadvantage: Cold start problem (Bhuiyan, 2011; Lops et al., 2011; Quan & Hinze,

2008). The cold start problem applies to new system users and new recommendation

items (Bhuiyan, 2011). The cold start problem states that new recommendation items

cannot be recommended to system users as they have not yet been rated (Bhuiyan,

2011; Quan & Hinze, 2008). Additionally, the problem is extended to state that new users

cannot have recommendations determined for them as they have not yet rated any items

(Bhuiyan, 2011; Quan & Hinze, 2008). While content-based recommender systems will

not suffer from the cold start problem for new items, they will struggle with the new user

problem as recommendations are only based on those items already rated by the system

user (Bhuiyan, 2011; Lops et al., 2011; Quan & Hinze, 2008).

b) Evaluating collaborative filtering-based recommender systems

In evaluating a collaborative filtering-based recommender system, the following advantages

and disadvantages are identified.

Advantage: There is a greater diversity in the set of recommendation items

returned (Victor, 2010). The reason for this is that recommendation items are calculated

based upon those system users deemed to be similar (Victor, 2010). Consequently, these

similar system users may have rated recommendation items of potential interest to the

system user requesting a recommendation. As a result, these may be returned as

potential recommendation items to the system user (Victor, 2010).

Advantage: Caters for the drawback of the content-based recommender system

(Bhuiyan, 2011). By making use of a collaborative filtering-based recommender system

over a content-based recommender system, a recommender system user has a wider

range and more diverse set of possible recommendation items generated for them

(Bhuiyan, 2011). The reason for this is because recommendation items are independent

of previously rated items, but rather based on system user similarity (Bhuiyan, 2011).

Disadvantage: Cold start problem (Massa & Avesani, 2009; Quan & Hinze, 2008).

Collaborative filtering-based recommender systems also suffer from the cold start

problem. If there are new recommendation items or new users, it is not possible to

calculate similarity between users until a number of ratings have been attributed by the

system user or a number of ratings have been attributed to the new recommendation item

(Massa & Avesani, 2009; Quan & Hinze, 2008). As a result, these recommender system

types cannot return recommendations in such circumstances (Massa & Avesani, 2009;

Quan & Hinze, 2008).

Disadvantage: Reliability of few rated items (Massa & Avesani, 2009). The reliability

of a similarity score is compromised if similarity has only been determined between two

system users based on one or two commonly rated items (Massa & Avesani, 2009).

Similarly, how reliable is a recommendation item if it has only been attributed one or two

rating scores? Again, this drawback compromises the results output by the collaborative

filtering recommender system.

c) Evaluating trust-based recommender systems

In evaluating a trust-based recommender system, the following advantages and

disadvantages are identified.

Advantage: Trust is a personal measure of relationship (Bhuiyan, 2011; Golbeck,

2004; Johnson et al., 2011; Massa & Avesani, 2007a). This is an advantage as a trust

valuation assists to ensure that a generated recommendation is personalised (Bhuiyan,

2011; Golbeck, 2005; Johnson et al., 2011; Massa & Avesani, 2007a). The reason for this

is that trust is an indicator of similarity (Bhuiyan, 2011; Golbeck, 2005; Quan & Hinze,

2008). For example, the more one trusts someone, the more likely one is to agree with their ratings. Consequently, by leveraging trust, a recommender system can

personalise recommendations.

Advantage: Assists in overcoming the cold start problem (Massa & Avesani, 2007a;

Massa & Avesani, 2009; Quan & Hinze, 2008). Just by having a system user issue a

single trust statement, it is possible that many other system users become reachable for a

recommendation (Massa & Avesani, 2007a; Massa & Avesani, 2009; Quan & Hinze,

2008). For example, assume a system user explicitly trusts one other system user. This

other system user may in turn trust ten other system users and those other system users

may each trust ten other system users, and so on (see the reachability sketch after this list). Therefore, trust makes it easier to return a recommendation even if a user has not rated many items or issued many trust statements (Massa & Avesani, 2007a; Quan & Hinze, 2008).

Advantage: Ensures a large number and diversity of recommendation items. The

reason for this is because trust is so far reaching that many system users can potentially

be queried. Additionally, these recommendations immediately become more reliable

because of trust. If a system user knows another system user and has attributed a trust

rating to them, a system recommendation item immediately becomes more reliable than if

the system had calculated a recommendation based on similarity (Massa & Avesani,

2009).

Disadvantage: Cold start issue. As with the other recommender system types, trust-

based recommender systems also have to contend with the cold start issue. If a system

user has not issued a trust statement or if a recommendation item has not been rated,

then a recommendation cannot be determined. However, it is noted that trust-based

recommender systems will suffer less from this problem in comparison to the other

recommender system types. This is because the issuing of a single trust statement can

result in a greater possibility of a system recommendation being determined for a system

user (Massa & Avesani, 2007a; Massa & Avesani, 2009; Quan & Hinze, 2008).
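To illustrate the cold start advantage noted in the list above, the sketch below performs a breadth-first traversal of a trust network and shows how a single trust statement can make many other users reachable. The network, the hop limit, and all names are illustrative assumptions.

```python
# A minimal sketch of how a single trust statement can make many users reachable:
# a breadth-first traversal of a trust network (user -> {trusted user: trust value}).
# The network and names are illustrative assumptions.
from collections import deque

def reachable_users(trust_network: dict, start_user: str, max_hops: int) -> set:
    """Return all users reachable from start_user within max_hops trust links."""
    seen, frontier = {start_user}, deque([(start_user, 0)])
    while frontier:
        user, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbour in trust_network.get(user, {}):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, hops + 1))
    seen.discard(start_user)
    return seen

# Alice issues one trust statement (to Bob); Bob trusts two others, and so on.
network = {"alice": {"bob": 0.9},
           "bob": {"carol": 0.7, "dave": 0.6},
           "carol": {"eve": 0.8}}
print(reachable_users(network, "alice", max_hops=3))  # {'bob', 'carol', 'dave', 'eve'}
```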

Based upon the evaluation done for each recommender system type, the type of recommender

system chosen for this research is the trust-based recommender system. There are two main

motivations for the selection of this type of recommender system.

The trust-based recommender system caters best for the cold start problem as well as

the issues related to this problem. Through the application of trust, a single trust statement

issued by a system user results in a greater likelihood that a recommendation will be

determined for them (Massa & Avesani, 2007a; Massa & Avesani, 2009; Quan & Hinze,

2008). When considering the other recommender system types, a system user would have to

rate numerous recommendation items before a reliable and trustworthy recommendation item

could be determined for them.

The inherent advantages of trust. As stated in the evaluation, trust is a more personal,

reliable, and understandable measure than that of similarity.

Now that a type of group recommender system has been motivated, the next section discusses the

topic of group recommendation as well as its relevant sub-processes. This section details, at a high level, how group recommendation is implemented within a trust-based group recommender system.

2.4 Group recommendation

This section focuses on group recommendation. A high-level evaluation gives an indication of how group recommendations are generated, for the purposes of defining a group recommender model for this research. The foundational structure of this section is based upon the work of Jameson and Smyth (2007) and their paper on group recommendation. In this paper, Jameson and Smyth (2007) define four generic sub-processes within the main task of group recommendation. A fifth sub-process, group formation, has been added for the purposes of this research. These five sub-processes are as follows.

Group formation. Group formation is defined as the manner in which a group is formed.

Preference elicitation. Preference elicitation, within a group context, is defined as the

acquiring of individual preferences from each group member. These preferences are then

forwarded to be aggregated and used as the basis for a group recommendation (Jameson &

Smyth, 2007, Salamó et al., 2012).

Recommendation aggregation. Recommendation aggregation is defined as the method by

which a group recommender system processes the preferences or recommendations of

individual group members to form a final group recommendation (Jameson & Smyth, 2007).

The purpose of this method is to evaluate whether the final, aggregated group

recommendation is both satisfactory and pertinent to the group as a whole (Bourke et al.,

2011; Jameson & Smyth, 2007).

Explanation. Explanation is defined as the process followed by a group recommender

system in relaying to the group members how the final group recommendation was derived

(Jameson & Smyth, 2007).

Group consensus. Group consensus is defined as the process followed by the group

recommender system to aid the group in reaching a consensus on the selection of a group

recommendation (Cantador & Castells, 2012; Salamó et al., 2012).

Each of these group recommendation sub-processes is presented diagrammatically for the purposes of easy reference in Figure 2.1 below.

[Figure 2.1 depicts the five sub-processes in sequence: 1. Group formation, 2. Preference elicitation, 3. Recommendation aggregation, 4. Explanation, 5. Group consensus.]

Figure 2.1: Group recommendation process at a high level

While each sub-process within group recommendation is important, the main focus of the sections and chapters that follow is on the sub-processes of group formation,

preference elicitation, and recommendation aggregation. The reason for these areas of focus,

particularly preference elicitation and aggregation, is that the purpose of this research is to evaluate

the effect of using trust and personality in the process of group recommendation.

Therefore, the next three sections introduce the topics of group formation, preference elicitation, and aggregation.

2.4.1 Group formation

In the group recommendation process, groups of system users are typically formed in one of two

generic ways: explicitly or implicitly (Cantador & Castells, 2012; Gartrell et al., 2010). Explicit groups

are created by having a group administrator intentionally, manually, and explicitly adding system

group members to a list for a particular group recommendation (Cantador & Castells, 2012). Once

each member in the group has been added to the list, a group is said to have been formed.

Alternatively, a group can also be formed implicitly through location sensing such as Bluetooth. All

individuals within a specified distance would then be considered as part of the group (Cantador &

Castells, 2012; Hallberg et al., 2007).

For the purposes of this research, groups are explicitly formed. The main motivation for this is that it

gives the group administrator greater control to determine which group members should be

considered for a group recommendation.

The type of group explicitly formed now needs to be considered. In their research papers on group

recommendation, Boratto and Carta (2011) and Hallberg et al. (2007) each define a generic set of

group types. These group types are defined below.

a) Types of groups as defined by Boratto and Carta (2011)

Established group. This is a group consisting of a number of people who consistently

gather and choose to be together because of a common interest held over a long period

of time (Boratto & Carta, 2011).

Occasional group. This is a group consisting of a number of people who come together

over a commonly held interest, but only for a defined period of time (Boratto & Carta,

2011).

Random group. This is a group consisting of a number of people who come together, but

not out of any commonly held interest (Boratto & Carta, 2011).

Automatically identified group. This is a group consisting of a number of people who

have been put together based upon their personal preferences and resources (Boratto &

Carta, 2011).

b) Types of groups as defined by Hallberg et al. (2007)

A private group. These are groups formed by a single group member, with only that

group member aware of who is in the group. The participants of the group are thus

unaware of each other. Moreover, such a group is not visible to or searchable by anyone

else (Hallberg et al., 2007).

A protected group. These are groups where each person in the group knows one another

and each group member is able to invite other system users. However, one can only join

the group if they are invited by another group member. Additionally, these groups are not

searchable by anyone else (Hallberg et al., 2007).

A public group. Such a group is formed by a single administrator group member with

other system users being able to join at any one time. This group is searchable by other

group members with any group member having the potential to be given administrator

privileges (Hallberg et al., 2007).

Based on these group type definitions, the type of group explicitly formed in this research is that of an

established and occasional group, as defined by Boratto and Carta (2011), and a protected group, as

defined by Hallberg et al. (2007). The motivation for an established and occasional group is that both

of these types of groups gather around a common purpose and are explicitly part of the group for that

defined purpose. The group may break up or stay together thereafter, but for that specific period of

time, they are part of the group. The motivation for a protected group is that group activities are

commonly done with people who, to some extent, know one another. Therefore, it is not necessarily

open for any other person to join, unless they are explicitly invited by a trusted member of the group.

This ensures the safety and protection of those within the group.

In conclusion, for this research, groups are explicitly formed and the types of groups formed are

established and occasional groups, as defined by Boratto and Carta (2011), and protected groups, as

defined by Hallberg et al. (2007).

2.4.2 Preference elicitation

Preference elicitation is the process whereby the group recommender system derives an individual list

of preferences or recommendations for each group member (Jameson & Smyth, 2007, Salamó et al.,

2012). Typically, this can be done in either an explicit or implicit manner (Jameson & Smyth, 2007;

Popescu & Pu, 2011; Salamó et al., 2012).

With implicit preference elicitation, there is minimal group member input (Jameson & Smyth, 2007).

This method typically involves the group recommender system analysing the habits of a particular

group member and then processing the results of this analysis into a group member profile (Jameson

& Smyth, 2007).

In the explicit method of preference elicitation, system group members are typically required to

register with the system and explicitly attribute the relevant rating information (Jameson & Smyth,

2007). Typical examples of this are rating a number of items or defining explicit social relationships

(Jameson & Smyth, 2007).

For this research, an explicit methodology is followed because of the implementation chosen whereby

trust is used as a basis for generating individual recommendations for group members. Because of

this, a system group member is required to explicitly define social relationships.

2.4.3 Recommendation aggregation

Within the process of recommendation aggregation, there are two basic, general implementation

methodologies (Amer-Yahia et al., 2009; Baltrunas et al., 2010; Berkovsky & Freyne, 2010; Cantador

& Castells, 2012; Carvalho & Madedo, 2013; Garcia et al, 2012; Gartrell et al., 2010; Jameson &

Smyth, 2007; Kim et al., 2010; Masthoff, 2011; Pera & Ng, 2012; Salamó et al., 2012).

Aggregating each group member’s profile.

Aggregating each group member’s top-N recommendation list.

For the purposes of brevity, the first method is entitled profile aggregation and the second method,

top-N aggregation. The purpose of this section is to discuss both of these aggregation methodologies

and motivate which one best meets the needs of the proposed group recommender model.

a) Profile aggregation

Profile aggregation occurs when a group recommender system takes the individual group

member profile from each group member, containing all of their preferences, and merges

each into a single group profile (Amer-Yahia et al., 2009; Baltrunas et al., 2010; Berkovsky &

Freyne, 2010; Cantador & Castells, 2012; Carvalho & Madedo, 2013; Garcia et al, 2012;

Gartrell et al., 2010; Jameson & Smyth, 2007; Kim et al., 2010; Masthoff, 2011; Pera & Ng,

2012; Salamó et al., 2012). This group profile is then used as the basis to determine

recommendations for the group as a whole (Amer-Yahia et al., 2009; Baltrunas et al., 2010;

Berkovsky & Freyne, 2010; Cantador & Castells, 2012; Garcia et al, 2012; Jameson & Smyth,

2007; Kim et al., 2010; Masthoff, 2011; Pera & Ng, 2012).

The advantage of profile aggregation is that in generating a group profile, a group is more

likely to have recommendations which are appropriate and satisfactory for the group as a

whole (Kim et al., 2010). Secondly, it is easier for a group to agree upon the various factors of

a model than to deliberate over individual items (Jameson & Smyth, 2007). However, the

disadvantage with this approach is that there is a chance that not all of the members of the

group will be entirely satisfied, as there may be some individual group member’s preferences

which are not considered in the group profile (Kim et al., 2010).
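As an illustration of profile aggregation, the sketch below merges each group member's rating profile into a single group profile by averaging the ratings given to each item; the resulting group profile could then be treated as the profile of a single pseudo-user for recommendation purposes. The averaging strategy and names are illustrative assumptions rather than a specific model from the literature.

```python
# A minimal sketch of profile aggregation: individual rating profiles are merged
# into one group profile by averaging each item's ratings across the members who
# rated it. The averaging strategy and names are illustrative assumptions.

def aggregate_profiles(member_profiles: dict) -> dict:
    """member_profiles maps member -> {item: rating}; returns a single group
    profile mapping item -> average rating over the members who rated it."""
    totals, counts = {}, {}
    for profile in member_profiles.values():
        for item, rating in profile.items():
            totals[item] = totals.get(item, 0.0) + rating
            counts[item] = counts.get(item, 0) + 1
    return {item: totals[item] / counts[item] for item in totals}

profiles = {"alice": {"museum": 4, "zoo": 2},
            "bob": {"museum": 2, "theatre": 5}}
print(aggregate_profiles(profiles))  # {'museum': 3.0, 'zoo': 2.0, 'theatre': 5.0}
```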

b) Top-N aggregation

In the top-N aggregation approach, a list of top-N recommendations is generated for each

group member (Amer-Yahia et al., 2009; Baltrunas et al., 2010; Berkovsky & Freyne, 2010;

Cantador & Castells, 2012; Carvalho & Madedo, 2013; Chen et al., 2008; Garcia et al., 2012;

Gartrell et al., 2010; Jameson & Smyth, 2007; Kim et al., 2010; Masthoff, 2011; Pera & Ng,

2012; Salamó et al., 2012). This is commonly done by making use of collaborative filtering

(Baltrunas et al., 2010; Cantador & Castells, 2012; Kim et al., 2010; Masthoff, 2011; Pera &

Ng, 2012). Once a list of top-N recommendations has been determined for each group

member, the group recommender merges this top-N list into a single top-N group list of

recommendations for the group (Amer-Yahia et al., 2009; Baltrunas et al., 2010; Berkovsky &

Freyne, 2010; Cantador & Castells, 2012; Carvalho & Madedo, 2013; Garcia et al, 2012;

Gartrell et al., 2010; Jameson & Smyth, 2007; Kim et al., 2010; Masthoff, 2011).

The advantage of top-N recommendation is that the process of aggregation is transparent

and easy to explain. This allows group members to see how their individual preferences have

been considered in the group recommendation (Kim et al., 2010). In addition, it also aids the

group when trying to come to some sort of consensus as to which recommendation should be

selected as it is easy for the system to compare the group recommendation with the individual

recommendation list of each group member (Kim et al., 2010). Lastly, this methodology is

both dynamic and flexible in that it is able to cater for the needs of different groups (Amer-

Yahia et al., 2009; Gartrell et al., 2010). The disadvantage of this approach, however, is that

the system is less likely to identify as diverse a set of recommendations as the profile

aggregation approach (Kim et al., 2010). Additionally, it is quite a time-consuming process for

large groups (Jameson & Smyth, 2007; Kim et al., 2010).
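As an illustration of top-N aggregation, the sketch below merges the individual top-N lists into a single group list using a Borda-style count, in which an item earns more points the higher it appears in a member's list. This scoring rule is only one possible aggregation strategy and is an illustrative assumption here; the aggregation models actually considered are detailed in Chapter 9.

```python
# A minimal sketch of top-N aggregation: each member's ranked top-N list is merged
# into a group list by a Borda-style count, where an item earns more points the
# higher it appears in a member's list. The scoring rule and names are illustrative
# assumptions; the aggregation models actually considered are detailed in Chapter 9.

def merge_top_n_lists(member_lists: dict, n: int) -> list:
    """member_lists maps member -> ordered list of item ids (best first);
    returns the group's top-n items by total Borda score."""
    scores = {}
    for ranked_items in member_lists.values():
        list_length = len(ranked_items)
        for position, item in enumerate(ranked_items):
            # Highest-ranked item gets list_length points, the next one fewer, etc.
            scores[item] = scores.get(item, 0) + (list_length - position)
    ranked = sorted(scores.items(), key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in ranked[:n]]

lists = {"alice": ["zoo", "museum", "theatre"],
         "bob": ["theatre", "zoo", "aquarium"],
         "carol": ["zoo", "aquarium", "museum"]}
print(merge_top_n_lists(lists, n=2))  # ['zoo', 'theatre'] under this toy data
```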

c) Motivating a top-N aggregation methodology

For this research, a top-N recommendation aggregation model is adopted. The motivation for

this is twofold.

In order for trust to be leveraged and to affect the group recommendation, it is more apt for the system to make use of top-N recommendation aggregation. By determining a top-N recommendation list for each group member using trust, and then merging these lists with trust and personality as factors, the influence of trust and personality becomes a major factor in determining recommendations which are reliable and more likely to satisfy each group member.

The second motivation offsets the main disadvantage of this approach: the proposed model targets small to medium-sized groups who decide to group together, so the cost of processing calculations for large groups does not apply.

Up to this point, a group recommender system has been formally defined, a recommender system

type has been motivated, and the process of group recommendation has been explained at a high

level with a number of group recommendation sub-processes introduced and motivated. These

background discussions on group recommender systems lead to a set of requirements being defined

for a group recommender system. The derivation of these requirements begins in the next section

where a literature overview occurs. The result of this overview provides the basis for a list of group

recommender system requirements.

2.5 Related work

The purpose of this section is to evaluate relevant research in the areas of both group

recommendation and trust-based recommendation. The reason for this evaluation is to use this

research as a basis for a list of requirements to be determined for a trust-based group recommender

system. Therefore, this section is split into the two main areas of related research. The first

subsection comprises a literature review on group recommender systems and the second subsection considers the research within trust-based recommender systems.

2.5.1 Group recommender systems

In this section, a number of group recommender systems are reviewed, namely: HappyMovie (Herr et

al., 2012; Quijano-Sanchez et al., 2011), Masthoff’s (2004) television group recommender system,

and Gartrell et al.'s (2010) movie group recommender system. Each of these is reviewed below.

a) HappyMovie (Herr et al., 2012; Quijano-Sanchez et al., 2011)

HappyMovie is an online Facebook application which enables the group recommendation of

movies through the implementation of both trust and personality (Quijano-Sanchez et al.,

2011). It is one of the few group recommender systems identified, where the use of trust and

personality is combined for the purposes of group recommendation. Therefore, there are two

parts to this group recommender implementation: trust and personality, both of which are

detailed below.

Trust. In this implementation, trust is not explicitly determined or attributed by users

of the system. Instead, trust is calculated based on ten Facebook-driven factors.

Examples of these factors are the number of friends in common and the duration of

the relationship (Quijano-Sanchez et al., 2011).

Personality. Personality is implemented by making use of a movie metaphor. This

metaphor contains two different movie characters, representing two opposite

personality types. In this approach, the system user is required to select the character

they identify most closely with as an indicator of their own personality (Quijano-

Sanchez et al., 2011).

b) Masthoff’s television group recommender system (Masthoff, 2004)

In Masthoff’s (2004) television group recommender system, a sequence of television

programs is selected to cater for a group of people watching television. The main contribution

of this research is how a satisfactory group recommendation can be determined by catering

for the psychological impact of being in a group. Two of the main psychological factors

identified are that of emotional contagion and conformity (Masthoff, 2004).

Emotional contagion. Emotional contagion is the factor that states that a group

member’s satisfaction is affected by the satisfaction levels of the rest of the group

(Masthoff, 2004).

Conformity. Conformity states that a group member’s opinion of a recommendation

item is influenced by the opinion of the other group members. The reason for this is

so that a group member does not feel left out (Masthoff, 2004).

Another contribution by Masthoff (2004) is the analysis of various group recommendation

aggregation models and determining the most intuitive one for group recommendation. In this

analysis, 11 aggregation methodologies were proposed, where users were queried as to

which seemed the most intuitive to them. The outcome was that fairness and the

consideration of other group members’ needs are important when performing group

recommendation (Masthoff, 2004).

c) Gartrell et al.’s movie group recommender system (Gartrell et al., 2010; Herr et al.,

2012)

Gartrell et al. (2010) developed a group recommender system for movies. The focus of this

research was to determine an adequate consensus function for a satisfactory group

recommendation. One of the main elements of this consensus function focused on the

influence of relationships and how the closeness of these relationships affects the outcome of

a group recommendation.

The result of this research nominated specific aggregation methods based upon the

closeness of a relationship. Therefore, it was identified that a specific aggregation method can

be applied based upon the strength of the relationships between system users in the group

(Herr et al., 2012). As per Masthoff’s (2004) group recommender system, Gartrell et al.’s

(2010) research indicates the importance of considering the social make-up of a group.

2.5.2 Trust-based recommender systems

The trust-based recommender systems reviewed are: FilmTrust (Golbeck & Hendler, 2006),

Moleskiing (Avesani et al., 2005), and the EnsembleTrustCF trust algorithm (Victor, 2010).

a) FilmTrust (Golbeck & Hendler, 2006)

FilmTrust is a trust-based movie recommender system developed by Golbeck and Hendler

(2006). This recommender system calculates movie recommendations by making use of the

trust valuations between system users. Therefore, the basis of this trust-based recommender

system evaluates the impact that the trust level and recommendation item rating have on a

system user’s recommendation.

The main contribution of the FilmTrust system is the implementation of the TidalTrust

algorithm, developed by Golbeck (2005). This trust algorithm has been foundational in the

calculation and inference of trust between two system users who have no explicit trust

valuation between them. Therefore, through the application of the TidalTrust algorithm, it

became possible to generate trust-based movie recommendation items between system

users who are strangers to one another. In addition, it was shown that this trust-based

recommendation algorithm improves the performance of the standard and popularly

implemented collaborative filtering algorithm in the FilmTrust system (Golbeck & Hendler,

2006).

b) Moleskiing (Avesani et al., 2005)

Another recommender system which applies a foundational trust algorithm is the Moleskiing

system developed by Avesani, et al. (2005). This trust-based recommender system is a blog

based system for skiers. It allows skiers to view the blogs posted by trustworthy members

regarding which routes are safe for skiing.

The main contribution of the Moleskiing system is the implementation of the MoleTrust algorithm (Avesani et al., 2005). This trust-based recommendation algorithm built upon the work of Golbeck's TidalTrust algorithm (Golbeck, 2005) by adding two important enhancements, which are illustrated in the sketch following the list below.

A predefined trust horizon value to ensure the quality and reliability of a trust

value. This was done by ensuring that trust is never calculated beyond a pre-

specified number of people. For example, if the threshold value was set to two, then

trust would only be calculated between the friend of a friend, but not the friend-of-a-

friend-of-a-friend. The reason for this is that it was determined that the more

users that are required to calculate trust, the less reliable and trustworthy that trust

value becomes (Avesani et al., 2005).

A predefined minimum trust value threshold to only consider a system user’s

rating if their trust relationship with a user exceeds this threshold value. Again,

this measure ensures the reliability of trust when a recommendation is being

determined (Avesani et al., 2005).
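The sketch below illustrates, under simplifying assumptions, how a trust horizon and a minimum trust threshold can constrain trust propagation in the spirit of the two enhancements above. Multiplying trust values along a path, the default parameter values, and all names are illustrative assumptions and not the exact MoleTrust computation.

```python
# A minimal sketch, in the spirit of the two enhancements above, of limiting trust
# propagation with (1) a trust horizon (maximum path length) and (2) a minimum
# trust threshold. Multiplying trust values along a path is an illustrative
# assumption here, not the exact MoleTrust computation.

def propagated_trust(network: dict, source: str, target: str,
                     horizon: int = 2, min_trust: float = 0.5) -> float:
    """Best propagated trust from source to target within the horizon,
    following only edges whose trust value meets the threshold."""
    best = 0.0
    stack = [(source, 1.0, 0, {source})]          # (user, trust so far, hops, visited)
    while stack:
        user, trust_so_far, hops, visited = stack.pop()
        if hops == horizon:
            continue
        for neighbour, value in network.get(user, {}).items():
            if neighbour in visited or value < min_trust:
                continue                          # below threshold: ignore this edge
            combined = trust_so_far * value
            if neighbour == target:
                best = max(best, combined)
            else:
                stack.append((neighbour, combined, hops + 1, visited | {neighbour}))
    return best

network = {"alice": {"bob": 0.9, "carol": 0.3},
           "bob": {"dave": 0.8},
           "dave": {"erin": 0.9}}
print(propagated_trust(network, "alice", "dave"))   # 0.72 via bob; erin is beyond the horizon
```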

c) EnsembleTrustCF (Victor, 2010)

The last research reviewed is that of the EnsembleTrustCF algorithm, developed by Victor

(2010). This is a proposed trust-based recommendation algorithm as opposed to a

recommender system. However, it is relevant as it builds upon the research of both Golbeck

(2005) and Avesani et al. (2005).

In this research, Victor (2010) developed a trust-based recommendation algorithm,

EnsembleTrustCF, which makes use of the strengths of both trust and collaborative filtering in

determining a recommendation. Therefore, this trust-based algorithm makes every effort to

determine a system recommendation. The conclusion of this research found that a

combination of the strengths of both algorithms results in an improved performance when

compared against the TidalTrust (Golbeck, 2005) and MoleTrust (Avesani et al., 2005)

algorithms.

In this section, related research with regard to group recommendation and trust-based recommendation was presented. In the next section, this related research is used as the basis for a number of requirements for the proposed trust-based group recommender system.

2.6 Requirements for a trust-based group recommender system

In this section, the related research presented in the previous section is used as a basis to determine

a set of requirements for the proposed trust-based group recommender model. These requirements

are used as a basis for evaluation, as well as a guideline for design decisions in the rest of this

research.

The requirements for the proposed trust-based group recommender model are as follows:

Satisfaction. The satisfaction requirement has the aim of ensuring that, within the group

recommender system, each individual group member is content with the group

recommendation list returned (Chen et al., 2008; Garcia et al., 2012; Popescu & Pu, 2011).

By implication, this means that there needs to be a way to determine the satisfaction of the

individual group member as well as of the group as a whole (Carvalho & Macedo, 2013; Jameson & Smyth, 2007). The necessity of this requirement is highlighted in each group recommender system reviewed, since it is a foundational requirement for any recommender

system.

Implementation of social factors. This requirement is derived from the HappyMovie

Facebook application (Quijano-Sanchez et al., 2011), Masthoff’s (2004) television group

recommender system, and Gartrell’s (2010) movie group recommender system. In each of

these group recommender systems, the social factors within a group are considered when generating a group recommendation. Considering social factors ensures that not only are the personal preferences and needs of each group member met, but also that the social makeup and heterogeneous personalities within a group are appropriately accounted for. Additionally, the consideration of social factors contributes to the satisfaction of each group member as well as of the group as a whole (Cantador & Castells, 2012; Chen et al., 2008; Gartrell et al., 2010; Quijano-Sanchez et al., 2013). It is also noted that this is still considered an open issue requiring further research (Cantador & Castells, 2012).

Implementation of trust. The trust requirement is derived from the review of trust-based recommender systems, where it was noted that trust is a more accurate, reliable, and far-reaching measure of relationship than similarity within collaborative filtering. Therefore, the implementation of trust can provide the proposed group recommender model with the ability to determine a greater number of, and more diverse, recommendations.

Inference of trust. The inference of trust requirement states that it must be possible to

determine the trust valuation between two system users who do not have an explicit trust

relationship. This point was highlighted in the related research section for all trust-based

recommender systems. If this requirement is not met, then the system is more limited than a

collaborative filtering-based system since only explicit trust statements can be considered.

Generic implementation. The generic implementation requirement states that the

implementation of a trust-based group recommender system needs to be generic and not tied

to any single application context. Therefore, the system must be designed in such a way that it can easily be applied within any context, as group recommendation applies to multiple contexts.

2.7 Conclusion

In this chapter, the topic of group recommendation was introduced and defined for the purposes of

deriving a list of requirements for the proposed group recommender model. This was achieved by

firstly providing a formal definition of group recommender systems for the purpose of ensuring a

common understanding. Thereafter, a number of recommender system types were presented and

evaluated for the purposes of motivating a suitable recommender system type for this research. The

conclusion of this evaluation was that a trust-based group recommender system would be

implemented. In the section following, the group recommendation process was defined. Thereafter, a literature review was done on a number of

group recommender and trust-based recommender systems for the purpose of deriving a list of

requirements for this research. Finally, a list of requirements was presented for the proposed group

recommender model.

As was noted in both the final list of requirements for the proposed group recommender model as well

as in the motivation for a suitable recommender system type, trust plays an important role in the

generation of group recommendations. Therefore, because of its importance and because it is used in

multiple stages of the group recommendation process, a trust implementation framework is to be

defined by this research. This trust implementation framework identifies a candidate trust-based

algorithm to be used. Thereafter, group recommendation processes are detailed as a further

foundation for the work presented in this research.

As a result, the next chapter is the first of those detailing the trust implementation framework. It introduces the trust network and its essential concepts, which are relevant to this research as trust is visually presented in this dissertation through trust networks.


Part I(a) Trust implementation framework

Chapters 3-6


Chapter 3 Trust network concepts


3.1 Introduction

In the previous chapter, a background to group recommender systems was given. That chapter concluded with a list of requirements for the proposed group recommender model. In this list of group

recommender requirements, one of the main considerations was that of trust, and it was concluded

that this would be the basis for the definition of a trust implementation framework. In order to begin

defining this framework, however, trust networks need to be defined as many of the trust concepts

introduced in later chapters make use of these networks to visually present trust relationships.

Therefore, the definition of a trust network and its related concepts is the purpose of this chapter.

In order to define a trust network and its related concepts, this chapter is structured as follows. In the

first section, trust networks are formally defined. Thereafter, the visual representation of trust

networks is discussed. These visual representations make it simpler to analyse and study trust

relationships. Next, the two main types of trust networks used for viewing trust relationships are defined, namely egocentric and sociocentric trust networks. The chapter concludes by motivating the

trust network implementation to be used in this research.

3.2 Definition

A trust network is a specific type of social network. Therefore, before a formal definition is given for a

trust network, a social network is defined.

A social network is defined as a network structure whereby system users are linked to each other via

a defined relational attribute (Al Falahi et al., 2012; Chung et al., 2005; Churchill & Halverson, 2005;

Marin & Wellman, 2011; Victor, 2010). Examples of relational attributes linking two system users

together are friend, work colleague, or family member. A trust network, therefore, is defined as a

social network in which the relational attribute linking system users together is that of trust (Victor,

2010). A trust link between system users reflects that one user trusts another user (Victor, 2010). This

definition is formally presented in Definition 3.1.

The relevance of a trust network in a recommender system is that recommender systems do not

typically consider the social or trust relationships between users (Al Falahi et al., 2012). Therefore,

trust networks provide a means to represent trust information in this research.

Definition 3.1: Trust network

A trust network is defined as a social network structure in which the trust valuations linking system users are visually presented (Victor, 2010).

When studying trust networks, there are two common methods used to visually represent them. The first method is the sociogram and the second method is an activity matrix (Balajik, 2002; Churchill & Halverson, 2005; Hanneman & Riddle, 2005). These representation methods are discussed in the section following.

3.3 Visually representing trust networks

The purpose of this section is to present the two most common methods of visually representing trust:

the sociogram and the activity matrix (Balajik, 2002; Churchill & Halverson, 2005; Hanneman &

Riddle, 2005). The reason for such visual representations in this research is to ensure a common

understanding when discussing trust in future sections. The section concludes with a motivation for

which one is used in this research.

3.3.1 Sociogram

The sociogram was developed by a United States sociometric analyst named Jacob Moreno in 1934

(Balajik, 2002; Chung et al., 2005; Churchill & Halverson, 2005). In it, he visually represented social relations between people by using dots to represent individuals and connecting lines to represent the relationship between two people (Balajik, 2002; Chung et al., 2005; Churchill & Halverson, 2005). The sociogram is still commonly used today as a basis for studying and analysing social networks. It is both a visually effective and easy-to-interpret method of representing a social or trust network. An example of a basic sociogram is shown in Figure 3.1 below. This figure presents a formal trust relationship between a user, Adam, and another user, Ben.


Figure 3.1: Basic sociogram representing a relationship between Adam and Ben

Formally, with reference to Figure 3.1, Adam and Ben are referred to as nodes or actors, whereas the

line connecting them is referred to as an edge (Al Falahi et al., 2012; Churchill & Halverson, 2005;

Hanneman & Riddle, 2005; Markides, 2011). In a trust network, the edge represents trust.

3.3.2 Activity matrix

The activity matrix is a table with rows and columns. The rows and columns represent the actors or

nodes with each cell containing a representative measure of the relationship between them (Churchill

& Halverson, 2005; Hanneman & Riddle, 2005; Markides, 2011). An example of a trust-based activity

matrix is shown below in Table 3.1. In this activity matrix, a value of 1 indicates that a trust relationship exists, while a value of 0 indicates that it does not.


        Adam   Ben
Adam     -      1
Ben      1      -

Table 3.1: Activity matrix representing a trust relationship between Adam and Ben

Within an activity matrix, there are a few additional points to take note of.

An activity matrix is read from a row perspective. Therefore, with reference to Table 3.1, a relationship is determined between two users by identifying the subject user in the row and then looking for the corresponding user in the column.

The main diagonal of the matrix is ignored (Hanneman & Riddle, 2005). With reference to Table 3.1, a system user cannot have a trust relationship with themselves; consequently, the values on the diagonal are not considered.
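As an illustration only, the activity matrix in Table 3.1 could be held in memory as a simple nested structure and read from the row perspective, as in the following Python sketch (the names and representation are assumptions, not part of the PerTrust implementation):

# Illustrative sketch: the binary activity matrix of Table 3.1 as a nested dictionary.
trust_matrix = {
    "Adam": {"Ben": 1},   # row "Adam": Adam has a trust relationship with Ben
    "Ben":  {"Adam": 1},  # row "Ben": Ben has a trust relationship with Adam
}

def has_trust(matrix, subject, target):
    # Read from the row (subject) perspective; the main diagonal is ignored.
    if subject == target:
        return False  # a user cannot hold a trust relationship with themselves
    return matrix.get(subject, {}).get(target, 0) == 1

print(has_trust(trust_matrix, "Adam", "Ben"))   # True
print(has_trust(trust_matrix, "Adam", "Adam"))  # False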

3.3.3 Motivating a visual representation of trust networks

The method by which trust relationships are presented in this research is the sociogram, as it is easier

to visually determine and interpret the various trust relationships between system users. This is

particularly relevant when larger trust networks are presented for discussion.

3.4 Measuring and representing trust relationships

There are various metrics and methods to identify the strength and type of relationship one node has

with another. Therefore, the purpose of this section is to introduce these means as well as to define

how they are applied and represented within a sociogram.

3.4.1 Relationship flow

Within any relationship and relationship context, there is a flow of relationship information. In a trust

network, there are typically three types of relationship flows: one-way, mutual, and existence. Each of

these relationship flows is defined below.

a) One-way

In a one-way relationship flow, a relationship flows from node A to node B, but not from node

B to node A (Hanneman & Riddle, 2005). Therefore, all relationship information flows outward

from a particular node. An example of how this relationship is represented in a sociogram is

shown in Figure 3.2 below.



Figure 3.2: Sociogram representing a one-way relationship flow between Adam and Ben

Within a sociogram, relationship flow is shown by using the edge and attaching it with an

arrow head (Hanneman & Riddle, 2005). This arrow head reflects the direction of the

relationship (Hanneman & Riddle, 2005). Therefore, with a one-way relationship flow, the

arrow head indicates the flow of relationship in a single direction.

b) Mutual

In a mutual relationship flow, a relationship flows from node A to node B as well as from node

B to node A (Hanneman & Riddle, 2005). It is important to note, however, that the strength of

these relationship flows may not be the same for both nodes. An example of how a mutual

relationship flow is represented in a sociogram is shown in Figure 3.3 below.


Figure 3.3: Sociogram representing a mutual relationship flow between Adam and Ben

In this sociogram, the relationship flow is bidirectional. Therefore, the arrow head indicates

that the relationship flow occurs both ways.

c) Existence

In an existence relationship flow, there is no indicated relationship flow. Rather, there is

simply a connection from a node A to a node B to indicate that a relationship exists

(Hanneman & Riddle, 2005). This relationship flow is shown in Figure 3.4 below.


Figure 3.4: Sociogram representing an existence relationship flow between Adam and Ben

Because there is no indication of relationship flow in the existence relationship flow, no arrow

heads are required. Consequently, the two nodes are simply connected.


Formally, trust networks where the relationship flow is indicated are called directed trust networks

(Hanneman & Riddle, 2005; Marin & Wellman, 2011). Conversely, those trust networks where no

relationship flow is indicated are called simple trust networks (Hanneman & Riddle, 2005; Marin &

Wellman, 2011). Therefore, trust networks with one-way or mutual relationship flows are directed trust networks, while a trust network with only existence relationship flows is a simple trust network.

In this section, the representation of the various relationship flows was discussed with application to

sociograms. However, though relationship flows were identified, there needs to be a measure of the

relationship flow that exists (Hanneman & Riddle, 2005). The various metrics and measures of trust

relationships within a trust network are discussed in the next section.

3.4.2 Measuring relationship flows

Metrics and measurements indicate the strength of the edge between two nodes. There are typically four broad categories of relationship flow measures: binary, signed, ranked, and interval measures (Hanneman & Riddle, 2005).

a) Binary measure

The binary measure reflects the existence of a trust relationship with a yes or a no. A value of

1 represents existence and 0 no existence (Hanneman & Riddle, 2005). The relevant

sociogram is presented in Figure 3.5 below.


Figure 3.5: Sociogram representing a binary relationship measure between Adam and Ben

b) Signed measure

The signed measure indicates the strength of a relationship by making use of a scale of -1, 0, or +1. A value of -1 indicates a bad relationship, a value of 0 indicates a neutral

relationship, and a value of +1 indicates a good relationship (Hanneman & Riddle, 2005). The

relevant sociogram is presented in Figure 3.6 below.


Figure 3.6: Sociogram representing a signed measure between Adam and Ben


c) Ranked measure

The ranked measure represents the strength of a relationship by using a ranking system. This

means that one would typically consider each person in the trust network and assign 1 to the

person they trusted most, 2 to the person they trusted second most, and so forth (Hanneman

& Riddle, 2005). The relevant sociogram is presented in Figure 3.7 below.


Figure 3.7: Sociogram representing a ranked measure between Adam and Ben

d) Interval measure

The interval measure uses a scale to represent the strength of a relationship. For example,

the scale could be bounded between 0 and 100 with 0 indicating a poor relationship and 100

an extremely strong relationship. Consequently, each trust relationship in the network is attributed with a number indicating the strength of trust between the two nodes (Hanneman & Riddle,

2005). The relevant sociogram is presented in Figure 3.8 below.


Figure 3.8: Sociogram representing an interval measure between Adam and Ben
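For illustration, the four measure categories could be attached to a single trust edge as follows. This Python sketch is an assumption chosen to mirror the figures above and is not part of the PerTrust implementation:

# Illustrative sketch: the edge from Adam to Ben under each measure category.
edge_from_adam_to_ben = {
    "binary":   1,    # the relationship exists (Figure 3.5)
    "signed":   +1,   # a good relationship on the -1/0/+1 scale (Figure 3.6)
    "ranked":   1,    # Ben is the person Adam trusts most (Figure 3.7)
    "interval": 80,   # strength of trust on a 0 to 100 scale (Figure 3.8)
}
print(edge_from_adam_to_ben["interval"])  # 80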

In the next section, the two generic ways of representing a trust network as a whole are considered.

3.5 Types of trust networks

Within trust networks, there are two generic types defined: egocentric trust networks and sociocentric

trust networks. In this section, both types of trust networks are introduced and discussed.

3.5.1 Egocentric trust network

An egocentric trust network is defined as a trust network concentrated on the trust valuations of a

single user, called the ego (Balajik, 2002; Chung et al., 2005; DeJordy & Halgin, 2008; Hanneman &

Riddle, 2005; Marin & Wellman, 2011; Markides, 2011). The purpose of analysing an egocentric trust

network is to study an individual's trust network and determine whether any generalisations can be made about the broader trust environment (Balajik, 2002; Markides, 2011). An

example of an egocentric trust network with Adam as the ego is shown below in Figure 3.9.


Figure 3.9: An egocentric trust network about Adam, the ego

An egocentric trust network is formed by firstly identifying the ego. The ego is defined as the person

whose trust network one would like to study (Chung et al., 2005; Marin & Wellman, 2011). Thereafter,

all nodes that have a trust connection to the ego are identified and added to the trust network of the

ego. These other nodes are referred to as alters (Chung et al., 2005). It is important to note that the

trust network does not go beyond the nodes connected to the ego. Therefore, only those nodes

directly linked to the ego are added to the trust network (Hanneman & Riddle, 2005).
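The following minimal Python sketch illustrates this formation process under assumed names and trust values; it simply keeps the ego's outgoing edges to its alters and nothing beyond them, and is not part of the PerTrust implementation:

# Illustrative sketch: forming an egocentric trust network (the ego and its alters).
def egocentric_network(trust_edges, ego):
    # Keep only the directed trust edges that start at the ego.
    alters = trust_edges.get(ego, {})
    return {ego: dict(alters)}  # nodes beyond the alters are not included

# Directed edges with interval-measured trust values (an assumed 0 to 10 scale).
edges = {
    "Adam":  {"Ben": 8, "Craig": 7, "Darren": 9},
    "Craig": {"Ed": 6},  # not part of Adam's egocentric network
}
print(egocentric_network(edges, "Adam"))  # {'Adam': {'Ben': 8, 'Craig': 7, 'Darren': 9}}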

3.5.2 Sociocentric trust network

A sociocentric trust network is defined as a trust network concentrated on the entire social structure of

a group of people within a specific context (Balajik, 2002; Chung et al., 2005; DeJordy & Halgin, 2008;

Hanneman & Riddle, 2005; Marin & Wellman, 2011; Markides, 2011). Therefore, a sociocentric trust

network is not about the trust network of a single user, but about the trust network of every user within a group in a specific context (Balajik, 2002; Marin & Wellman, 2011). The purpose of a sociocentric trust network is a broader analysis of an entire group in order to determine

trust patterns as well as to analyse the trust structure of a network (Chung et al., 2005; Hanneman &

Riddle, 2005). An example of a sociocentric trust network is given below in Figure 3.10.



Figure 3.10: A sociocentric trust network about Adam and his friends

A sociocentric trust network is formed by defining a specific context and then identifying all

relationships within that context (Balajik, 2002; Markides, 2011). The trust network is formed once

every relevant node is identified and represented. Thereafter, trust connections are processed

between each member of the group. However, it is quite possible that there could be disconnected

nodes or even disconnected subsets of social networks if no relationship connections are identified.

In the next section, all of the aforementioned sections are considered for motivating the type of trust

network that is used in this research.

3.6 Motivating an egocentric, directed, and interval measured trust network

For the purposes of this research in the analysis of trust as well as group recommendation, an

egocentric, directed, and interval measured trust network is motivated. The motivation for each of

these considerations is as follows.

a) Egocentric

In this research, it is relevant for an egocentric trust network to be implemented. The first

reason for this is that trust is always studied from the perspective of a single user. Therefore,

only those trust relationships relevant to a specific node need to be considered. However, in

this research, it is noted that these egocentric trust networks are extended beyond the ego to

include further trust networks. Yet, these further trust networks are always based on those

nodes directly linked to the ego. The reason for this extension is the inference of trust, which is explained in later chapters. An example of such an extension is given in Figure 3.11

below.



Figure 3.11: An extended egocentric trust network about Adam and his friends

The second reason for the implementation of an egocentric trust network is that it is more personalised, because trust is analysed from a personal perspective, i.e. that of the ego.

b) Directed

In this research, a directed, one-way relationship flow is implemented in representing trust.

Because an egocentric trust network is being used, trust is only analysed from the

perspective of the ego. Therefore, the focus is on the trust edges attributed by the ego to the

alters. Consequently, all trust edges flow out from the ego towards the alters.

c) Interval measure

In this research, trust is presented as an interval measure. Formally, this is defined within

trust networks as a gradual representation of trust (Victor, 2010; Victor et al., 2011). Most

recommender systems adopt a gradual representation of trust because, as in real life, an all-

or-nothing trust representation is not feasible (Victor, 2010; Victor et al., 2011). Consequently,

in every defined trust relationship, there is some sort of a scale of trust. Therefore, an interval

measure of trust in trust networks is implemented in this research.

In light of the aforementioned points, it is motivated that the best trust network implementation for this

research is an egocentric, directed, and interval measured trust network.

3.7 Conclusion

In this chapter, a number of trust network concepts were defined. The chapter began with a formal

definition of a trust network. Following on from this, the main methods of visually representing trust

networks were defined, with the sociogram motivated as the chosen method of representing trust for

this research. The next section then built upon the basic definition of a sociogram by describing how

trust relationship flows are defined as well as how trust can be presented and measured in a

sociogram. Thereafter, the two main types of trust networks, egocentric and sociocentric trust


networks, were defined and presented as the two generic perspectives for viewing trust

relationships. Finally, the chapter concluded with a motivation for an egocentric, directed, and interval

measured sociogram to be used for the representation of trust networks in this research.

The definition of these trust networking concepts and the selection of a method for representing trust

in this research is relevant in the chapters following. In these chapters, a trust implementation

framework for the proposed group recommender model is defined. The concepts defined in this

chapter are used as a basis for discussing this trust implementation framework. Therefore, in the next

chapter, the definition of this trust implementation framework begins with a background discussion on

the implementation of trust in recommender systems.


Chapter 4 A background to trust in recommender systems


4.1 Introduction

In the previous chapter, a number of trust network concepts were defined. That chapter motivated the visual implementation and interpretation of trust through trust networks. The conclusion of that chapter was that an egocentric, directed, and interval measured trust network is the best method of visually implementing and interpreting trust for this research.

This chapter and the ones following detail a framework for the implementation of trust in the proposed

group recommender model. The result of this framework is a suitable trust-based algorithm for

calculating trust and leveraging trust in order to generate recommendations. The purpose of this

chapter, therefore, is to start with the foundation of this trust framework by introducing the concept of

trust, specifically trust within recommender systems.

Trust is a very common topic and is applicable to many fields, such as economics, law, business, and

computer science (Golbeck, 2005). However, the difficulty posed by the use of trust in these many

different domains is that trust is both difficult to define and inherently complex. For instance, trust is

based on issues such as past experiences with a particular person, psychological factors, how a

person is viewed by others, their personality, and so forth (Bhuiyan, 2011; Golbeck, 2005). As a

result, trying to transfer trust from a real world concept to an online concept is difficult (Bhuiyan, 2011;

Golbeck, 2005). Therefore, the concept of trust within this chapter is limited to the scope of

recommender systems.

The relevance of trust in recommender systems is that it aids such systems in the generation of

reliable and quality recommendations. The motivation for this is based on a simple premise noted by Golbeck (2005): people are more likely to rely upon and consider recommendations given by people they know and trust rather than by strangers. In other words, to some

degree, trust parallels user similarity (Golbeck, 2005). The more one trusts someone, the more likely

they are to agree with their recommendation.

Therefore, through the implementation of trust, the recommender system can ascertain the reliability

and quality of a recommendation by only considering the recommendations of the most trusted users

and filter out the recommendations given by untrusted users (Golbeck, 2005). In the real world, this is

directly applicable. For example, when considering whether or not to visit a tourist destination, one often asks for the opinion of friends, family, or even a credible critic. Such feedback is typically taken into consideration and influences the final decision of whether to visit that tourist destination.

In order to begin defining the trust implementation framework for this research, this chapter is

structured as follows. The first section provides a formal definition of trust within recommender

systems. The next section lists the properties of trust. These properties as well as the trust network

concepts defined in the previous chapter are then referenced in the following section, which details


how trust is calculated within a recommender system. Finally, the last section concludes the chapter

and looks ahead to the next stage in defining a suitable trust implementation framework for this

research, namely how trust is leveraged to calculate system recommendations.

4.2 Defining trust

There is little consensus as to a particular and correct definition of trust within recommender systems

(Victor, 2010). This is well highlighted in Hussain et al.’s (2007) extensive overview of trust and

reputation. As a result, the purpose of this section is to study multiple definitions of trust from literature

and derive a simple and understandable definition of trust suitable to this research.

The first definition of trust is from a paper by Aris et al. (2011). They state that trust consists of two

main concepts, namely risk and reliance (Aris et al., 2011). They define risk as “… the potential for

negative outcomes to occur” (Aris et al., 2011), while reliance is defined as “…an action through

which one party permits its fate to be determined by another” (Aris et al., 2011). From this definition,

two concepts are derived. Firstly, trust involves a judgement as to whether negative consequences may occur. Secondly, trust involves a dependence upon another user for the outcome of a particular action.

A similar definition is used by Victor (2010), taken from the trust definition given by Jøsang and Presti

(2004). Jøsang and Presti (2004) define trust as the “…subjective probability by which an agent can

have a level of confidence in another agent in a given scope, to make a recommendation.” Again, this

highlights the potential for negative consequences. However, it additionally highlights that trust is a subjective, personalised concept.

A third definition by Jøsang (2011) distinguishes between two types of trust. The first type of trust is

evaluation trust and the second type of trust is decision trust. Evaluation trust is defined as “…the

evaluation of something or somebody independently of any actual commitment.” Decision trust,

“...indicates that the relying party has actually made a commitment to depend on the trusted party."

These definitions of trust indicate that for trust to be effective, it needs to lead to a commitment by the

person or, in the case of recommender systems, by the user.

A fourth definition of trust in recommender systems is given by Bhuiyan (2011). Trust is defined as

“…a subjective probability by which an agent can have a level of confidence in another agent in a

given scope, to make a recommendation.” The importance of this definition is that it alludes to the fact that

trust is always limited to a specific context. As a result, the trust between two people may be strong in

one context, but not as strong in another context.

A fifth and final definition on trust is found in Golbeck’s (2005) dissertation. Here, trust means that a

user A trusts a user B if they commit to a specific action in the belief that the future actions


undertaken by user B will result in a good outcome. The two components contained in this definition

are belief and commitment. Belief indicates that there is a belief that a user will act in a specific way.

Commitment indicates that there needs to be a commitment upon that belief for trust to occur. In

addition, as noted by Golbeck (2005), the action requiring trust does not have to be a major one, and the foundation for belief in an action is entirely subjective and will, therefore, vary between users.

In light of the above definitions, the definition of trust that is used for this research is based on Golbeck's (2005), with a few modifications. This definition is presented in Definition 4.1 below.

Definition 4.1: Trust

Trust is defined as the commitment by a user A to a specific action, within a specific context, in the subjective belief that the future actions undertaken by another user B will result in a good outcome, while accepting that negative consequences may otherwise occur.

It is a simple, understandable, yet comprehensive definition, which incorporates all of the findings

of the afore-listed definitions of trust. Such a definition is felt to be relevant and necessary for a group

recommender system. The importance of the definition is that while such a system may generate

quality recommendations, a high level of trust in a recommendation should reflect a commitment by a

user to act on it.

In the next section, the various properties associated with trust, in the context of a recommender

system, are discussed and assessed. The purpose of defining these properties of trust is to provide a

foundation for the calculation of trust in the proposed trust implementation framework.

4.3 Properties of trust

In this section, the various properties of trust within recommender systems are presented and defined.

These properties of trust form the foundation of calculating trust and consequently, determining

recommendations within the proposed trust implementation framework. The trust properties to be

discussed are transitivity, composability, personalisation, and asymmetry.

4.3.1 Transitivity

The transitivity property of trust within a recommender system is defined as the potential of trust to be

passed from one user to another (Andersen, 2011; Bhuiyan, 2011; Golbeck, 2005; Johnson et al.,

2011; Victor, 2010; Victor et al., 2011). This property is best illustrated by considering Figure 4.1

below.




Figure 4.1: Transitivity property of trust

Figure 4.1 presents a small egocentric and directed trust network, containing three nodes: Adam,

Ben, and Craig. The transitivity property of trust states that even though Adam does not directly trust

Craig, a measure of trust is inferred based on the fact that Adam trusts Ben who, in turn, trusts Craig

(Golbeck, 2005).

The transitivity property of trust is broken down into two sub-levels of trust, namely referral trust and

functional trust (Andersen, 2011; Bhuiyan, 2011; Jøsang, 2011). Referral trust refers to the

recommendation passed from one user to another, while functional trust refers to the trust held by a user who has actually experienced the trustworthiness of the actions of another

user. Both of these types of trust are then further defined as either direct or indirect. This breakdown

of trust transitivity is shown with reference to Figure 4.2 below.


Figure 4.2: Trust transitivity - Functional and referral trust (Jøsang, 2011)

In the above figure, the direct referral trust is between Adam and Ben as Adam has a direct trust

relationship with Ben. Direct functional trust exists between Ben and Craig since Craig is trusted by

Ben. The indirect functional trust is between Adam and Craig, as Adam has to decide whether or not to trust Craig. Therefore, in light of this definition, the transitivity property of trust is further defined as

functional trust passed between users via referral trust (Andersen, 2011; Bhuiyan, 2011; Jøsang,

2011).


There are a few important things to note about the transitivity property of trust.

Trust is not purely transitive from a mathematical perspective (Golbeck, 2005; Victor et

al., 2011). This means that if Adam highly trusts Ben and Ben highly trusts Craig, it will not

necessarily follow that Adam will highly trust Craig.

The transitivity property of trust is contextual (Andersen, 2011; Bhuiyan, 2011;

Johnson et al., 2011; Victor et al., 2011). This means that trust can only be passed within a

specific context, such as tourist attractions or movies.

The transitivity property of trust means that trust can be passed down to many people (Andersen, 2011; Golbeck, 2005). It is important to note that, in such scenarios, trust degrades as it is passed between many users (Andersen, 2011; Golbeck, 2005). In other words, the quality of trust diminishes the further it is passed down a path of users.

4.3.2 Composability

The composability property of trust within a recommender system means that one is able to compose

multiple sources of information in order to make a decision on trust (Golbeck, 2005). The greater the

number of information sources, the greater the basis for trust in a particular user (Golbeck, 2005). The

composability property of trust is shown with reference to Figure 4.3 below.


Figure 4.3: Composability property of trust

From the egocentric, directed graph in Figure 4.3, it is seen that trust is passed along two paths.

The first path goes from Darren to Craig to Ben and then finally to Adam. The second path goes from

Darren to Franck to Ed and then to Adam. The composability property of trust says that Adam can

compose the trust information from these multiple paths and make use of it to determine an

appropriate level of trust in Darren.


4.3.3 Personalisation

The personalisation property of trust within a recommender system means that trust, as a value,

needs to be personal (Bhuiyan, 2011; Golbeck, 2005; Johnson et al., 2011; Massa & Avesani,

2007a). The reason for this is that trust is a subjective measure and, therefore, what one user deems

as trustworthy may not be what other users deem as trustworthy (Bhuiyan, 2011; Golbeck, 2005;

Johnson et al., 2011; Massa & Avesani, 2007a). Consider the following scenario, as laid out in Figure

4.4, as a basis for discussing the personalisation property of trust.


Figure 4.4: Personalisation property of trust

Assume that Adam and Ben want to visit the Apartheid Museum. In order to determine a

recommendation for the tourist attraction, both consult their friends, Craig and Darren respectively.

Craig previously went to the museum and he had a good time. As a result, he gave it a high

recommendation. Darren, on the other hand, went to the museum but experienced poor service.

Consequently, he gave the museum a poor recommendation. As a result, Adam now retrieves a high

recommendation and Ben retrieves a low recommendation from Craig and Darren respectively. Such

a scenario highlights how trust is personalised and subjective.

4.3.4 Asymmetry

A property of trust related to the personalisation property of trust is asymmetry. This property in

recommender systems means that, although two users may trust each other, their levels of trust in each other can differ (Golbeck, 2005; Johnson et al., 2011; Massa & Avesani, 2007a). A simple example laying

out the asymmetry property of trust is represented in Figure 4.5 below.



Figure 4.5: Asymmetry property of trust

In Figure 4.5, Adam and Ben trust each other, but in a different manner. Adam trusts Ben by x, a high

value of trust. However, Ben sees his relationship with Adam differently and only trusts him y amount,

a lower value of trust. Therefore, trust is asymmetrical, with the trust value dependent upon the

perspective of each user.

In this section, a number of the main properties of trust within recommender systems were outlined and discussed. In the next section, these properties of trust are leveraged in detailing how trust is

calculated within a recommender system. The calculation of trust is one of the main factors in

determining recommendations. The detailing of this process will assist in forming the trust

implementation framework for the proposed group recommendation model.

4.4 The implementation of trust in recommender systems

Within a trust-based recommender system, trust can either be explicitly defined or implicitly

calculated. Explicit trust occurs when a system user explicitly attributes a trust valuation towards

another system user. However, there are occasions when trust needs to be implicitly determined

between two users who do not know one another. The purpose of this section is to detail, from a high-level perspective, how trust is calculated in recommender systems in such circumstances. This high-level process will provide a broad basis for the trust implementation framework proposed for this

model.

4.4.1 Calculating trust at a high level

In order to define the implicit calculation of trust at a high level, a trust network is used. Therefore,

consider a scenario in which implicit trust would have to be calculated. Such a scenario is presented

in an egocentric, directed, and interval measured social network in Figure 4.6 below. In this trust

network, a trust valuation is bound between 0 and 10. A trust valuation of 0 indicates no trust, while a

trust valuation of 10 indicates full trust. In this scenario, the trust level between Adam and Franck has

to be calculated as there is no explicit trust valuation defined between them. Therefore, trust has to be

implicitly assigned or inferred between them (Golbeck, 2005).



Figure 4.6: Determining trust between Adam and Franck

With reference to Figure 4.6, how would Adam determine whether Franck is trustworthy? He can base

it on his explicit trust in Craig, and Craig's explicit trust in Franck. He can additionally base it on his

explicit trust in Darren, Darren’s explicit trust in Graham, and Graham’s explicit trust in Franck. Such

information sources are taken into account by Adam to determine whether Franck is trustworthy. This

is intuitive both in a recommender system and in a real-world scenario.

In terms of the trust properties defined in the previous section, each of these properties is considered when calculating trust. Firstly, the transitivity property is applied, as indirect functional trust between Adam and Franck is determined from functional trust passed between users via referral trust. Secondly, the composability property of trust is applied as both of the trust paths linking Adam to

Franck are considered. Thirdly, the personalisation property of trust is considered as the basis for a

calculated trust value is the explicit trust values between each user. Lastly, the asymmetry property of

trust is considered as trust is calculated one way out from the ego.

The next section details more specifically how a trust valuation can be determined between Adam and

Franck through the use of a trust metric.

4.4.2 Calculating trust with a trust metric

One of the means by which a trust valuation is calculated between two strangers is via an algorithm,

aligned with the properties of trust. This is the most common means by which trust is calculated within

recommender systems and the method adopted for this research as a result (Victor, 2010). An

algorithm that calculates trust between two users with no explicitly defined trust between them is what

is defined as a trust metric (Massa & Avesani, 2007a; Victor, 2010; Victor et al., 2011).


When calculating trust within a trust network, trust metrics typically follow two generic processes,

namely propagation and aggregation (Victor, 2010; Victor et al., 2011). Both are necessary in

determining a trust value that is accurate and reliable. How trust metrics perform each of these

processes is unique to each one and is considered more closely in further sections. For now,

however, each of these processes is defined and discussed at a high level.

a) Propagation

Victor (2010) defines trust propagation as the prediction of trust from a system user directed

along a defined trust path in a trust network to another system user. In this instance, the

system user requesting trust is called the source user and the system user to whom trust is

being directed is called the target user. Therefore, with reference to Figure 4.6 above, Adam

is the source user and Franck is the target user. The defined trust path in this case is all

possible trust paths that link Adam to Franck. There are two such trust paths in this case. One

trust path links Adam to Craig and Craig to Franck. The other trust path links Adam to Darren,

Darren to Graham, and Graham to Franck.

The propagation of trust relies heavily on the transitivity property of trust (Victor, 2010; Victor

et al., 2011). Therefore, propagation uses the explicitly defined trust values along a trust path

to infer a trust value between the source user and the target user. The most common means

to do this is through multiplication, though there are many other means by which one can infer

a trust rating through propagation (Victor, 2010).

An important concept to note within trust propagation is what is known as trust decay (Victor,

2010). Trust decay states that the further down a trust path trust is propagated or

alternatively, the further away from the source trust is propagated, the less reliable and

trustworthy the propagated trust value becomes (Golbeck, 2005; Massa & Avesani, 2007b;

Victor, 2010). This is, again, true in real life: one is more likely to give greater consideration and weight to the recommendation of a friend's friend than to a recommendation that has been passed along a much longer chain of friends.

Therefore, with reference to the scenario, the trust path linking Adam to Craig and Craig to

Franck is less affected by trust decay than the trust path linking Adam to Darren, Darren to

Graham, and Graham to Franck.

How each propagation algorithm caters for trust decay is specific to the algorithm. However,

most algorithms generally attempt to keep track of how far from the source user trust is being

propagated and show some sort of preference for the shortest possible trust path linking the

source user to the target user (Golbeck, 2005; Massa & Avesani, 2007a; Massa & Avesani,

2007b; Victor, 2010). Again, this is discussed in future sections.
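As a simple illustration of propagation by multiplication and of trust decay, consider the following Python sketch. It is an assumption for illustration only and is not the propagation algorithm nominated later in this research; trust values are assumed to lie on a 0 to 10 scale and are normalised before multiplying, so each additional hop can only reduce the propagated value. The values for the second path are assumed to be consistent with the scenario.

# Illustrative sketch: propagating trust along a single path by multiplication.
def propagate_along_path(path_values, scale=10.0):
    # Multiply the normalised trust values along the path and rescale the result.
    result = 1.0
    for value in path_values:
        result *= value / scale  # every extra hop multiplies in a factor <= 1
    return result * scale

# Path Adam -> Craig -> Franck with explicit trust values 7 and 6 (Figure 4.6).
print(round(propagate_along_path([7, 6]), 2))     # 4.2
# Longer path Adam -> Darren -> Graham -> Franck with assumed values 8, 9, and 8.
print(round(propagate_along_path([8, 9, 8]), 2))  # 5.76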


b) Aggregation

Aggregation is defined as the process of aggregating all propagated trust paths between the

source user and the target user (Victor, 2010; Victor et al., 2011). With reference to the

scenario in Figure 4.6, assume that the propagation algorithm used is a simple average

mean. Therefore, the trust value along the path from Adam to Craig to Franck would be determined as per Calculation 4.1 below.

Calculation 4.1: Aggregation: Average mean

To determine the average mean between source user Adam and target user Franck, where $t_{A,F}$ represents the propagated trust value between Adam and Franck, $t_{A,C}$ the trust value between Adam and Craig, $t_{C,F}$ the trust value between Craig and Franck, and $n_{A,F}$ the number of trust values along the path between Adam and Franck:

$$t_{A,F} = \frac{t_{A,C} + t_{C,F}}{n_{A,F}} = \frac{7 + 6}{2} = 6.5$$

Therefore, the trust path from Adam to Craig to Franck results in a propagated trust value of 6.5 out of 10. Similarly, the trust path from Adam to Darren to Graham to Franck results in a value of 8.3. An aggregation algorithm determines

how to aggregate these two propagated trust values into a single trust value for Adam. This

makes use of the composability property of trust as both of the propagated trust values are

used as sources in determining a final trust value.

As with propagation algorithms, there are many different aggregation algorithms which are

used when aggregating trust values (Victor, 2010; Victor et al., 2011). Some aggregation

algorithms are simple like using the maximum or the average, while others make use of more

complex aggregation algorithms (Victor, 2010; Victor et al., 2011). Again, how individual

algorithms perform the aggregation task is specific to their definition.
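A minimal Python sketch of this aggregation step is given below. It is illustrative only and not the aggregation algorithm nominated later in this research; the two propagated values are those derived in the scenario above.

# Illustrative sketch: aggregating propagated trust values from multiple paths.
def aggregate(propagated_values, strategy="average"):
    # Combine the propagated trust values of all paths into a single trust value.
    if strategy == "average":
        return sum(propagated_values) / len(propagated_values)
    if strategy == "maximum":
        return max(propagated_values)
    raise ValueError("unknown aggregation strategy")

paths = [6.5, 8.3]  # Adam -> Craig -> Franck and Adam -> Darren -> Graham -> Franck
print(round(aggregate(paths, "average"), 2))  # 7.4
print(aggregate(paths, "maximum"))            # 8.3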

4.5 Conclusion

The purpose of this chapter was to begin detailing a framework for the implementation of trust in the

proposed group recommender model. This was done through a background discussion on trust in

which a number of sections were presented. In the first section, a formal definition was given for trust

to ensure a common understanding. In the section following, a number of trust properties were



defined. These trust properties were then leveraged in detailing how trust is typically calculated at a

high level. Specifically, the transitivity property of trust is used in the propagation process of trust

calculation; the composability property of trust is used in the aggregation process of trust calculation;

the asymmetry property of trust is always implemented within trust networks as trust flows outward

from the source; and personalisation is always implemented within an egocentric trust network since it

only represents close trust relationships about the ego.

Therefore, the result of this chapter in the definition of a trust implementation framework is twofold.

A common basis for understanding trust has been established.

A metric-based methodology for inferring trust is to be implemented.

In the next chapters, the high level methodology for calculating trust via a metric is detailed further by

discussing and evaluating a number of state-of-the-art trust calculation and trust-based

recommendation algorithms. These algorithms will practically detail how trust can be inferred and how

it can be leveraged to generate recommendations. The conclusion of these chapters, with reference

to the definition of a trust implementation framework, is that a trust calculation and trust-based

recommendation algorithm is chosen. These algorithms will then be used as the basis for

implementing trust in the proposed group recommender model.


Chapter 5 Trust in recommender systems


5.1 Introduction

In the previous chapter, it was noted that a trust implementation framework would be defined for the

purposes of the proposed group recommender model. That chapter concluded with a high-level overview of calculating trust, including the main processes of propagation and aggregation.

The purpose of this chapter is to continue with a more detailed formation of the trust implementation framework. This is achieved by detailing three different areas.

The presentation and review of a number of state-of-the-art trust calculation algorithms.

The presentation and review of a number of trust-based recommendation algorithms which

implement the defined trust calculation algorithms. These trust-based recommendation

algorithms make use of trust calculation algorithms to determine a predicted rating for a

specific user for a specific recommendation item. Therefore, this area details how

recommender systems leverage trust to generate recommendations.

The determination of a list of requirements for a trust calculation algorithm as well as for a

trust-based recommendation algorithm for this trust implementation framework. This set of requirements forms the basis of comparison between the various trust calculation and trust-based

recommendation algorithms.

Once each trust calculation and trust-based recommendation algorithm has been presented and

compared, an empirical evaluation of these trust-based algorithms is performed in the next chapter.

Therefore, the final trust implementation framework defined for the proposed group recommender

model is based on how well a trust calculation and trust-based recommendation algorithm meets the

specified requirements as well as how well each algorithm performs in comparison to its counterparts.

The conclusion of these next two chapters, consequently, is a nominated trust calculation and trust-based

recommendation algorithm for the trust implementation framework and the proposed group

recommender model.

Structurally, this chapter is defined as follows. In section 5.2, a set of requirements for a suitable trust-

based algorithm for the trust implementation framework is defined. Following is section 5.3, where a

scenario is presented. The purpose of defining this scenario is that it is used as a reference when

detailing each state-of-the-art trust-based algorithm. Section 5.4 then discusses each of the trust-

based algorithms. Thereafter, section 5.5 summarises each trust-based algorithm and analyses the

results. Section 5.6 concludes the chapter.

5.2 Requirements for a trust algorithm

A set of requirements for a suitable trust calculation and trust-based recommendation algorithm is

now identified to be used as a basis for evaluating a number of state-of-the-art trust-based algorithms.


Thereafter an appropriate trust-based algorithm is nominated for the defined trust implementation

framework for this research.

A literature review of previous research reveals the following requirements for a suitable, reliable, and

effective trust calculation and trust-based recommendation algorithm for the proposed group

recommender model. The identified requirements are listed below.

Accuracy: One of the most common base requirements within any recommendation

algorithm is that of accuracy (O’Donovan & Smyth, 2005; Massa & Avesani, 2007a; Massa &

Avesani, 2007b; Shani & Gunawardana, 2011; Victor et al., 2011). Massa and Avesani

(2007a) define accuracy as the error when calculating a recommendation rating for a

particular item for a particular source user. Therefore, the smaller the error between the

calculated recommendation rating and the actual rating, the more accurate an algorithm is

deemed to be (a brief illustration of how accuracy and coverage are typically measured is given after this list of requirements).

Coverage: Another common base requirement for recommender systems, in general, is

coverage (Massa & Avesani, 2007a; Massa & Avesani, 2007b; Shani & Gunawardana, 2011;

Victor et al., 2011). Coverage is defined as the number of user-recommendation item pairs

which can be generated (Massa & Avesani, 2007a; Shani & Gunawardana, 2011; Victor et al.,

2011). The greater the coverage of an algorithm, the greater the probability for a particular

user to obtain a recommendation for a requested item.

Cold start problem: A prevalent issue within recommender systems is what is commonly

known as the “cold start” problem or “bootstrapping” (Massa & Avesani, 2007b; Shani &

Gunawardana, 2011; Victor, 2010; Victor et al., 2011). The cold start problem is defined as

the issue in ensuring that new users are able to have recommendation items generated for

them despite having rated no items or having no trust links defined for them (Massa &

Avesani, 2007b; Quan & Hinze, 2008; Shani & Gunawardana, 2011; Victor, 2010; Victor et

al., 2011). Similarly, this is an issue for new system items which have just been added and

have no ratings attached (Massa & Avesani, 2007b; Shani & Gunawardana, 2011; Victor,

2010; Victor et al., 2011). The importance of this requirement is that if a new user is unable to

have a recommendation generated for them, they are unlikely to continue making use of the

system (Massa & Avesani, 2007b). Therefore, a relevant trust-based algorithm must, in some

form, alleviate this issue.

Data sparsity: Data sparsity is defined as the concern whereby only relatively few items have been rated in the system (Quan & Hinze, 2008; Victor, 2010). Because so

few items have been rated, it makes it more difficult for items to be recommended to other

users in the system (Massa & Avesani, 2007b; Quan & Hinze, 2008; Victor, 2010). As a

result, a requirement for a trust-based algorithm is to help overcome this issue.

Personalisation: The trust-based algorithm for this trust implementation framework should

adopt a local perspective. As discussed in the previous chapter, this means that only the trust

relationship between two individual users is focused upon, as opposed to the entire

community’s trust in a specific user (Golbeck, 2005; Massa & Avesani, 2007a; Massa &


Avesani, 2007b). The greatest advantage to this is that it ensures that trust is personalised

since trust is evaluated from an individual’s perspective (Golbeck, 2005; Massa & Avesani,

2007a).

Degradation: Because trust is inferred along a trust path, the trust-based algorithm must

cater for what is known as trust degradation or decay (Golbeck, 2005; Massa & Avesani,

2007b; Victor, 2010; Victor et al., 2011). As was discussed in the previous chapter, trust

decays the further away a trust value is propagated from the source (Golbeck, 2005; Massa &

Avesani, 2007b; Victor, 2010; Victor et al., 2011). Therefore, this behaviour should be

mitigated in a trust algorithm.

Consider most trustworthy users: When inferring trust, only those users with the highest

trust values should be considered (Golbeck, 2005; Massa & Avesani, 2007a). Therefore, a

trust algorithm must have a means of filtering out those users who are distrusted or trusted

very little when inferring trust between users (Golbeck, 2005; Massa & Avesani, 2007a).

Controversial items: Within recommender systems, there are often what are known as controversial items (Victor, 2010; Victor et al., 2011). These are items which have a mixture of

very strong supporters and very strong detractors, with no outright majority (Victor, 2010;

Victor et al., 2011). Such items need to have accurate and reliable recommendations

calculated by the trust algorithm for the relevant source user, despite the user opinion

differences (Victor, 2010; Victor et al., 2011).
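To make the accuracy and coverage requirements above more concrete, the short Python sketch below computes a mean absolute error between predicted and actual ratings, and coverage as the fraction of requested user–item pairs for which a prediction could be generated. The data, function names, and variable names are illustrative assumptions and are not taken from the cited works.

```python
# Illustrative sketch of the accuracy and coverage measures described above.
# All data below is hypothetical and only serves to demonstrate the calculations.

def mean_absolute_error(predictions):
    """Average absolute difference between predicted and actual ratings.

    `predictions` is a list of (predicted, actual) pairs; a smaller result
    indicates a more accurate algorithm."""
    errors = [abs(predicted - actual) for predicted, actual in predictions]
    return sum(errors) / len(errors)

def coverage(requested_pairs, predicted_pairs):
    """Fraction of requested (user, item) pairs for which a prediction exists."""
    return len(set(predicted_pairs) & set(requested_pairs)) / len(set(requested_pairs))

if __name__ == "__main__":
    # Hypothetical predictions: (predicted rating, actual rating).
    predictions = [(4.0, 5.0), (3.5, 3.0), (2.0, 2.0)]
    requested = [("Adam", "Sun City"), ("Adam", "Gold Reef City"), ("Ben", "Montecasino")]
    predicted = [("Adam", "Sun City"), ("Ben", "Montecasino")]

    print("MAE:", round(mean_absolute_error(predictions), 2))      # 0.5
    print("Coverage:", round(coverage(requested, predicted), 2))   # 0.67
```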

Now that a list of requirements for a trust calculation and trust-based recommendation algorithm is

established, a number of state-of-the-art trust calculation and trust-based recommendation algorithms

from literature can be presented and reviewed. However, before detailing these algorithms, the next section defines a reference scenario, which is used to illustrate the practical implementation of each trust calculation and trust-based recommendation algorithm.

5.3 Reference scenario

The scenario which serves as the point of reference for the defined trust calculation and trust-based recommendation algorithms is represented in Figure 5.1 below:


[Figure 5.1 depicts a trust network in which the source user Adam is linked to the target user Darren along three trust paths: Adam trusts Franck (8), Franck trusts Graham (7), and Graham trusts Darren (9); Adam trusts Ben (7), Ben trusts Craig (8), and Craig trusts Darren (9); and Adam trusts Ed (9), Ed trusts Harry (10), and Harry trusts Darren (8). A further user, Ivan, is not connected to Adam by any trust path. The figure also lists the tourist attraction ratings of three users:

Adam – Apartheid Museum: 5 stars; Gold Reef City: 4 stars; Montecasino: 4 stars; Voortrekker Monument: 3 stars.

Ivan – Orlando Towers: 5 stars; Gold Reef City: 4 stars; Montecasino: 3 stars; Sun City: 3 stars; Voortrekker Monument: 3 stars.

Darren – Union Buildings: 5 stars; Sun City: 4 stars; Church Square: 3 stars; Lilliesleaf Farm: 3 stars; Emmarentia Dam: 1 star.]

Figure 5.1: Reference scenario for trust algorithms

Assume that a user Adam is a tourist visiting Johannesburg. On a particular day, he is deciding whether or not to visit Sun City. As a result, he gets out his mobile device with an online tourist guide application and queries it. Upon looking at the application, he notes that none of his closest friends, Franck, Ben, or Ed, has previously been there. A casual acquaintance he once met, Darren, is the only one who has been there before. Darren gave Sun City 4 stars as well as a positive review. The online tourist guide application now needs to consider the relevance of Darren’s rating of Sun City with reference to Adam, and generate a recommendation indicating whether Adam should visit or not.

It is in such situations where trust-based recommendation algorithms can aid in generating an

accurate and reliable recommendation for a user. However, before reviewing such algorithms, there

are a few things to note for future reference in terms of the diagram itself.

The edges between each user represent a trust valuation from 0 to 10, with 0 being total

distrust and 10 being total trust.

There is an additional user Ivan who does not belong to any trust path from the source

user, Adam. However, Ivan does share a number of common tourist recommendation

ratings with Adam. In addition, he has attributed a rating of 3 stars to Sun City.

In the next section, this scenario is used as a basis for defining and presenting a number of state-of-the-art trust-based algorithms.


5.4 State-of-the-art trust-based recommendation algorithms

In this section, five state-of-the-art trust-based recommendation algorithms are reviewed and evaluated, namely:

1. Trust-based weighted mean. This is a trust-based recommendation algorithm developed by

Golbeck (2005). In order to calculate trust, this algorithm makes use of the TidalTrust

algorithm, again developed by Golbeck (2005).

2. Trust-based collaborative filtering. This trust-based recommendation algorithm is

developed by Massa and Avesani (2007a). This algorithm uses the MoleTrust algorithm,

developed by Massa and Avesani (2007a), to calculate trust.

3. Trust-based filtering. The trust-based filtering algorithm is a trust calculation algorithm which

makes use of profile and item level trust (O’Donovan & Smyth, 2005). This algorithm was

developed by O’Donovan and Smyth (2005).

4. Structural trust inference. This trust calculation algorithm calculates trust between two

users by inferring trust from the recommender system, instead of leveraging explicit trust

(O’Doherty, 2012). This algorithm was developed by O’Doherty (2012).

5. EnsembleTrustCF. This trust-based recommendation algorithm was developed by Victor

(2010). It makes use of the MoleTrust algorithm to calculate trust (Victor, 2010).

The purpose of analysing each of these trust-based algorithms is to determine a suitable trust-based

recommendation algorithm for use within the trust implementation framework for the proposed group

recommender model. In this chapter, the suitability of an algorithm is determined by evaluating

whether each specific algorithm meets the requirements identified in Section 5.2.

With regards to the rest of this section, the analysis of the first three trust-based algorithms follows a

fixed structure with a number of subsections. The first subsection begins with a background

discussion of both the trust calculation and trust-based recommendation algorithms. This is then

followed by presenting the trust calculation algorithm used by the particular trust-based

recommendation algorithm. The third subsection follows with a presentation of the trust-based

recommendation algorithm. The fourth subsection presents an example application with reference to the scenario outlined in Section 5.3. Subsection five concludes with a summary and an evaluation as to

how the algorithm meets or does not meet the identified list of requirements.

The next two trust-based recommendation algorithms differ slightly in structure. The reasons for this

are that the structural trust inference algorithm is only a trust calculation algorithm and does not have

a trust-based recommendation algorithm component. Conversely, the EnsembleTrustCF algorithm is

a trust-based recommendation algorithm which makes use of the MoleTrust algorithm to calculate

trust. However, the MoleTrust algorithm is already defined in subsection 5.4.2 and so its repeated

definition is not required.


Therefore, the structure of the subsections on the structural trust inference algorithm as well as the

EnsembleTrustCF algorithm begins with a background discussion of the algorithm. This proceeds into

the definition and presentation of the relevant algorithm. Next, an example application is given with

reference to the scenario in Section 5.3. The subsection then concludes with a summary and

evaluation of the trust-based algorithm with reference to the identified list of requirements.

The results in this chapter are summarised in the following section in order to motivate the best choice

of a trust calculation and trust-based recommendation algorithm for the proposed group recommender

model.

5.4.1 Trust-based weighted mean with TidalTrust

a) Background

TidalTrust is a trust algorithm created by Jennifer Golbeck (2005). The name is derived from

the fact that when trust is propagated from the source user, the trust values go out to the

target user and return to the source user, like a tide (Golbeck, 2005). The TidalTrust algorithm

has its basis in two foundational premises (Golbeck, 2005; Victor, 2010).

The shorter the trust path over which trust is propagated, the more accurate the trust

value (Golbeck, 2005; Victor, 2010).

Nodes linked by high trust values on a trust path generate a more accurate trust

value (Golbeck, 2005; Victor, 2010).

The TidalTrust algorithm caters for the first premise by ensuring that the shortest propagation

path between the source user and the target user is selected (Golbeck, 2005; Victor, 2010). In

the TidalTrust algorithm, the shortest propagation path is not bound by a fixed number (Golbeck, 2005; Victor, 2010). For example, the algorithm does not impose a rule that trust cannot be propagated further than four nodes out (Golbeck, 2005; Victor, 2010). The reason for this is that such a fixed bound has the potential to exclude all propagation paths (Golbeck, 2005; Victor, 2010). Therefore, only the shortest paths that actually exist are chosen.

The TidalTrust algorithm caters for the second premise by making use of a max variable

(Golbeck, 2005; Victor, 2010). This variable ensures that the only trust paths considered

when calculating trust are those having trust values above the max threshold (Golbeck, 2005;

Victor, 2010). Golbeck (2005) defines the max threshold variable as the maximum possible

trust value such that when applied as a threshold for each node, at least one propagation

path can be formed from the source user to the target user.

The TidalTrust algorithm is implemented in the trust-based weighted mean algorithm, used in

the development of Golbeck’s movie recommender system, FilmTrust (Golbeck & Hendler,

2006). FilmTrust (Golbeck & Hendler, 2006) is a system whereby users are able to attribute


ratings to movies they have seen as well as connect with other users in the system that they

may know and trust or those with similar interests. The system then leverages the trust

connections of a user to calculate a particular recommended rating for those movies a user

has not yet seen (Golbeck, 2005; Golbeck & Hendler, 2006). Trust is leveraged in the system when issuing movie recommendations by making use of the TidalTrust algorithm.

b) TidalTrust: Formula

The formula for TidalTrust is listed and defined in Formula 5.1 below.

The TidalTrust algorithm works by starting at the source node and identifying the source node’s trusted neighbours. The algorithm then moves to these trusted nodes and performs the same process by considering their trusted neighbours. This process continues until the target node is reached (Golbeck, 2005).

In the midst of this process of reaching the target node, another factor is taken into account. As each node is reached by the algorithm, the path strength is considered. The path strength is defined as the minimum trust rating along the path from the source node to the current node. The path strength is continually updated until the target node is reached; the final path strength of a path is therefore the smallest trust value found along it. This process continues until all possible shortest trust paths linking the source node to the target node have been identified. It is at this point that the max variable, or maximum threshold, is defined. This value is the maximum of all the identified path strengths, i.e. the largest trust value for which at least one complete trust path from the source to the target can still be found. Therefore, all trust paths with a path strength at or above this maximum threshold are considered by the TidalTrust algorithm when propagating trust (Golbeck, 2005).

Formula 5.1: TidalTrust (Golbeck, 2005; Victor, 2010)

The trust value between a source user a and a target user s is defined as:

$$ t_{a,s} = \frac{\sum_{u \in TU(a)} t_{a,u} \cdot t_{u,s}}{\sum_{u \in TU(a)} t_{a,u}} $$

Where TU(a) represents all users trusted by source user a whose trust values are greater than or equal to the max threshold (Golbeck, 2005; Victor, 2010).
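The following Python sketch illustrates the TidalTrust-style procedure just described, applied to the trust network of Figure 5.1: all shortest paths from the source to the target are enumerated, the max threshold is derived from their path strengths (as in Calculation 5.1 later in this chapter), and trust is then propagated as a weighted average along the edges that meet the threshold. It is an illustrative simplification under these assumptions, not Golbeck’s (2005) reference implementation, and the function names are the author’s own.

```python
# A minimal sketch of a TidalTrust-style calculation on the Figure 5.1 network.
from collections import deque

TRUST = {
    "Adam":   {"Franck": 8, "Ben": 7, "Ed": 9},
    "Franck": {"Graham": 7},
    "Ben":    {"Craig": 8},
    "Ed":     {"Harry": 10},
    "Graham": {"Darren": 9},
    "Craig":  {"Darren": 9},
    "Harry":  {"Darren": 8},
    "Darren": {},
}

def shortest_paths(graph, source, target):
    """Return every shortest path from source to target (breadth-first search)."""
    paths, queue, best = [], deque([[source]]), None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break                          # all shortest paths already collected
        node = path[-1]
        if node == target:
            best = len(path)
            paths.append(path)
            continue
        for neighbour in graph.get(node, {}):
            if neighbour not in path:
                queue.append(path + [neighbour])
    return paths

def max_threshold(graph, paths):
    """Maximum of the path strengths (minimum trust value on each path,
    excluding the final edge into the target, as in Calculation 5.1).
    Assumes the target is not a direct neighbour of the source."""
    strengths = [min(graph[a][b] for a, b in zip(p[:-2], p[1:-1])) for p in paths]
    return max(strengths)

def propagate(graph, source, target, threshold):
    """Weighted-average trust propagation along edges meeting the threshold.
    The scenario graph is acyclic, so no visited-set bookkeeping is needed here."""
    if target in graph[source]:
        return graph[source][target]
    numerator = denominator = 0.0
    for neighbour, weight in graph[source].items():
        if weight < threshold:
            continue
        inferred = propagate(graph, neighbour, target, threshold)
        if inferred is not None:
            numerator += weight * inferred
            denominator += weight
    return numerator / denominator if denominator else None

if __name__ == "__main__":
    paths = shortest_paths(TRUST, "Adam", "Darren")
    threshold = max_threshold(TRUST, paths)
    print(threshold)                                   # 9
    print(propagate(TRUST, "Adam", "Darren", threshold))  # 8.0
```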


c) Trust-based weighted mean: Formula

The formula for the trust-based weighted mean algorithm is defined in Formula 5.2 below.

Formula 5.2: Trust-based weighted mean (Golbeck, 2005; Golbeck & Hendler, 2006; Victor, 2010)

The recommended rating for a source user a and a target item s is defined as:

$$ r_{a,s} = \frac{\sum_{u \in TU(a)} t_{a,u} \cdot r_{u,s}}{\sum_{u \in TU(a)} t_{a,u}} $$

Where TU(a) represents all users trusted by a whose trust values are greater than or equal to the max threshold (Golbeck, 2005; Golbeck & Hendler, 2006; Victor, 2010).

In this trust-based recommendation formula, trust is used as a weighting applied to each trusted user’s rating of the target item. Therefore, the greater the trust placed in a user, the closer the recommended rating is to that user’s rating (Golbeck, 2005; Golbeck & Hendler, 2006; Victor, 2010).

d) Example Application

In this example application, assume that Adam has an implementation of the trust-based

weighted mean algorithm on his online tourist guide system. The system needs to generate a

rating for Adam, indicating whether he should visit Sun City or not, by making use of the trust

path between Adam and Darren, the only one who has been to Sun City.

Firstly, because Darren is not part of Adam’s immediate trust network, a trust value between

them needs to be calculated by making use of the TidalTrust algorithm. Therefore, the first

step to be performed by the algorithm is to determine the shortest trust path or paths between

Adam and Darren. With reference to Figure 5.1, the system notes that all possible trust paths have the same length, with two intermediate nodes between Adam and Darren. Therefore, because these are already the shortest possible paths linking Adam and Darren, none of them has to be excluded.

The second step is to define the max variable. As per the previous section, this value holds the maximum path strength. This is calculated as per Calculation 5.1 below. Note that, for the purposes of this example, the variable $t_{A,F}$ refers to the trust between Adam and Franck. All explicit trust values are defined using similar notation.

Calculation 5.1: Max variable for TidalTrust algorithm

$$ \max = \max\big(\min(t_{A,F}, t_{F,G}),\ \min(t_{A,B}, t_{B,C}),\ \min(t_{A,E}, t_{E,H})\big) = \max\big(\min(8, 7),\ \min(7, 8),\ \min(9, 10)\big) = \max(7, 7, 9) = 9 $$


Because there is only one trust path with a strength of 9, this is the only path considered by the TidalTrust algorithm. Now that the trust paths as well as the path strengths have been defined, the system is able to calculate the trust value between source user Adam and target user Darren. This calculation is defined in Calculation 5.2 below.

Calculation 5.2: TidalTrust implementation

$$ t_{A,D} = \frac{t_{A,E} \cdot t_{E,D}}{t_{A,E}} = \frac{9 \cdot \dfrac{t_{E,H} \cdot t_{H,D}}{t_{E,H}}}{9} = \frac{9 \cdot \dfrac{10 \cdot 8}{10}}{9} = \frac{9 \cdot 8}{9} = 8 $$

Therefore, the propagated trust rating between Adam and Darren, according to the TidalTrust algorithm, is calculated to be 8. Now that a trust value between Adam and Darren has been calculated, a recommended rating for Sun City needs to be determined. Therefore, with reference to Figure 5.1, the recommended rating for Adam for Sun City (the item S) is calculated as per Calculation 5.3 below. Note how the trust value from Calculation 5.2 is used as the weighting input for the trust-based weighted mean algorithm.

Calculation 5.3: Trust-based weighted mean implementation

$$ r_{A,S} = \frac{\sum_{u \in TU(A)} t_{A,u} \cdot r_{u,S}}{\sum_{u \in TU(A)} t_{A,u}} = \frac{t_{A,D} \cdot r_{D,S}}{t_{A,D}} = \frac{8 \cdot 4}{8} = 4 $$

Therefore, the final recommended tourist rating for Adam for Sun City is 4 stars.
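As a small illustration, the sketch below reproduces Calculation 5.3 in Python: the trust propagated to Darren (8, from Calculation 5.2) weights his 4-star rating of Sun City. The function and variable names are illustrative assumptions, not part of Golbeck’s (2005) work.

```python
# Trust-based weighted mean (Formula 5.2) applied to the scenario values.

def trust_weighted_rating(trust_in_raters, item_ratings):
    """Trust-weighted mean of the ratings given by trusted raters.

    `trust_in_raters` maps rater -> (propagated) trust from the source user.
    `item_ratings`    maps rater -> that rater's rating of the target item.
    Only raters present in both mappings contribute to the recommendation."""
    raters = set(trust_in_raters) & set(item_ratings)
    numerator = sum(trust_in_raters[u] * item_ratings[u] for u in raters)
    denominator = sum(trust_in_raters[u] for u in raters)
    return numerator / denominator

if __name__ == "__main__":
    trust_from_adam = {"Darren": 8}                # propagated via TidalTrust (Calculation 5.2)
    sun_city_ratings = {"Darren": 4, "Ivan": 3}    # Ivan is unreachable by trust, so he carries no weight
    print(trust_weighted_rating(trust_from_adam, sun_city_ratings))  # 4.0
```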

e) Evaluation

The contribution as well as how the trust-based weighted mean algorithm meets the

requirements defined in Section 5.2 are summarised in Table 5.1 below. However, it is noted

that in the following chapter, an empirical comparison of all the trust-based recommendation

algorithms is discussed. Therefore, some of the inherent strengths and weaknesses of each

algorithm are then expounded upon further.



Algorithm: Trust-based weighted mean with TidalTrust

Contributions: Foundational implementation of trust inference on trust networks based on formally defined trust properties. Accuracy is greater when trust paths are shorter and when the trust values along a trust path are higher (Golbeck, 2005; Victor, 2010); as a result, shorter trust paths are prioritised and only those paths which have a trust value greater than a specified threshold, defined by the max variable, are considered (Golbeck, 2005; Victor, 2010). Dynamic depth and max calculation, i.e. determined as trust is being calculated (Golbeck, 2005; Victor, 2010).

Accuracy: Accuracy is catered for in this algorithm by implementing the following two observations. The shorter the trust path over which trust is propagated, the more accurate the trust value (Golbeck, 2005; Victor, 2010); implemented by only selecting the shortest trust paths between the source user and target user (Golbeck, 2005; Victor, 2010). Nodes linked by high trust values on a trust path also generate a more accurate trust value (Golbeck, 2005; Victor, 2010); implemented by making use of a max variable which ensures that only those trust paths which exceed a threshold are considered (Golbeck, 2005; Victor, 2010).

Coverage: All possible trust paths linking the source user to the target user are considered and no maximum depth is defined at the outset. In fact, the depth is dynamic and initialised to the first identified trust path linking the two users (Golbeck, 2005).

Cold start: The only measure used to alleviate the cold start problem is the use of trust. As noted by Massa and Avesani (2007a), new users are more likely to have ratings generated for them by declaring just one trust relationship than by rating a single item. However, this does mean that the user still has to identify at least one trust relationship.

Data sparsity: Again, by making use of trust, the issue of data sparsity is alleviated. This is because trust relationships link one to many potential users, increasing the likelihood of obtaining a recommendation (Massa & Avesani, 2007a).

Personalisation: All trust path calculations are done from the perspective of the source user (Golbeck, 2005). Therefore, trust is personalised (Golbeck, 2005).

Degradation: Mitigated by ensuring that only the shortest trust paths are considered (Golbeck, 2005).

Trustworthy users: Catered for with the max variable, which ensures that only trust paths which have a trust value above a certain threshold are considered (Golbeck, 2005). Therefore, all lower valued trust paths are omitted (Golbeck, 2005).

Controversial items: The algorithm is able to rate controversial items because the trust path is personalised, i.e. only those trust relationships from the user’s perspective are considered. Therefore, this takes into account a user’s personal interests, as trust is a measure of similarity (Golbeck, 2005).

Table 5.1: Trust-based weighted mean with TidalTrust evaluation

5.4.2 Trust-based collaborative filtering with MoleTrust

a) Background

MoleTrust is a trust propagation algorithm created by Massa and Avesani (2007b). The

MoleTrust algorithm was applied and implemented in a recommender system, again

developed by Massa and Avesani (2007b), called the Trust Aware Recommender System

(TARS). The focus of this system was to combine the methodologies of collaborative filtering

and trust, via MoleTrust. By doing this, the potential effectiveness of trust in overcoming the

inherent shortcomings of collaborative filtering could be investigated. The main shortcomings

of collaborative filtering identified were the necessity for profile similarity as well as the difficulty of generating recommendations for new system users (Massa & Avesani, 2007a; Massa & Avesani, 2007b).

The first shortcoming relates to the requirements for profile similarity. In collaborative filtering

recommender systems, similarity is based on how closely matched the rating histories are between two users. Therefore, the first requirement is that users need to have rated many recommendation items. The second requirement is that users need to have rated a number

of common recommendation items in order for similarity to be calculated. These requirements

can limit the number of quality recommendations generated by the system, especially for new

users who are required to rate multiple recommendation items before a recommendation can

be calculated for them (Massa & Avesani, 2007b).

In its implementation, MoleTrust is able to combat both of these shortcomings by making use

of trust (Massa & Avesani, 2007b). With regards to the first shortcoming, users do not have to

rate many recommendation items as profile similarity is not used to generate

recommendations. In terms of the second shortcoming, even by just having a new user

assign one trust valuation between themselves and another user, the system would have the

ability to calculate recommendations for that particular user (Massa & Avesani, 2007b).

In terms of the algorithm itself, MoleTrust is quite similar to the TidalTrust algorithm in its

implementation. However, there are a few things that make the algorithm distinct (Massa &

Avesani, 2007a). Firstly, the MoleTrust algorithm has what it titles a “tuneable trust

propagation horizon”. This is a fixed value that defines how far out the algorithm propagates

trust. For example, if the tuneable trust propagation horizon value is set to 3, then the


algorithm only looks for nodes up to three levels out. As with Golbeck (2005), the logic of this

is to ensure that one does not go out too far and then have a resultant decrease in accuracy

and quality. Another difference is that the trust threshold, i.e. the minimum trust rating that all nodes along a trust path have to exceed, is predefined in the MoleTrust algorithm.

The MoleTrust algorithm is implemented in the trust-based collaborative filtering algorithm.

This algorithm is one which mimics the traditional collaborative filtering algorithm (Victor,

2010). However, trust, instead of similarity, is used as the basis for calculating a

recommendation (Victor, 2010).

b) MoleTrust: Formula

The formula for MoleTrust is defined in Formula 5.3 below.

Formula 5.3: MoleTrust (Massa & Avesani, 2007a; Victor, 2010)

The trust value between a source user a and a target user s is defined as:

$$ t_{a,s} = \frac{\sum_{u \in TU(a)} t_{a,u} \cdot t_{u,s}}{\sum_{u \in TU(a)} t_{a,u}} $$

Where TU(a) represents all users trusted by a whose trust values are greater than the trust threshold, max, and who are within the distance of the trust propagation horizon, th. If no such paths exist between the two users, then $t_{a,s} = 0$ (Victor, 2010).

In short, the MoleTrust algorithm begins at the source user and then walks along the user’s

trust network, propagating trust at each step, in an attempt to find a trust path linking the

source user to the target user. The final trust rating that is output by the algorithm is the direct

trust assigned between the source user and their direct trust neighbours, normalised by the

trust values assigned between those users along the trust path to the target user (Massa &

Avesani, 2007a).

The way the MoleTrust algorithm does this is by making use of a two-step process.

In the first step of the algorithm, the aim is to transform the trust network of the source user into a “directed acyclic graph”. A directed acyclic graph is formed by only considering those trust relationships which flow outward from the source user. Therefore, all trust values leading back to the source are ignored and not taken into account by the algorithm. The main purpose in performing this step is that it makes the MoleTrust algorithm time and process efficient. This is because each node is only



visited once in the trust computation. Once this graph has been created, the second

step of the algorithm is performed (Massa & Avesani, 2007a).

In the second step, the algorithm walks the newly formed graph in order to determine a trust value between the source user and the target user (Massa & Avesani, 2007a). In this step, a similar process to that of the TidalTrust algorithm is performed. All the direct neighbours, who have had trust values explicitly assigned to them by the source user, are considered. These are all nodes at distance one. Next, all the neighbours to whom those distance-one nodes have explicitly assigned trust values are considered. Therefore,

these are all nodes at distance two. A similar pattern is followed until the target user

is reached or until the trust propagation horizon is reached (Massa & Avesani,

2007a).

An important consideration to note in the second step is the use of the trust value threshold.

The use of this threshold means that only those users who have trust values above a defined

trust threshold value are referenced. This is to ensure that no untrustworthy users are

considered when determining the trust value between the source user and the target user

(Massa & Avesani, 2007a).

c) Formula: Trust-based collaborative filtering

The formula for trust-based collaborative filtering is defined in Formula 5.4 below.

Formula 5.4: Trust-based collaborative filtering (Victor, 2010)

The recommended rating for a source user a and a target item i is defined as:

$$ r_{a,i} = \bar{r}_a + \frac{\sum_{u \in R_T} t_{a,u}\,(r_{u,i} - \bar{r}_u)}{\sum_{u \in R_T} t_{a,u}} $$

Where

$\bar{r}_a$ represents the mean of all items other than i rated by user a.

$R_T$ represents the subset of users who are trusted by user a and who have attributed a rating to item i.

$t_{a,u}$ represents the trust rating between users a and u.

$r_{u,i}$ represents the rating attributed to item i by user u.

$r_{u,i} - \bar{r}_u$ represents the difference between the rating attributed to item i by user u and the mean of all ratings attributed by user u.


d) Example application

The example application is revisited again with reference to Figure 5.1 above. Assume that

the trust propagation horizon has been set to 3 and that the maximum threshold has been set

to 6. Therefore, the target user Darren falls within the bounds of the trust propagation horizon, and all possible paths leading to Darren from source user Adam are considered, as each trust value along these paths is greater than 6.

The trust values linking Adam to Darren need to be considered first. Therefore, the trust value between Adam and Graham, represented by $t_{A,G}$, is calculated, in accordance with the MoleTrust algorithm, as per Calculation 5.4 below.

Calculation 5.4: MoleTrust implementation 1a

$$ t_{A,G} = \frac{t_{A,F} \cdot t_{F,G}}{t_{A,F}} = \frac{8 \cdot 7}{8} = 7 $$

The middle trust path between Adam and Craig, represented by $t_{A,C}$, and the bottom trust path between Adam and Harry, represented by $t_{A,H}$, are calculated in a similar manner. Therefore:

$t_{A,C} = 8$

$t_{A,H} = 10$

The trust value between Adam and Darren, in accordance with the MoleTrust algorithm, is then calculated as specified in Calculation 5.5 below.

Calculation 5.5: MoleTrust implementation 1b

$$ t_{A,D} = \frac{t_{A,G} \cdot t_{G,D} + t_{A,C} \cdot t_{C,D} + t_{A,H} \cdot t_{H,D}}{t_{A,G} + t_{A,C} + t_{A,H}} = \frac{7 \cdot 9 + 8 \cdot 9 + 10 \cdot 8}{7 + 8 + 10} = \frac{63 + 72 + 80}{25} = \frac{215}{25} = 8.6 $$


Therefore, the trust value between Adam and Darren, when calculated with the MoleTrust algorithm, is 8.6.
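The following Python sketch illustrates the level-by-level MoleTrust-style propagation described above, applied to the Figure 5.1 network with a trust propagation horizon of 3 and a trust threshold of 6; it reproduces the value of 8.6 from Calculation 5.5. The code is an illustrative simplification under those assumptions, not Massa and Avesani’s reference implementation, and the names are the author’s own.

```python
# A minimal sketch of MoleTrust-style, level-by-level trust propagation.

TRUST = {
    "Adam":   {"Franck": 8, "Ben": 7, "Ed": 9},
    "Franck": {"Graham": 7},
    "Ben":    {"Craig": 8},
    "Ed":     {"Harry": 10},
    "Graham": {"Darren": 9},
    "Craig":  {"Darren": 9},
    "Harry":  {"Darren": 8},
    "Darren": {},
}

def moletrust(graph, source, target, horizon=3, threshold=6):
    """Propagate trust from `source` up to `horizon` steps; return the trust in `target`."""
    trust = {source: None}          # propagated trust per visited node (visited once only)
    frontier = {source}
    for _ in range(horizon):
        incoming = {}               # node -> list of (predecessor trust, edge value)
        for node in frontier:
            weight = trust[node] if trust[node] is not None else 1.0  # the source weights itself fully
            for neighbour, edge in graph.get(node, {}).items():
                if edge > threshold and neighbour not in trust:
                    incoming.setdefault(neighbour, []).append((weight, edge))
        if not incoming:
            break
        for node, contributions in incoming.items():
            numerator = sum(w * e for w, e in contributions)
            denominator = sum(w for w, _ in contributions)
            trust[node] = numerator / denominator   # normalised, as in Calculation 5.5
        frontier = set(incoming)
        if target in trust:
            return trust[target]
    return None                      # target not reached within the horizon

if __name__ == "__main__":
    print(moletrust(TRUST, "Adam", "Darren"))  # 8.6
```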

Now that the trust value between Adam and Darren is determined, the recommended rating for Sun City needs to be calculated. This is done by making use of the trust-based collaborative filtering algorithm, as presented in Calculation 5.6 below.

Calculation 5.6: Trust-based collaborative filtering implementation

$$ r_{A,S} = \bar{r}_A + \frac{\sum_{u \in R_T} t_{A,u}\,(r_{u,S} - \bar{r}_u)}{\sum_{u \in R_T} t_{A,u}} = \bar{r}_A + \frac{t_{A,D}\,(r_{D,S} - \bar{r}_D)}{t_{A,D}} = 4 + \frac{8.6\,(4 - 3.2)}{8.6} = 4 + 0.8 = 4.8 $$

Therefore, with the use of trust-based collaborative filtering, Adam has a recommended rating of 4.8 stars for Sun City returned to him.
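The short sketch below reproduces Calculation 5.6 in Python: a mean-centred, trust-weighted prediction in which the MoleTrust value of 8.6 from above weights Darren’s deviation from his own mean rating. The function and parameter names are illustrative assumptions.

```python
# Trust-based collaborative filtering prediction (Formula 5.4) for the scenario values.

def predict(source_mean, trusted_raters):
    """Mean-centred, trust-weighted prediction for one item.

    `source_mean`    is the source user's mean rating over their other items.
    `trusted_raters` is a list of (trust, rating_of_item, rater_mean) tuples for the
                     trusted users who have rated the target item."""
    numerator = sum(t * (r - mean) for t, r, mean in trusted_raters)
    denominator = sum(t for t, _, _ in trusted_raters)
    return source_mean + numerator / denominator

if __name__ == "__main__":
    adam_mean = (5 + 4 + 4 + 3) / 4          # 4.0, Adam's mean rating (Figure 5.1)
    darren_mean = (5 + 4 + 3 + 3 + 1) / 5    # 3.2, Darren's mean rating
    print(round(predict(adam_mean, [(8.6, 4, darren_mean)]), 1))  # 4.8
```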

e) Evaluation

The contribution and evaluation of the trust-based collaborative filtering algorithm is presented

in Table 5.2 below.

Algorithm: Trust-based collaborative filtering with MoleTrust

Contributions: Fixed trust propagation horizon and threshold values before the algorithm is used to calculate recommendations (Massa & Avesani, 2007a; Massa & Avesani, 2007b). This ensures that the algorithm does not spend unnecessary resources calculating a trust path that is too far out and, as a result, less relevant (Victor, 2010). Trust values are only calculated once and do not have to be determined each time a node is referenced (Victor, 2010).

Accuracy: Accuracy is catered for in this algorithm by following the observations of Golbeck (2005). Whilst a fixed trust threshold value is introduced, the implementation of selecting the shortest trust path is slightly different. Because it was identified that shorter paths are more accurate, Massa and Avesani (2007a) have a tuneable trust propagation horizon which defines the depth. By making this configurable and not too big, the algorithm does not go too far out and results should be more accurate (Massa & Avesani, 2007a; Victor, 2010).

Coverage: Whilst increasing accuracy, the tuneable trust propagation horizon may affect coverage. This is because there may be trust paths linking the source user to the target user outside of the set horizon, meaning that they are excluded (Massa & Avesani, 2007a; Victor, 2010).

Cold start: The only measure used to alleviate the cold start problem is the use of trust. As noted by Massa and Avesani (2007a), new users are more likely to have ratings generated for them by declaring just one trust relationship than by rating a single item. However, this does mean that the user has to identify at least one trust relationship.

Data sparsity: Again, by making use of trust, the issue of data sparsity is alleviated. This is because trust relationships link one to many potential users, increasing the likelihood of obtaining a recommendation (Massa & Avesani, 2007a).

Personalisation: All trust path calculations are done from the perspective of the source user (Golbeck, 2005). Therefore, trust is personalised (Golbeck, 2005).

Degradation: Ensured by considering only the shortest trust paths (Massa & Avesani, 2007a).

Trustworthy users: As with TidalTrust, a predefined, fixed threshold variable is introduced. This ensures that only trust paths which have a trust value above a certain threshold are considered (Golbeck, 2005; Massa & Avesani, 2007a).

Controversial items: The algorithm is able to rate controversial items because the trust path is personalised, i.e. only those trust relationships from the user’s perspective are considered. Therefore, this takes into account a user’s interests, as trust is a measure of similarity (Golbeck, 2005).

Table 5.2: Trust-based collaborative filtering with MoleTrust evaluation

5.4.3 Trust-based filtering with profile and item level trust

a) Background

O’Donovan and Smyth (2005) make use of a different approach in comparison to the two

afore-mentioned algorithms. Whereas both TidalTrust (Golbeck, 2005) and MoleTrust (Massa

& Avesani, 2007a) require explicit trust values to be assigned, O’Donovan and Smyth (2005)

realise that this may not always be possible. Therefore, this is catered for by inferring trust

values between nodes (O’Donovan & Smyth, 2005).

In this algorithm, a user is determined to be trustworthy based upon their previous rating

behaviour, that is, the accuracy of the recommendations that they have contributed for other users

(O’Donovan & Smyth, 2005). Intuitively, if they have made accurate recommendations in the

past, then they are considered to be trustworthy in the future (O’Donovan & Smyth, 2005;

Victor, 2010).


O’Donovan and Smyth (2005) determine the trustworthiness of a particular user

algorithmically by introducing two determinants of trustworthiness namely, profile and item

level trust. Profile level trust is defined as the trustworthiness of a particular user within the

overall community (O’Donovan & Smyth, 2005). Item level trust is more granular and is

defined as the trustworthiness of a particular user when making a recommendation for a

particular item (O’Donovan & Smyth, 2005).

In their algorithm, O’Donovan and Smyth (2005) distinguish between producers and

consumers. Producers are defined as the ones who contribute recommendation information

to the overall recommendation (O’Donovan & Smyth, 2005). A consumer is defined as the

one for whom such a recommendation is being calculated (O’Donovan & Smyth, 2005). In

such a system, there are typically multiple producers of information and each bit of the

recommendation information contributed by them is taken into account by the algorithm

(O’Donovan & Smyth, 2005).

An algorithm which implements profile and item level trust makes use of these trust metrics to

filter out untrusted users (O’Donovan & Smyth, 2005; Victor, 2010). Instead of using trust as a

weighting when determining a recommendation, all those users which have a profile or item

level trust below a predefined threshold are excluded from the algorithm (O’Donovan &

Smyth, 2005; Victor, 2010). Therefore, only those users which have a positive similarity, as

determined by the Pearson correlation coefficient, and who exceed the threshold for profile

and item level trust, are considered (O’Donovan & Smyth, 2005; Victor, 2010).

b) Profile and item level trust: Formula

Before the final formulae for profile and item level trust, as defined by O’Donovan and Smyth (2005), can be given, a number of background definitions need to be covered. The first of these formally defines the correctness of a rating contributed by a producer. This definition is given in Formula 5.5.

Formula 5.5: Profile and item level trust – Correctness (O’Donovan & Smyth, 2005)

The correctness of a recommendation for an item i by a producer p for a consumer c is defined as:

$$ \mathrm{Correct}(i, p, c) \iff |p(i) - c(i)| < \epsilon $$

Where $\epsilon$ represents the maximum allowed difference between the recommended rating, p(i), and the actual rating, c(i).

Fundamentally, therefore, the correctness of a recommendation as contributed by a producer is determined by ensuring that the difference between the recommendation and the


consumer’s actual rating is less than the maximum allowable difference. By using

the above definition as a base definition, a further two definitions can be defined. These are

presented in Formula 5.6.

Profile level trust can now be formally defined by making use of the above definitions. This is

presented in Formula 5.7 below.

Essentially, the profile level trust for a specific producer is the proportion of correct recommendation contributions they have made out of all the recommendations that they have contributed (O’Donovan & Smyth, 2005). However, this gives quite a general level of trust for a specific user (O’Donovan & Smyth, 2005). Indeed, it is plausible that a

(O’Donovan & Smyth, 2005). This is catered for by O’Donovan and Smyth’s (2005) item level

trust algorithm, which is defined in Formula 5.8 below.

Formula 5.6: Profile and item level trust – User recommendation subsets (O’Donovan & Smyth, 2005)

The total recommendation set that a producer p has contributed for k consumers, c, and items, i, is defined as:

$$ \mathrm{RecSet}(p) = \{ (c_1, i_1), \ldots, (c_k, i_k) \} $$

The consequent subset of RecSet(p) for which producer p has contributed correct recommendations is defined as:

$$ \mathrm{CorrectSet}(p) = \{ (c_k, i_k) \in \mathrm{RecSet}(p) : \mathrm{Correct}(i_k, p, c_k) \} $$

Formula 5.7: Profile level trust (O’Donovan & Smyth, 2005)

The profile level trust, $\mathrm{Trust}_{P}$, for a producer p is defined as:

$$ \mathrm{Trust}_{P}(p) = \frac{|\mathrm{CorrectSet}(p)|}{|\mathrm{RecSet}(p)|} $$

Formula 5.8: Item level trust (O’Donovan & Smyth, 2005)

The item level trust, $\mathrm{Trust}_{I}$, for a producer p and an item i is defined as:

$$ \mathrm{Trust}_{I}(p, i) = \frac{|\{ (c_k, i_k) \in \mathrm{CorrectSet}(p) : i_k = i \}|}{|\{ (c_k, i_k) \in \mathrm{RecSet}(p) : i_k = i \}|} $$
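The Python sketch below illustrates Formulas 5.5 to 5.8 by counting “correct” contributions over a small, hypothetical recommendation history. The data, the value of epsilon, and the function names are all illustrative assumptions; they are not taken from O’Donovan and Smyth (2005).

```python
# A minimal sketch of the profile and item level trust measures (Formulas 5.5-5.8).

def is_correct(predicted, actual, epsilon=0.5):
    """Formula 5.5: a contribution is correct if it falls within epsilon of the actual rating."""
    return abs(predicted - actual) < epsilon

def profile_level_trust(history, epsilon=0.5):
    """Formula 5.7: fraction of a producer's contributions that were correct."""
    correct = [h for h in history if is_correct(h["predicted"], h["actual"], epsilon)]
    return len(correct) / len(history)

def item_level_trust(history, item, epsilon=0.5):
    """Formula 5.8: the same fraction, restricted to contributions for one item."""
    for_item = [h for h in history if h["item"] == item]
    correct = [h for h in for_item if is_correct(h["predicted"], h["actual"], epsilon)]
    return len(correct) / len(for_item)

if __name__ == "__main__":
    # Hypothetical contribution history for a single producer.
    history = [
        {"item": "Sun City",       "predicted": 3.0, "actual": 3.0},
        {"item": "Sun City",       "predicted": 4.5, "actual": 3.0},
        {"item": "Gold Reef City", "predicted": 4.0, "actual": 4.2},
    ]
    print(round(profile_level_trust(history), 2))        # 0.67
    print(item_level_trust(history, "Sun City"))         # 0.5
```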


c) Trust-based filtering: Formula

The trust-based filtering algorithm is formally defined as per Formula 5.9 below.

Due to the fact that trust-based filtering makes use of the Pearson correlation coefficient, the

similarity measure is formally defined below in Formula 5.10.

Formula 5.9: Trust-based filtering (O’Donovan & Smyth, 2005; Victor, 2010)

The recommended rating for a target item i and a source user a is defined as:

$$ r_{a,i} = \bar{r}_a + \frac{\sum_{u \in R_T^{+}} w_{a,u}\,(r_{u,i} - \bar{r}_u)}{\sum_{u \in R_T^{+}} w_{a,u}} $$

Where

$\bar{r}_a$ represents the mean of all items other than i rated by source user a.

$R_T^{+}$ represents the intersection of the set of users who have a positive Pearson correlation coefficient with a (indicated by $R^{+}$) and the set of users who have a profile or item level trust value above a predefined threshold $\alpha$ (indicated by $R_T$).

$w_{a,u}$ represents the Pearson correlation coefficient between users a and u.

$r_{u,i}$ represents the rating attributed to item i by user u.

$r_{u,i} - \bar{r}_u$ represents the difference between the rating attributed to item i by user u and the mean of all ratings attributed by user u.

Formula 5.10: Pearson correlation coefficient (O’Donovan & Smyth, 2005; Victor, 2010)

For a source user a and a target user b, the Pearson correlation coefficient between them is defined as:

$$ w_{a,b} = \frac{\sum_{j=1}^{n} (r_{a,j} - \bar{r}_a)(r_{b,j} - \bar{r}_b)}{\sqrt{\sum_{j=1}^{n} (r_{a,j} - \bar{r}_a)^2}\;\sqrt{\sum_{j=1}^{n} (r_{b,j} - \bar{r}_b)^2}} $$

Where n represents the total number of items j rated in common by a and b.

d) Example application

Now that both the profile and item level trust measures as well as the trust-based filtering algorithm have been defined, these can be applied within the scenario outlined in Section 5.3. Therefore, Adam requires a recommendation as to whether he should visit Sun City or not by having his online tourist guide application make use of trust-based filtering.


The first task of the algorithm is to determine which users to consider for the trust-based

recommendation. As defined in the previous section, there are three requirements:

Firstly, the users must have rated the item previously.

Secondly, the users must have a positive Pearson correlation coefficient.

Thirdly, the users must have a profile or item level trust rating that exceeds a

predefined threshold.

Each of these three requirements is considered in this example application.

The first requirement is straightforward from Figure 5.1. From this trust network, it can be

identified that the only users to have rated Sun City are Darren and Ivan. Therefore, these are

the only two users considered.

The second requirement is for Darren and Ivan to have a positive Pearson correlation

coefficient. However, upon analysing the rating history of Darren, it is noted that he has no

common rating items between himself and Adam. Therefore, he is excluded from being

considered. This leaves Ivan as the only possible user to be considered. As a result, the

Pearson correlation coefficient calculation between Adam and Ivan is presented in Calculation

5.7 below. For the purposes of the calculation, GRC refers to “Gold Reef City”, MC refers to

“Montecasino”, and “VM” refers to the “Voortrekker Monument”.

Calculation 5.7: Pearson correlation coefficient implementation

$$ w_{A,I} = \frac{\sum_{j=1}^{n} (r_{A,j} - \bar{r}_A)(r_{I,j} - \bar{r}_I)}{\sqrt{\sum_{j=1}^{n} (r_{A,j} - \bar{r}_A)^2}\;\sqrt{\sum_{j=1}^{n} (r_{I,j} - \bar{r}_I)^2}} $$

With the common items GRC, MC, and VM, and with $\bar{r}_A = 4$ and $\bar{r}_I = 3.6$:

$$ w_{A,I} = \frac{(4-4)(4-3.6) + (4-4)(3-3.6) + (3-4)(3-3.6)}{\sqrt{(4-4)^2 + (4-4)^2 + (3-4)^2}\;\sqrt{(4-3.6)^2 + (3-3.6)^2 + (3-3.6)^2}} = \frac{0.6}{1 \times \sqrt{0.88}} = \frac{0.6}{0.94} \approx 0.64 $$

Because the calculated Pearson correlation coefficient of 0.64 is well above 0, it can be concluded that Adam and Ivan share quite a strong similarity.


The third and last consideration is whether Ivan is considered to be a trustworthy user by

making use of the profile and item level trust measures. For the purposes of this application,

the item level trust measure is considered as it is a more granular algorithm and therefore, a

more personalised one. Therefore, assume that Ivan has produced fifty recommendations for

Sun City. Out of these fifty times, Ivan has been within a ten percent error margin thirty five

times. Consequently, by using item level trust, the trustworthiness of Ivan’s recommendations for Sun City is calculated as per Calculation 5.8 below.

Calculation 5.8: Item level trust implementation

$$ \mathrm{Trust}_{I}(I, S) = \frac{|\{ (c_k, S) \in \mathrm{CorrectSet}(I) \}|}{|\{ (c_k, S) \in \mathrm{RecSet}(I) \}|} = \frac{35}{50} = 0.7 $$

The calculated trust value of 0.7 reveals that Ivan is quite a trustworthy user when it comes to his recommendations for Sun City. Therefore, Ivan has met all three requirements in terms of user consideration. Now that this has been determined, the trust-based filtering algorithm can be used to generate a recommendation for Adam. The means by which this is achieved is represented in Calculation 5.9 below.

Calculation 5.9: Trust-based filtering implementation

$$ r_{A,S} = \bar{r}_A + \frac{\sum_{u \in R_T^{+}} w_{A,u}\,(r_{u,S} - \bar{r}_u)}{\sum_{u \in R_T^{+}} w_{A,u}} = \bar{r}_A + \frac{w_{A,I}\,(r_{I,S} - \bar{r}_I)}{w_{A,I}} = 4 + \frac{0.64\,(3 - 3.6)}{0.64} = 4 - 0.6 = 3.4 $$

Therefore, the final rating, as determined by trust-based filtering for Adam, is 3.4 stars out of 5.
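The sketch below reproduces Calculations 5.7 and 5.9 in Python using the ratings from Figure 5.1: it computes the Pearson correlation between Adam and Ivan and the resulting trust-filtered prediction for Sun City. The function names are illustrative, and each user’s mean is taken over all of their ratings, as in the worked example.

```python
# Pearson correlation (Formula 5.10) and trust-based filtering prediction (Formula 5.9)
# for the Adam/Ivan example above.
from math import sqrt

ADAM = {"Apartheid Museum": 5, "Gold Reef City": 4, "Montecasino": 4, "Voortrekker Monument": 3}
IVAN = {"Orlando Towers": 5, "Gold Reef City": 4, "Montecasino": 3, "Sun City": 3, "Voortrekker Monument": 3}

def mean(ratings):
    return sum(ratings.values()) / len(ratings)

def pearson(a, b):
    """Pearson correlation over the items rated in common by both users."""
    common = set(a) & set(b)
    ma, mb = mean(a), mean(b)
    numerator = sum((a[j] - ma) * (b[j] - mb) for j in common)
    denominator = sqrt(sum((a[j] - ma) ** 2 for j in common)) * sqrt(sum((b[j] - mb) ** 2 for j in common))
    return numerator / denominator

def trust_filtered_prediction(source, raters, item):
    """Formula 5.9: mean-centred prediction over raters that already passed the
    trust and positive-correlation filters; `raters` is a list of (ratings, weight) pairs.
    Note: the source user has not rated `item`, so their full mean is used here."""
    numerator = sum(w * (r[item] - mean(r)) for r, w in raters)
    denominator = sum(w for _, w in raters)
    return mean(source) + numerator / denominator

if __name__ == "__main__":
    w = pearson(ADAM, IVAN)
    print(round(w, 2))                                                      # 0.64
    print(round(trust_filtered_prediction(ADAM, [(IVAN, w)], "Sun City"), 1))  # 3.4
```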

e) Evaluation

The summary and contribution of the trust-based filtering algorithm, with item and profile level

trust, is summarised in Table 5.3 below.



Algorithm: Trust-based filtering with profile and item level trust

Contributions: Ability to calculate trust without explicit trust statements being issued (O’Donovan & Smyth, 2005). A global and local method for determining trust: the global method makes use of profile level trust and the local method makes use of item level trust (O’Donovan & Smyth, 2005). Provides a different approach to trust whereby users are filtered out to ensure that only trusted and reliable ones are selected (O’Donovan & Smyth, 2005).

Accuracy: Accuracy is determined by evaluating the past rating behaviour of a particular user and how often they have contributed a recommendation within a given error margin. The greater the number of correct ratings produced, the more accurate a rating is (O’Donovan & Smyth, 2005).

Coverage: Coverage was not specifically calculated in O’Donovan and Smyth’s (2005) evaluation of their own algorithm. However, it is noted in other papers (O’Doherty, 2012; Victor, 2010) that because the algorithm only considers those users who are trusted and have a positive Pearson correlation coefficient, many users are excluded. This means the possibility of having a recommendation generated for a user is lower (O’Doherty, 2012; Victor, 2010).

Cold start: Because of the above point on coverage, users would have to have rated at least two items in order to attain a positive Pearson correlation coefficient (Victor, 2010). Therefore, there is no specific means to cater for cold start users.

Data sparsity: The issue of data sparsity would not be alleviated, for the same reasons as presented in the previous two points.

Personalisation: The algorithm is somewhat personalised when the item level trust algorithm is used to determine trustworthiness.

Degradation: Not applicable for this algorithm, as trust paths are not used.

Trustworthy users: Only those users with a positive Pearson correlation coefficient as well as an item level trust above a specific threshold are considered by the algorithm (O’Donovan & Smyth, 2005; Victor, 2010).

Controversial items: Controversial items can be calculated when the more personalised, item level trust algorithm is used within the trust-based filtering algorithm.

Table 5.3: Trust-based filtering with profile and item level trust evaluation


5.4.4 Structural trust inference algorithm

a) Background

A trust algorithm developed by O’Doherty (2012), described as the structural trust inference

algorithm, is an algorithm based upon the use of a bipartite graph. The motivation for making

use of this methodology is that in real world applications, there is often a lack of facilities for

users to provide direct trust information between themselves (O’Doherty, 2012). Therefore, by

making use of a bipartite graph, trust can be inferred without the need for direct trust

statements to be made between system users (O’Doherty, 2012).

In order to establish a bipartite graph there has to be a separation of system users from the

recommendation items. Upon completion of this process, there is a set of users and a set of items (O’Doherty, 2012). Thereafter, links are drawn between each user and the items that

they have rated (O’Doherty, 2012). This is used as the basis to determine trust between users

in the system (O’Doherty, 2012).

Another consideration proposed by O’Doherty (2012) in the structural trust inference

algorithm is the notion of popularity within a recommender system. In this research, it was

shown that the greater the number of ratings attributed to a particular item, the less reliable and informative the trust information that can be derived from it (O’Doherty, 2012). The reason for this is that a

user is more likely to have attributed a rating to a popular item because of its popularity. This

makes it more difficult to determine any unique user characteristics or similarity patterns

between two users. However, the opposite was also found to be true. For those items

containing fewer ratings, truer determinants of similarity and trust can be identified

(O’Doherty, 2012). Therefore, popularity is additionally used as a factor when inferring trust

between two users (O’Doherty, 2012).

With regards to the implementation of popularity within the structural trust inference algorithm,

two factors are considered: relative diversity and structural similarity (O’Doherty, 2012).

Relative diversity is defined as those users who can be reached within two hops (O’Doherty,

2012). Two users are within two hops of each other when they have rated at least one item in common. Structural

similarity is implemented by making use of the Jaccard index (O’Doherty, 2012). This is a

distance factor implemented in the trust algorithm (O’Doherty, 2012). Both of these factors are

detailed further in this section.


b) Formula

The formula for structural trust inference is defined below in Formula 5.11.

Formula 5.11: Structural trust inference algorithm (O’Doherty, 2012)

For a source user u and a target user v, trust can be determined between them as follows:

$$ \mathrm{Trust}(u, v) = \alpha + \beta\,J(u, v) + \gamma\,\frac{\sum_{i \in SI} \big(1 - D(i)\big)}{|SI|} $$

Where

The constant $\alpha$ represents the base probability that user u trusts user v.

The constants $\beta$ and $\gamma$ represent tuning factors when determining trust between u and v.

The sum $\alpha + \beta + \gamma = 1$.

$J(u, v)$ represents the Jaccard index.

$D(i)$ represents the popularity measure of a specific item.

$SI$ represents the set of shared items rated by both user u and user v.

From Formula 5.11, it is noted that two factors need to be defined: the Jaccard index and the popularity measure. The Jaccard index is a common index used within statistics to determine the similarity as well as the differences between sets of data (O’Doherty, 2012). The Jaccard index is defined in Formula 5.12 below.

Formula 5.12: Jaccard index (O’Doherty, 2012)

For a source user u and a target user v, the Jaccard index is defined as follows:

$$ J(u, v) = \frac{|N_u \cap N_v|}{|N_u \cup N_v|} $$

Where $N_u$ represents the set of users reachable within two hops from user u and $N_v$ represents the set of users reachable within two hops from user v.

Secondly, the popularity measure is defined (O’Doherty, 2012). This popularity measure takes into consideration the findings of O’Doherty’s (2012) research concerning popularity. Therefore, items which have many ratings are penalised and items which have fewer ratings are rewarded (O’Doherty, 2012). The popularity measure is defined in Formula 5.13 below.

Formula 5.13: Popularity measure (O’Doherty, 2012)

For an item i, the popularity measure is defined as follows:

$$ D(i) = \frac{2}{1 + e^{-\deg(i)\,\sigma + 2\sigma}} - 1 $$

Where deg(i) refers to the indegree of item i, which is the number of inbound ratings from system users, and $\sigma$ represents a constant factor defined between 0 and 1. The final calculated popularity measure D(i) will also be within the bounds of 0 to 1.


c) Example application

With reference to the defined scenario in Section 5.3, the first step which would take place in

the structural trust inference algorithm is to create two separate sets (O’Doherty, 2012). One

set contains the system users, while the other set contains the items, in this case, tourist attractions (O’Doherty, 2012). Thereafter, each user is linked with the relevant items which they have rated (O’Doherty, 2012). Once this step has been completed, the two

sets of data should appear analogous to Figure 5.2 below.

[Figure 5.2 depicts the two sets forming the bipartite graph: the set of users U = {Adam, Ben, Craig, Darren, Ed, Franck, Graham, Harry, Ivan} and the set of items I = {Apartheid Museum, Church Square, Emmarentia Dam, Gold Reef City, Lilliesleaf Farm, Montecasino, Orlando Towers, Sun City, Union Buildings, Voortrekker Monument}, with links drawn between each user and the items that they have rated.]

Figure 5.2: Structural trust inference algorithm - User and item sets
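As a small illustration of this first step, the sketch below represents the bipartite graph as a mapping from users to their rated items and derives two-hop neighbourhoods from it. Only the ratings listed in Figure 5.1 are used, so the resulting neighbourhoods are illustrative and smaller than the full sets used later in Calculation 5.10; the function and variable names are the author’s own.

```python
# Building the user-item side of the bipartite graph and deriving two-hop neighbours.

USER_ITEMS = {
    "Adam":   {"Apartheid Museum", "Gold Reef City", "Montecasino", "Voortrekker Monument"},
    "Ivan":   {"Orlando Towers", "Gold Reef City", "Montecasino", "Sun City", "Voortrekker Monument"},
    "Darren": {"Union Buildings", "Sun City", "Church Square", "Lilliesleaf Farm", "Emmarentia Dam"},
}

def two_hop_neighbours(graph, user):
    """Users reachable within two hops, i.e. those sharing at least one rated item."""
    return {other for other, items in graph.items()
            if other != user and items & graph[user]}

if __name__ == "__main__":
    print(two_hop_neighbours(USER_ITEMS, "Darren"))  # {'Ivan'}  (via Sun City)
    print(two_hop_neighbours(USER_ITEMS, "Adam"))    # {'Ivan'}  (via several shared ratings)
```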



Once this step has been completed, the Jaccard index is calculated. In order to determine the

Jaccard index, two sets of data needs to be created. The one set of data contains all those

users reachable by two hops around a user, u, and the other set of data contains the set of

users around a user, v. For the purposes of this example, the Jaccard index between Darren

and Ivan is considered.

A two-hop neighbourhood is created by analysing those items which connect users.

Therefore, with reference to Figure 5.2, if just Sun City were considered, Darren and Ivan

would form part of the same neighbourhood because they have both rated the tourist

destination. Therefore, the Jaccard index between Darren and Ivan is calculated as per

Calculation 5.10 below.

Calculation 5.10: Jaccard index implementation

$$ J(D, I) = \frac{|N_D \cap N_I|}{|N_D \cup N_I|} $$

Where

$N_D = \{Ben, Franck, Graham, Harry, Ivan\}$

$N_I = \{Adam, Darren, Ed, Graham, Harry\}$

Therefore,

$N_D \cap N_I = \{Graham, Harry\}$

$N_D \cup N_I = \{Adam, Ben, Darren, Ed, Franck, Graham, Harry, Ivan\}$

In conclusion,

$$ J(D, I) = \frac{2}{8} = 0.25 $$

The next step in determining the trust between Darren and Ivan is to calculate the popularity measure. This is based on the set of shared items, SI. Therefore, the shared items set is first formed between Darren and Ivan. This shared set is represented below.

$$ SI = \{Sun\ City\} $$

Therefore, to determine the popularity measure of Sun City, the following expression needs to be evaluated:

$$ D(S) = \frac{2}{1 + e^{-\deg(S)\,\sigma + 2\sigma}} - 1 $$


In O’Doherty (2012), all of the constants are pre-calibrated. As a result, the same calibration is now used for this scenario. Therefore, $\sigma$ is calibrated to be 0.333 recurring. Additionally, as previously mentioned, deg(S) is the indegree of Sun City. This means that it is the number of users who have attributed a rating to the tourist attraction. Therefore:

$$ \deg(S) = 2 $$

Now that these other factors have been determined, the popularity measure for the tourist attraction can be calculated. This is shown in Calculation 5.11 below.

Calculation 5.11: Popularity measure implementation

$$ D(S) = \frac{2}{1 + e^{-\deg(S)\,\sigma + 2\sigma}} - 1 = \frac{2}{1 + e^{-2(0.333) + 2(0.333)}} - 1 = \frac{2}{1 + e^{0}} - 1 = \frac{2}{2} - 1 = 0 $$

The result D(S) = 0 is in correlation with a note by O’Doherty (2012) stating that the minimum value of 0 is always applied when the item concerned has only been rated by the two users for which trust is being calculated.

Lastly, the trust between Darren and Ivan can be calculated with reference to the structural trust inference algorithm:

$$ \mathrm{Trust}(D, I) = \alpha + \beta\,J(D, I) + \gamma\,\frac{\sum_{i \in SI} \big(1 - D(i)\big)}{|SI|} $$

As noted with the previous constant, $\sigma$, the constants $\alpha$, $\beta$, and $\gamma$ have also been pre-calibrated in O’Doherty’s (2012) research. Therefore,

$\alpha = 0.1$

$\beta = 0.4$

$\gamma = 0.5$

The structural trust inference calculation is, consequently, presented in Calculation 5.12 below.

Calculation 5.12: Structural trust inference implementation

$$ \mathrm{Trust}(D, I) = \alpha + \beta\,J(D, I) + \gamma\,\frac{1 - D(S)}{|SI|} = 0.1 + 0.4\,(0.25) + 0.5\,\frac{1 - 0}{1} = 0.1 + 0.1 + 0.5 = 0.7 $$


Therefore, the trust value between users Darren and Ivan, as determined by the structural trust inference algorithm, is calculated to be 0.7.
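The Python sketch below reproduces Calculations 5.10 to 5.12. The two-hop neighbourhood sets, the indegree of Sun City, and the constants are taken as given in the text. Note that the sigmoid form used for the popularity measure follows the reconstruction of Formula 5.13 in this chapter (so that D(i) = 0 when deg(i) = 2); it is an assumption rather than O’Doherty’s (2012) exact published expression, and all function names are illustrative.

```python
# Illustrative reproduction of the structural trust inference worked example.
from math import exp

def jaccard(n_u, n_v):
    """Formula 5.12: ratio of shared to total two-hop neighbours."""
    return len(n_u & n_v) / len(n_u | n_v)

def popularity(indegree, sigma=1/3):
    """Popularity measure D(i); assumed sigmoid form, floored at 0 as noted in the text."""
    return max(0.0, 2 / (1 + exp(-sigma * (indegree - 2))) - 1)

def structural_trust(n_u, n_v, shared_item_degrees, alpha=0.1, beta=0.4, gamma=0.5):
    """Formula 5.11: weighted combination of the Jaccard index and item (un)popularity."""
    diversity = sum(1 - popularity(d) for d in shared_item_degrees) / len(shared_item_degrees)
    return alpha + beta * jaccard(n_u, n_v) + gamma * diversity

if __name__ == "__main__":
    n_darren = {"Ben", "Franck", "Graham", "Harry", "Ivan"}
    n_ivan = {"Adam", "Darren", "Ed", "Graham", "Harry"}
    shared_item_degrees = [2]   # SI = {Sun City}, which has been rated by Darren and Ivan only
    print(jaccard(n_darren, n_ivan))                                   # 0.25
    print(round(structural_trust(n_darren, n_ivan, shared_item_degrees), 2))  # 0.7
```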

d) Evaluation

The contribution and evaluation of the structural trust inference algorithm is summarised in Table 5.4 below.

Algorithm: Structural trust inference

Contribution: Ability to infer trust relationships in systems whereby there are no explicit trust statements given (O’Doherty, 2012). This is because, in many real-world applications, trust is not commonly used (O’Doherty, 2012). The implementation of the popularity measure gives greater weighting to items which have a lesser indegree (O’Doherty, 2012).

Accuracy: Accuracy is determined through the various factors of the algorithm. Firstly, the Jaccard index determines similarity between two users by identifying those users within two hops of the users concerned. This ensures that only those users available within the close neighbourhood of the users for which trust is being determined are considered. Secondly, the popularity measure weights more distinct rating data over more popular rating data. This provides some measure of personalisation. Thirdly, the constants provide the ability to calibrate the algorithm for the relevant dataset (O’Doherty, 2012).

Coverage: Coverage is determined by identifying those users within two hops of the users for which trust is being determined. Such information is readily available and easy to analyse within a bipartite graph (O’Doherty, 2012).

Cold start: There is no specific means to cater for cold start users. In order for trust to be determined for a cold start user, they must have rated at least one item to have the potential to be a part of the bipartite graph (O’Doherty, 2012).

Data sparsity: Because users within a two-hop neighbourhood are readily identifiable, the algorithm can add value in datasets with sparse data.

Personalisation: The algorithm is focused around the two users for which trust is being considered. This is done in two ways: firstly, by identifying those users within two hops; secondly, by making use of the popularity measure (O’Doherty, 2012).

Degradation: Only users within two hops are considered; therefore, degradation is not particularly relevant for this algorithm.

Trustworthy users: Again, the user base is determined by identifying the two-hop neighbourhoods as well as the Jaccard index for similarity between the two users for which trust is being determined.

Controversial items: Controversial items can be recommended on, as the algorithm is personalised.

Table 5.4: Structural trust inference evaluation

5.4.5 EnsembleTrustCF

a) Background

EnsembleTrustCF is an algorithm developed by Patricia Victor as part of her dissertation at Ghent University in Belgium (Victor, 2010). The algorithm combines the classic collaborative filtering algorithm with a trust-based means of collaborative filtering (Victor, 2010).

The purpose of combining these algorithms is to increase the potential of having some rating returned for a particular user (Victor, 2010). Therefore, while trust or propagated trust is preferred as the basis of the relationship between a source user and a target user, the Pearson correlation coefficient can be used as a means of determining similarity between two users if there is no trust data available (Victor, 2010). The result is an approach that attempts to maximise both accuracy, through the use of trust, and coverage, through the use of collaborative filtering when trust is not available (Victor, 2010).

b) Formula

The formula for the EnsembleTrustCF algorithm (Victor, 2010) is defined below in Formula 5.14.

Formula 5.14: EnsembleTrustCF (Victor, 2010)

For an item i and a target user a, the predicted rating is determined as follows:

$p_{a,i} = \bar{r}_a + \frac{\sum_{u \in R_T} t_{a,u}\,(r_{u,i} - \bar{r}_u) + \sum_{u \in R^+\setminus R_T} w_{a,u}\,(r_{u,i} - \bar{r}_u)}{\sum_{u \in R_T} t_{a,u} + \sum_{u \in R^+\setminus R_T} w_{a,u}}$

Where

$\bar{r}_a$ represents the mean of all items other than i rated by target user a, and $\bar{r}_u$ represents the corresponding mean for user u.

$R_T$ represents the subset of users who are trusted by target user a and who have attributed a rating to item i.

$t_{a,u}$ represents the trust rating between users a and u.

$r_{u,i}$ represents the rating attributed to item i by user u.

$w_{a,u}$ represents the Pearson correlation coefficient between users a and u.

$R^+\setminus R_T$ represents the subset of users who have a positive Pearson correlation coefficient with user a and who have attributed a rating to item i. Additionally, this excludes the subset of users trusted by target user a. This is to ensure that the trust measure is given preference over the correlation measure.


As previously mentioned in the background, the purpose of this algorithm is to combine both

the strengths of trust-based and traditional collaborative filtering algorithms (Victor, 2010).

The purpose of doing this is to ensure that there is a greater possibility for obtaining a rating

for a particular user, which is both reliable and accurate (Victor, 2010).

In terms of functionality, the EnsembleTrustCF algorithm determines those trusted users who

have rated a specific recommendation item as well as those users for whom a trust path can

be formed. Once this has been completed, the calculated trust value is used as a weighting to

determine a weighted rating value. Thereafter, all those users with whom the source user has

a positive similarity measure, identified by making use of the Pearson correlation coefficient,

and who have not already been considered in the subset of trusted users are considered in

determining a rating. Each user included as part of this subset has their rating calculations

weighted by the positive Pearson correlation coefficient value. These values are finally

merged to produce a predicted rating value for the source user. In the next section, this

formula is applied with reference to an example application.
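Before the example application, the weighting logic of Formula 5.14 can be illustrated with a minimal Python sketch. The function name, the dictionary-based data structures, and the way trust and correlation values are looked up are illustrative assumptions; this is not Victor’s (2010) implementation, but the calculation follows the formula above.

```python
def ensemble_trust_cf(a, item, ratings, trust, pcc):
    """Predicted rating p_{a,i} in the style of Formula 5.14 (Victor, 2010).

    ratings: dict of user -> {item: rating}
    trust:   dict of (source, target) -> (propagated) trust value on [0, 1]
    pcc:     dict of (source, target) -> Pearson correlation coefficient
    Returns None when neither trusted nor positively correlated raters exist.
    """
    user_mean = lambda u: sum(ratings[u].values()) / len(ratings[u])
    raters = [u for u in ratings if u != a and item in ratings[u]]
    # R_T: users trusted by a (membership approximated here by a positive trust value).
    trusted = [u for u in raters if trust.get((a, u), 0) > 0]
    # R+ \ R_T: positively correlated raters, excluding the trusted subset.
    similar = [u for u in raters if u not in trusted and pcc.get((a, u), 0) > 0]
    numerator = (sum(trust[(a, u)] * (ratings[u][item] - user_mean(u)) for u in trusted)
                 + sum(pcc[(a, u)] * (ratings[u][item] - user_mean(u)) for u in similar))
    denominator = (sum(trust[(a, u)] for u in trusted)
                   + sum(pcc[(a, u)] for u in similar))
    if denominator == 0:
        return None
    return user_mean(a) + numerator / denominator
```

The final check on the denominator mirrors the behaviour implied by the formula: when there are neither trusted nor positively correlated raters for the item, no prediction can be returned.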

c) Example application

In this section, the EnsembleTrustCF algorithm is applied with reference to the scenario as

outlined in Section 5.3. In the scenario, source user Adam desires to obtain a recommended

rating for Sun City. However, Sun City has only been rated by two users, namely Darren and



Ivan, of whom only Darren is linked to a trust path. This section illustrates the benefit of EnsembleTrustCF, as both users on the trust path and outside of the trust path are considered.

As discussed in the previous section, the EnsembleTrustCF algorithm combines two

different algorithms. The first is the collaborative filtering algorithm and the second is the trust-

based algorithm. Therefore, collaborative filtering calculations are discussed first and the

trust-based algorithm calculations thereafter.

When considering the collaborative filtering part of the EnsembleTrustCF algorithm, it is noted that it only includes those users who have a positive Pearson correlation coefficient with the source user and who have rated the relevant item. Additionally, users already included in the trusted subset are excluded from this part of the calculation. In this scenario, there is only one person who potentially meets these criteria, and that is Ivan. Therefore, the Pearson correlation coefficient is calculated in order to determine his similarity with Adam. As per Calculation 5.7, this was calculated to be 0.64. Because this value is positive, Ivan is considered in the collaborative filtering part of the algorithm when determining a predicted rating for Sun City.

Next, the collaborative filtering algorithm is applied to calculate a predicted rating for Adam for

Sun City. This implementation is shown below in Calculation 5.13.

Calculation 5.13: Collaborative filtering implementation

$p_{A,S} = \bar{r}_A + \frac{\sum_{u \in R} w_{A,u}\,(r_{u,S} - \bar{r}_u)}{\sum_{u \in R} w_{A,u}}$

$p_{A,S} = 4 + \frac{w_{A,I}\,(r_{I,S} - \bar{r}_I)}{w_{A,I}}$

$p_{A,S} = 4 + \frac{0.64\,(3 - 3.6)}{0.64}$

$p_{A,S} = 4 + \frac{-0.38}{0.64}$

$p_{A,S} = 4 - 0.59$

$p_{A,S} = 3.41$

Therefore, in accordance with collaborative filtering, the predicted rating for Adam for Sun City is 3.41 stars.

Next, the trust-based part of the algorithm is discussed. As previously mentioned, the trust-based algorithm which has been implemented by Victor (2010) is trust-based collaborative


filtering. Therefore, in line with her implementation, the trust value of 8.6, as determined in Calculation 5.5, is reused. However, for the purposes of the EnsembleTrustCF algorithm, this value is divided by 10. The reason for this is that the trust value and the similarity value need to be expressed on the same scale to ensure consistency as well as correctness in the calculation. Therefore, because the similarity measure is bounded between 0 and 1, the trust value is rescaled to follow the same definition.

The predicted rating for Sun City, as determined by the EnsembleTrustCF algorithm, is

calculated as per Calculation 5.14 below.

Calculation 5.14: EnsembleTrustCF implementation

$p_{A,S} = \bar{r}_A + \frac{\sum_{u \in R_T} t_{A,u}\,(r_{u,S} - \bar{r}_u) + \sum_{u \in R^+\setminus R_T} w_{A,u}\,(r_{u,S} - \bar{r}_u)}{\sum_{u \in R_T} t_{A,u} + \sum_{u \in R^+\setminus R_T} w_{A,u}}$

$p_{A,S} = 4 + \frac{0.86\,(4 - 3.2) + 0.64\,(3 - 3.6)}{0.86 + 0.64}$

$p_{A,S} = 4 + \frac{0.69 - 0.38}{1.5}$

$p_{A,S} = 4 + 0.21$

$p_{A,S} = 4.21$

Therefore, when making use of the EnsembleTrustCF algorithm, the calculated rating for Adam for Sun City is 4.21.
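As a check, the same prediction can be reproduced directly from the values used above. The variable names are illustrative, and the values are those taken from Calculations 5.5 (rescaled), 5.7, 5.13 and 5.14.

```python
r_a_mean = 4.0                       # Adam's mean rating
t_ad, r_ds, r_d_mean = 0.86, 4, 3.2  # trust in Darren, Darren's rating for Sun City, Darren's mean
w_ai, r_is, r_i_mean = 0.64, 3, 3.6  # correlation with Ivan, Ivan's rating for Sun City, Ivan's mean

p = r_a_mean + (t_ad * (r_ds - r_d_mean) + w_ai * (r_is - r_i_mean)) / (t_ad + w_ai)
print(round(p, 2))                   # 4.2 (4.21 in Calculation 5.14, which rounds the intermediate terms)
```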

d) Evaluation

The contribution of the EnsembleTrustCF algorithm is summarised in Table 5.5 below.

Algorithm: EnsembleTrustCF

Contribution: Gives the user the best possible chance of having an accurate and reliable recommendation by combining trust-based and collaborative filtering algorithms (Victor, 2010). The trust-based algorithm is given greater priority over collaborative filtering (Victor, 2010). There is a limited trade-off between accuracy and coverage (Victor, 2010).

Accuracy: The algorithm caters for accuracy by prioritising trust over the Pearson correlation coefficient when determining similarity (Victor, 2010).

Coverage: Coverage is increased as the algorithm considers not only trust relationships but also those users for whom a Pearson correlation coefficient can be determined (Victor, 2010).

Cold start: The only measure used to alleviate the cold start problem is the use of trust. As noted by Massa and Avesani (2007a), new users are more likely to have ratings generated for them by declaring just one trust relationship than by rating a single item. However, this does mean that the user has to identify at least one trust relationship.

Data sparsity: Again, by making use of trust, the issue of data sparsity is alleviated. This is because trust relationships link one to many potential users, increasing the likelihood of obtaining a recommendation (Massa & Avesani, 2007a).

Personalisation: The algorithm is personalised as it is based around the source user (Victor, 2010).

Degradation: Degradation is limited by ensuring that only the shortest trust paths are considered (Victor, 2010).

Trustworthy users: The user base contains those users trusted by the source user who have additionally attributed a rating to the item concerned, as well as those users who have a positive Pearson correlation coefficient with the source user and who have rated the item concerned. The latter set, however, excludes the former set of trusted users, as trust is prioritised when calculating a rating (Victor, 2010).

Controversial items: Controversial items can be evaluated as the algorithm is personalised.

Table 5.5: EnsembleTrustCF evaluation

5.5 Analysis of results

A summary of the results for each of the trust-based algorithms covered in this chapter is presented in relation to the identified set of requirements in Table 5.6 below. Performance against each requirement is rated on a scale using the following indicators: excellent, good, complies, and underperforms. In those cases where a requirement is not relevant to a particular algorithm, N/A is used.


Trust-based weighted mean

Trust-based collaborative filtering

Trust-based filtering

Structural trust inference

EnsembleTrustCF

Accuracy * * * * * * * * * * * * * * * * *

Coverage * * * * * * * * * * * * * * * *

Cold start * * * * * * * * * * * *

Data sparsity * * * * * * * * * * * * *

Personalisation * * * * * * * * * * * * * * * * *

Degradation * * * * * * * * N/A N/A * * * *

Trustworthy users

* * * * * * * * * * * * * * * * * *

Controversial items

* * * * * * * * * * * * * * * * * *

* Underperforms

* * Complies

* * * Good

* * * * Excellent

Table 5.6: Summary of trust-based algorithm evaluation results

With reference to Table 5.6, a number of observations can be made.

The most suitable trust-based algorithm for the trust implementation framework proposed for

the group recommender model is the EnsembleTrustCF algorithm. There are a number of

reasons for this.

o The combination of the strengths of both trust and similarity results in a trust-based algorithm that meets each of the requirements best in comparison to the other trust-based algorithms.

o The implementation of the MoleTrust algorithm results in recommendations which are

more accurate. This is based on the strengths of the MoleTrust algorithm which only

propagates trust up to a predefined horizon and only along paths where the trust ratings

are above a predefined trust rating threshold. These two definitions ensure the quality of

a calculated trust value.

o The algorithm has excellent coverage because it considers the recommendations of both

trusted and similar users. As a result, more system users are reachable in order to

determine a trust-based predicted rating value.

The most competitive algorithms are the trust-based ones which perform quite well in most of

the categories due to their trust-based implementations.

The structural trust inference algorithm does not perform as well in all categories because trust is inferred without the use of explicit trust statements.


The poorest evaluated trust-based algorithm is trust-based filtering. The main reason for this is that the requirements for obtaining a recommendation are quite stringent. As a result, it often occurs that a recommendation cannot be retrieved.

5.6 Conclusion

In this chapter, a number of state-of-the-art algorithms were detailed and defined from both a practical

and theoretical perspective. The purpose of this was to provide a candidate trust-based algorithm for

use in the defined trust implementation framework. Therefore, five state-of-the-art algorithms were

presented and defined.

Trust-based weighted mean (Golbeck, 2005).

Trust-based collaborative filtering (Massa & Avesani, 2007a).

Trust-based filtering (O’Donovan & Smyth, 2005).

Structural trust inference (O’Doherty, 2012)

EnsembleTrustCF (Victor, 2010)

Once all of these trust-based algorithms were presented and defined, the results of each were

summarised and evaluated. The conclusion was that, up to this point, the EnsembleTrustCF algorithm

seems best suited for this research. However, an empirical evaluation of these algorithms still has to

be done with application to a real-life dataset to ascertain this. Therefore, this empirical evaluation is

reported on in the next chapter. Thereafter, a final trust-based recommendation algorithm is selected

for implementation in the trust framework for the proposed group recommender model.

Chapter 6 Empirical evaluation of trust-based algorithms

6.1 Introduction

In the previous chapter, a number of state-of-the-art trust calculation and trust-based recommendation

algorithms were reviewed against a set of defined requirements. The purpose of this review was to

determine a suitable trust-based algorithm for a trust implementation framework. The review

concluded that the EnsembleTrustCF algorithm is best suited for the trust implementation framework

due to the fact that it performed best against each of its counterparts.

In this chapter, the formation of a trust implementation framework is concluded. Therefore, the

purpose of the chapter is to nominate a candidate trust calculation and trust-based recommendation

algorithm for this research. Whereas the previous chapter nominated a trust-based algorithm from a

comparative perspective, this chapter nominates a trust-based algorithm based on an empirical

evaluation of each trust-based algorithm. Once the results of this evaluation have been analysed, the results of both the previous chapter and this chapter are considered in order to motivate the selection of a final trust calculation and trust-based recommendation algorithm.

The empirical evaluation discussed within this chapter is based on the study of the empirical

evaluation and results performed by two authors, namely Victor (2010) and O’Doherty (2012). The

reason for using, representing, and studying their results is twofold.

Their evaluations were conducted on the real-life Epinions.com (Epinions.com, n.d.) dataset.

This dataset is taken from a commercial, real-world application and therefore, real user

behaviour and real algorithm performance can be studied.

Both these authors conduct a study of similar trust-based algorithms using the same dataset.

Therefore, the performance of each trust-based algorithm can be evaluated meaningfully.

This chapter is structured as follows. In section 6.2, the datasets upon which the empirical evaluation was performed are introduced and discussed. Thereafter, section 6.3 introduces the measurements used to

determine the performance of each trust-based algorithm. Section 6.4 presents the baseline

algorithms to be used in each author’s empirical evaluation. Section 6.5 presents the results of each

author’s empirical evaluation. Section 6.6 then follows with a detailed discussion and analysis of these

results, concluding with a motivation for the selection of a trust-based algorithm to be implemented in

the defined trust implementation framework for this research. Lastly, section 6.7 brings the chapter to

a conclusion.

6.2 Datasets

The purpose of this section is to identify the characteristics of the datasets that are considered when

evaluating each of the trust-based algorithms reviewed in the previous chapter. The reason for this

evaluation is that the characteristics of the datasets used influence the final evaluation results.


As a result, the approach taken in this section is to firstly give a background to the datasets used by

both Victor (2010) and O’Doherty (2012) in their empirical evaluations. Thereafter, the specifics of

each dataset are evaluated and presented.

6.2.1 Background

From the perspective of a recommender system, a dataset is a set of data containing rating and user

information. Therefore, a dataset can be seen as a rating matrix consisting of users and their

attributed ratings for items. The importance of evaluating a dataset is to understand the inherent

strengths and weaknesses of the dataset itself, which will affect the results of a particular algorithm

(Victor, 2010).

The datasets used for evaluation within this chapter are mined from the Epinions.com website

(O’Doherty, 2012; Victor, 2010). Epinions.com is an online review system where system users review

products so that other system users can obtain an objective perspective on the quality of a product

and decide whether to purchase it or not (Epinions.com, n.d.). There are two reasons why this

dataset is commonly used as a means to evaluate trust-based algorithms.

The dataset contains both trust and distrust information (Massa & Avesani, 2007a;

Victor, 2010). Within the Epinions.com system, a user can express trust in another user who

consistently produces good and reliable reviews by adding them to their own trusted list

(Massa & Avesani, 2007a; Victor, 2010). Conversely, if a user is deemed unreliable and

inconsiderate, then they can be added to a block list, which is seen as an expression of

distrust (Massa & Avesani, 2007a; Victor, 2010). Datasets which contain both trust and

distrust information and which also can be analysed are uncommon (Victor, 2010). This is

what, therefore, makes it a suitable dataset for evaluation.

Epinions.com is a real-world application meaning that real-world user behaviour can

be analysed and tested against when running trust-based algorithms against it (Victor,

2010).

The particular mined Epinions datasets which are analysed are taken from Guha et al. (2004) and Massa and Bhattacharjee (2004). In Victor’s (2010) comprehensive evaluation of these datasets, they are defined as the Epinions reviews dataset and the Epinions products dataset respectively. The Epinions reviews dataset, created by Guha et al. (2004), contains all the product reviews as rated by users on a scale from one, indicating “not helpful”, to five, indicating “most helpful” (Victor, 2010). The Epinions products dataset, created by Massa and Bhattacharjee (2004), contains the ratings of the products themselves, also rated on a scale from one to five. In the following section, the structure and makeup of both these datasets is evaluated.


6.2.2 Epinions dataset evaluation

The structure and constituents of the two Epinions datasets, the Epinions reviews dataset and the Epinions products dataset, as reviewed by Victor (2010), are presented in Table 6.1 below.

                                    Epinions reviews dataset    Epinions products dataset
Total users                         163 634                     49 290
Total reviews/products              1 560 144                   139 738
Total ratings                       25 170 636                  N/A
Total users in trust network        114 222                     49 288
Total trust/distrust statements     717 129                     487 003
Number of controversial items       1 416                       266

Table 6.1: Epinions products and reviews dataset (Victor, 2010)

A number of observations can be made about the dataset statistics presented in Table 6.1:

Both the Epinions reviews and products datasets are well populated rather than sparse datasets. From a user perspective, there are roughly 164,000 users within the

Epinions reviews dataset and about 49,000 users within the Epinions products dataset.

Additionally, there are about 1,500,000 reviews in the Epinions reviews dataset and about

140,000 reviews in the Epinions products dataset. From a trust and distrust perspective, there

are many explicitly defined trust links between users in both datasets (Victor, 2010).

There are a number of controversial items within each dataset. As noted in Chapter 5,

this is a key requirement and the performance of trust-based algorithms on these

controversial items is especially relevant (Victor, 2010).

Outside of these observations, there are more considerations to take note of with regards to the

content of both these datasets.

Although both datasets contain trust and distrust information, both trust and distrust

are indicated by a binary value (Victor, 2010). Therefore, 1 represents full trust and 0

represents full distrust (Victor, 2010). This is not ideal as gradual representations of trust are

preferred. However, as Victor (2010) notes, there are a very limited number of datasets which

one can study and which contain such trust information. As a result, Victor (2010) chose to

make do with this inherent limitation.

Because both trust and distrust are represented by a binary value, it was decided that

for each trust-based algorithm, the threshold for trust would be removed (Victor, 2010).

This is because it becomes unnecessary with a binary representation (Victor, 2010).

Within the Epinions reviews dataset, over 75% of all the reviews

were attributed with a rating of five (Victor, 2010). Within the Epinions products dataset, a


similar trend is seen, though not as pronounced (Victor, 2010). In this dataset, a total of 45%

of all reviews obtained a rating of five (Victor, 2010). The distribution of the Epinions products

dataset and the general trend towards higher ratings is shown in Figure 6.1 below (Massa &

Avesani, 2009; O’Doherty, 2012).

Figure 6.1: Epinions products dataset rating distribution (O’Doherty, 2012). The number of ratings per rating value is: 1 star: 43 228; 2 stars: 50 678; 3 stars: 75 525; 4 stars: 194 339; 5 stars: 301 053.

Before proceeding to the evaluation of the performance of the trust-based algorithms against both of these datasets, the measures used to determine performance need to be defined. Therefore, the next section presents the various measures used for the purposes of evaluation.

6.3 Measurements

In this section, the various measurements used to evaluate the effectiveness of a trust-based

algorithm are discussed. In terms of the empirical evaluation conducted by O’Doherty (2012) and

Victor (2010), the two main measurements evaluated were accuracy and coverage. Both of these

measurements are, therefore, defined in this section.

It is noted that these measurements only represent two of the requirements defined in the previous

chapter. However, these are two of the most important and common criteria of a recommender

system evaluation. Therefore, the empirical performance of a trust-based algorithm in these two

requirements will indicate a suitable trust-based algorithm for a trust implementation framework.

6.3.1 Accuracy

The accuracy of a trust-based algorithm is commonly determined by making use of the leave-one-out

method (Victor, 2010). In this method, the actual rating attributed to an item is compared with a trust-


based predicted rating for that same item (Victor, 2010). Accuracy is based upon how closely

matched the predicted trust-based rating is to the actual rating (Victor, 2010). The smaller

the deviation between these two ratings, the more accurate the trust-based recommendation

algorithm is deemed to be (Victor, 2010). Conversely, the greater the deviation, the less accurate the

recommendation algorithm is determined to be (Victor, 2010).

In terms of measuring the accuracy of the predicted rating versus the original rating from an

algorithmic perspective, two specific algorithms are commonly applied (Victor, 2010):

The mean absolute error (MAE) algorithm. This algorithm considers both small and large deviations equally.

The root mean squared error (RMSE) algorithm (Victor, 2010). This algorithm highlights those times when there is a large deviation between the predicted and actual rating (Victor, 2010).

Both the MAE and RMSE algorithms are defined in Formula 6.1 below.

Formula 6.1: Mean absolute error (MAE) and root mean squared error (RMSE) (Victor, 2010)

For an item i and n leave-one-out experiments, MAE and RMSE are defined as:

$MAE = \frac{\sum_{i=1}^{n} |r_i - p_i|}{n}$

$RMSE = \sqrt{\frac{\sum_{i=1}^{n} (r_i - p_i)^2}{n}}$

Where $r_i$ represents the actual item rating and $p_i$ represents the predicted rating.

6.3.2 Coverage

Coverage, within a recommender system, is determined by evaluating the possible number of predicted ratings versus the total number of ratings within the dataset (Shani & Gunawardana, 2011; Victor, 2010). This is formally defined in Formula 6.2 below.

Formula 6.2: Coverage (Victor, 2010)

For an item i, a calculated item prediction $p_i$, and n ratings within the dataset, coverage is defined as:

$Coverage = \frac{\sum_{i=1}^{n} comp(p_i)}{n}$

Where $comp(p_i) = 1$ if it is possible for $p_i$ to be calculated, and $0$ if it cannot be calculated.
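These measurements can be expressed as a short Python sketch of a leave-one-out run. The function name, the `predict` callable, and the triple-based representation of the rating data are illustrative assumptions; MAE and RMSE are computed over those experiments for which a prediction could be made, while coverage captures the proportion of experiments for which a prediction was possible.

```python
import math

def leave_one_out_evaluation(rating_triples, predict):
    """MAE, RMSE and coverage over leave-one-out experiments.

    rating_triples: iterable of (user, item, actual_rating)
    predict(user, item): returns a predicted rating, or None when no
    prediction can be made for that user-item pair.
    """
    errors, attempts = [], 0
    for user, item, actual in rating_triples:
        attempts += 1
        predicted = predict(user, item)   # the withheld rating is hidden from the predictor
        if predicted is not None:
            errors.append(actual - predicted)
    mae = sum(abs(e) for e in errors) / len(errors) if errors else None
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors)) if errors else None
    coverage = len(errors) / attempts if attempts else 0.0
    return mae, rmse, coverage
```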


In this section, the measurements used to evaluate the performance of trust-based algorithms were presented and defined. These measurements were accuracy and coverage. In the next section, the baseline algorithms used by both Victor (2010) and O’Doherty (2012) are presented. The relevance of this section is that these baseline algorithms serve as a base against which the performance of each trust-based algorithm is determined.

6.4 Baseline algorithms

In this section, the baseline algorithms used in the empirical evaluation of trust-based algorithms

conducted by both Victor (2010) and O’Doherty (2012) are presented and defined. The purpose of

presenting these baseline algorithms is to verify whether there are any benefits in making use of trust-

based algorithms over more trivial algorithms in a real-world application.

The baseline algorithms used in the evaluations conducted by both Victor (2010) and O’Doherty

(2012) are presented in Table 6.2 below.

B1 – Base: Score 5 (Victor, 2010)
Description: Returns a rating score of 5 for every item (Victor, 2010).
Rationale: The majority of items within both the Epinions reviews and products datasets contain the maximum rating of 5 (Victor, 2010).

B2 – Base: Average Score for Item (Victor, 2010)
Description: Returns the average of all ratings attributed to that specific item (Victor, 2010).
Rationale: Determines whether the trust-based algorithms better the average (Victor, 2010).

B3 – Base: Average Score for the Source User (Victor, 2010)
Description: Returns the average of all ratings attributed by the source user (Victor, 2010).
Rationale: Determines whether the trust-based algorithms better the average of a user’s rating scores (Victor, 2010).

B4 – Base: Random Score (Victor, 2010)
Description: Returns a random rating score between 1 and 5 (Victor, 2010).
Rationale: Determines whether the trust-based algorithms can outperform a random score (Victor, 2010).

B5 – Base: Collaborative Filtering (Victor, 2010)
Description: Returns the item rating as determined by using the standard collaborative filtering algorithm (Victor, 2010).
Rationale: Determines whether the trust-based algorithms can outperform collaborative filtering and overcome its inherent disadvantages (Victor, 2010).

Table 6.2: Baseline algorithms used for evaluation on Epinions dataset
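The first four baselines can be illustrated as simple predictor functions. The sketch below is a minimal illustration only: the function names, the data representation, and the random seed are assumptions, and baseline B5 would reuse a standard collaborative filtering implementation such as the one discussed in Chapter 5.

```python
import random
from statistics import mean

def make_baseline_predictors(ratings, seed=0):
    """Baselines B1 to B4 as predict(user, item) functions.

    ratings: dict of user -> {item: rating}. In a leave-one-out experiment the
    withheld rating would be excluded from these dictionaries beforehand.
    """
    rng = random.Random(seed)

    def item_average(item):
        scores = [rated[item] for rated in ratings.values() if item in rated]
        return mean(scores) if scores else None

    def user_average(user):
        scores = list(ratings.get(user, {}).values())
        return mean(scores) if scores else None

    return {
        "B1_score_5":      lambda user, item: 5,
        "B2_item_average": lambda user, item: item_average(item),
        "B3_user_average": lambda user, item: user_average(user),
        "B4_random_score": lambda user, item: rng.randint(1, 5),
    }
```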

The definition of the baseline algorithms in this section concludes the background discussion on the

evaluation of the trust-based algorithms conducted by Victor (2010) and O’Doherty (2012).

Consequently, the next section details the evaluation results and how each trust-based algorithm

performed on each dataset with regards to both accuracy and coverage.

6.5 Evaluation

In this section, the evaluation results recorded by both Victor (2010) and O’Doherty (2012) are

presented. Before formally presenting these results, however, there is a note to make with regards to

the evaluation conducted by O’Doherty (2012). Whereas Victor’s (2010) evaluation is a direct

application of the accuracy and coverage algorithms against the Epinions reviews dataset, the

evaluation conducted by O’Doherty (2012) against the Epinions products dataset was conducted

differently.

In O’Doherty’s (2012) approach, the aim of their evaluation was twofold. The first aim was to evaluate

the performance of trust-based algorithms when trust was inferred using their structural trust inference

algorithm (O’Doherty, 2012). Consequently, the trust and distrust statements already in the Epinions

products dataset were not considered. The second aim was to determine the performance of each

trust-based algorithm when there were different percentages of ratings missing from the dataset

(O’Doherty, 2012). Therefore, in order to achieve both these aims, a number of steps were followed in

their evaluation:

Data samples of 1 000 users each were selected from the Epinions products dataset (O’Doherty, 2012).

For each subset of 1 000 users, a certain percentage of ratings was removed from the dataset. The first subset had 10% of its ratings removed, the second had 20% removed, the third 30%, and the fourth 50%. These subsets were then used as the basis for determining a rating with each recommendation algorithm (O’Doherty, 2012).

The structural trust inference formula developed by O’Doherty (2012) was run on the remaining subsets of rating data. This formed a trust graph linking the users within each subset (O’Doherty, 2012).

Each trust-based algorithm was then applied to the resulting subset to predict the ratings which had been removed (O’Doherty, 2012).
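The sampling and hiding procedure can be sketched as follows. The function name, the uniform random sampling, and the seed are illustrative assumptions rather than O’Doherty’s (2012) exact procedure.

```python
import random

def hide_ratings(ratings, fraction, seed=0):
    """Split user -> {item: rating} data into an observed part and a hidden test set,
    hiding `fraction` of all ratings (e.g. 0.1, 0.2, 0.3 or 0.5)."""
    rng = random.Random(seed)
    triples = [(u, i, r) for u, rated in ratings.items() for i, r in rated.items()]
    hidden_indices = set(rng.sample(range(len(triples)), int(fraction * len(triples))))
    observed, hidden = {}, []
    for index, (u, i, r) in enumerate(triples):
        if index in hidden_indices:
            hidden.append((u, i, r))
        else:
            observed.setdefault(u, {})[i] = r
    return observed, hidden
```

The observed portion would then be used to infer a trust graph with the structural trust inference formula and to generate predictions, which are scored against the hidden ratings using the accuracy and coverage measurements of Section 6.3.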

Now that the manner of evaluation by O’Doherty (2012) has been explained, the evaluation results

can be formally presented. Table 6.3 and Table 6.4 present the outcomes of the evaluation performed

by Victor (2010) on the Epinions reviews and the Epinions products dataset respectively, while Table

6.5 presents the results of the evaluation conducted by O’Doherty (2012).

Algorithm                                   Controversial items          Randomly selected items
                                            Coverage  MAE   RMSE         Coverage  MAE   RMSE
(B1) Base: Score 5                          100%      1.45  1.96         100%      0.16  0.51
(B2) Base: Average score for item           100%      1.25  1.34         100%      0.18  0.40
(B3) Base: Average score for user           99%       1.23  1.58         99%       0.36  0.50
(B4) Base: Random score                     100%      1.61  2.02         100%      1.92  2.37
(B5) Collaborative filtering                94%       0.96  1.13         98%       0.19  0.38
(P1) Propagated trust-based weighted mean   88%       0.91  1.22         97%       0.15  0.38
(P2) Propagated trust-based CF              88%       0.99  1.16         97%       0.19  0.37
(P3) Propagated trust-based filtering       84%       0.94  1.13         96%       0.18  0.36
(P4) Propagated EnsembleTrustCF             96%       1.00  1.16         99%       0.20  0.38

Table 6.3: Victor’s (2010) results on the Epinions reviews dataset


Algorithm                                   Controversial items          Randomly selected items
                                            Coverage  MAE   RMSE         Coverage  MAE   RMSE
(B1) Base: Score 5                          100%      1.94  2.46         100%      1.05  1.62
(B2) Base: Average score for item           100%      1.35  1.51         100%      0.82  1.06
(B3) Base: Average score for user           98%       1.43  1.78         99%       0.95  1.22
(B4) Base: Random score                     100%      1.66  2.08         100%      1.68  2.10
(B5) Collaborative filtering                81%       1.34  1.58         79%       0.84  1.12
(P1) Propagated trust-based weighted mean   76%       1.37  1.69         72%       0.90  1.23
(P2) Propagated trust-based CF              76%       1.32  1.56         72%       0.84  1.12
(P3) Propagated trust-based filtering       57%       1.36  1.64         53%       0.86  1.16
(P4) Propagated EnsembleTrustCF             90%       1.32  1.55         88%       0.82  1.09

Table 6.4: Victor’s (2010) results on the Epinions products dataset


Algorithm                           10% Hidden                 20% Hidden                 30% Hidden                 50% Hidden
                                    Coverage  MAE    RMSE      Coverage  MAE    RMSE      Coverage  MAE    RMSE      Coverage  MAE    RMSE
(B2) Base: Average score for item   96.99%    0.898  1.290     93.88%    0.904  1.303     89.73%    0.923  1.328     79.87%    0.935  1.349
(B5) Collaborative filtering        49.17%    0.966  1.356     41.74%    0.991  1.391     34.69%    1.012  1.413     20.94%    1.048  1.462
(T1) Trust-based weighted mean      83.37%    0.919  1.323     79.28%    0.941  1.345     73.76%    0.952  1.368     59.86%    0.981  1.410
(T2) Trust-based CF                 83.37%    0.927  1.308     79.28%    0.946  1.329     73.76%    0.965  1.356     59.86%    1.004  1.403
(T3) Trust-based filtering          48.47%    1.029  1.360     40.80%    1.050  1.394     32.90%    1.074  1.422     18.80%    1.110  1.470
(T4) EnsembleTrustCF                83.37%    0.927  1.308     79.28%    0.946  1.330     73.76%    0.965  1.356     59.86%    1.004  1.403

Table 6.5: O’Doherty’s (2012) results on the Epinions products dataset


6.6 Evaluation of the results

In this section, the results presented in Section 6.5 are discussed and analysed, with various observations made. This is done by deriving general observations, observations with regards to the evaluation of the baseline strategies, and observations with regards to the trust-based algorithms evaluated. Each set of observations is made with reference to the measurements of coverage and accuracy. The observations derived from this evaluation then form the basis of motivation for the selection of a trust-based algorithm to be used in a trust implementation framework for the proposed group recommender model.

6.6.1 Observations on Victor’s (2010) evaluation of the Epinions reviews dataset

a) Coverage

The observations with regards to the coverage results of Victor’s (2010) evaluation on the

Epinions reviews dataset are listed below. These observations are made with reference to

Table 6.3 above.

The coverage results for the baseline algorithms B1, B2, and B4 are always 100%, as they do not require any trust or similarity data to produce a rating (Victor, 2010). However, for baseline algorithm B3, the coverage is slightly lower due to the fact that there are users within Victor’s (2010) Epinions dataset who have only rated a single item, resulting in their exclusion. These results are relevant for both randomly selected items and controversial items.

The coverage results for the B5 algorithm are also quite high, with a 94% coverage rate for controversial items and a 98% coverage rate for randomly selected items. Victor (2010) attributes this to the fact that it is easier to determine similarity than it is to determine trust in this dataset, as the dataset is dense, with many ratings attributed.

The coverage results for trust-based algorithms P1, P2, P3, and P4 are high for randomly selected items, achieving coverage rates of 97%, 97%, 96%, and 99% respectively. However, for controversial items, the coverage rates of the trust-based algorithms P1, P2, P3, and P4 lower to 88%, 88%, 84%, and 96% respectively. Victor (2010) comments that this lower performance on controversial items results from the fact that there are fewer ratings for controversial items, which makes it harder to determine a trust-based rating.

b) Accuracy

The observations with regards to the accuracy results of Victor’s (2010) evaluation on the

Epinions reviews dataset are listed below. These observations are made with reference to

Table 6.3 above.

The accuracy results for the baseline algorithms B1, B2, and B3 are significantly different between controversial items and randomly selected items. For randomly selected items, the MAE scores are 0.16, 0.18, and 0.36. For controversial items, these MAE scores rise to 1.45, 1.25, and 1.23. The higher accuracy for randomly selected items can be attributed to the fact that the majority of ratings in the dataset are a rating of five. Therefore, the averaging of these ratings yields accurate predictions. However, because controversial items contain both high and low ratings, the averaging of ratings results in a larger error. The B4 algorithm behaves differently from the other baseline algorithms because it merely predicts a random score.

The accuracy results for the B5 algorithm follow the same trend as the baseline algorithms. For randomly selected items, the MAE and RMSE scores are 0.19 and 0.38 respectively. For controversial items, these scores rise to 0.96 and 1.13 respectively. Victor (2010) attributes this to the accuracy and coverage trade-off: the greater the coverage, the less accurate the results (Victor, 2010).

The accuracy results for the trust-based algorithms P1 to P4 all follow the same trend as the baseline algorithms. For each trust-based algorithm, there is a large difference between the MAE and RMSE scores for randomly selected items and controversial items. It is noted that these MAE and RMSE scores are relatively competitive with the collaborative filtering scores. Therefore, these particular results reveal that the trust-based algorithms provide little gain in accuracy over collaborative filtering in this evaluation (Victor, 2010).

6.6.2 Observations on Victor’s (2010) evaluation of the Epinions products dataset

a) Coverage

The observations with regards to the coverage results of Victor’s (2010) evaluation on the

Epinions products dataset are listed below. These observations are made with reference to

Table 6.4 above.

The coverage results for baseline algorithms B1 to B4 are very similar to the coverage results in Victor’s (2010) evaluation of the reviews dataset. The reason for this lies in the fact that the same baseline algorithms are used for both datasets. As a result, the coverage observations are the same as the ones for the reviews dataset.

The coverage results for baseline algorithm B5 are considerably lower than those on the Epinions reviews dataset. On the Epinions reviews dataset, B5 had a coverage percentage of 94% for controversial items and 98% for randomly selected items. However, on the Epinions products dataset, this is reduced to 81% for controversial items and 79% for randomly selected items. This lower coverage is due to the fact that there are fewer ratings attributed in this dataset in comparison with the Epinions reviews dataset, making it harder to determine similarity (Victor, 2010).

The coverage results for the trust-based algorithms P1 to P4 are also lower in comparison to the Epinions reviews dataset. The reason for this is the same as for collaborative filtering: fewer ratings attributed within the dataset make it harder to determine trust-based recommendations (Victor, 2010). Algorithm P3 especially struggles with coverage, as it requires users who are both similar and trusted as the basis for a recommendation. These requirements make it even more difficult to determine a trust-based recommendation, which explains the much lower coverage percentages of 53% for randomly selected items and 57% for controversial items. One trust-based algorithm whose coverage is significantly higher than that of algorithms B5, P1, P2, and P3 is P4. In these results, the P4 algorithm achieves a coverage percentage of 90% for controversial items and 88% for randomly selected items. The P4 trust-based algorithm, EnsembleTrustCF, highlights the effectiveness in coverage of using both trust and similarity when determining recommendations.

b) Accuracy

The observations with regards to the accuracy results of Victor’s (2010) evaluation on the

Epinions products dataset are listed below. These observations are made with reference to

Table 6.4 above.

The accuracy results for baseline algorithms B1 to B4 follow a similar trend to the accuracy results in the Epinions reviews dataset. This trend revealed that the significant difference in MAE and RMSE scores between randomly selected items and controversial items is due to the large number of times the maximum rating of five is attributed. As a result, the accuracy observations are the same as the ones for the reviews dataset.

The accuracy results for baseline algorithm B5 also reflect the trends identified in the Epinions reviews dataset. The only difference to note, however, is that the accuracy results are worse in comparison to the accuracy results in the Epinions reviews dataset. According to Victor (2010), this worsened performance is due to the large number of controversial items in comparison to the Epinions reviews dataset.

The accuracy results for the trust-based algorithms P1 to P4 follow the same trend as identified in the Epinions reviews dataset, namely that there is a difference in MAE and RMSE scores between randomly selected and controversial items. However, these scores are worse. As with collaborative filtering, this could also be attributed to the large number of controversial items in the dataset (Victor, 2010). In line with the results of the Epinions reviews dataset, the accuracy scores of the trust-based algorithms are competitive with the B5 algorithm accuracy scores. Again, this indicates that, for this evaluation, there is little difference in accuracy between the B5 algorithm and the trust-based algorithms.


6.6.3 Observations on O’Doherty’s (2012) evaluation of the Epinions products dataset

a) Coverage

The observations with regards to the coverage results of O’Doherty’s (2012) evaluation on the

Epinions products dataset are listed below. These observations are made with reference to

Table 6.5 above.

The coverage results for baseline algorithm B2 remain consistently high across all experiments, i.e. where 10%, 20%, 30%, and 50% of the ratings were hidden. For each of these experiments, the coverage remains significantly higher than that of the other algorithms. The reason for this superior performance is that the algorithm merely averages the ratings for an item and does not have to process or filter the ratings (O’Doherty, 2012).

The coverage results for baseline algorithm B5 are among the poorest. The reason for this is that the dataset used is relatively sparse, making it more difficult to determine a recommendation (O’Doherty, 2012). The coverage results get worse as more ratings are hidden from the dataset. The coverage percentage starts out at 49.17% when 10% of the ratings are hidden and ends up at 20.94% when 50% of the ratings are hidden.

The trust-based algorithms T1, T2, and T4 maintain relatively high coverage across each experiment. These coverage results are much higher than those of the B5 algorithm. However, it is noted that the T3 algorithm achieves a much lower coverage percentage. As previously observed, this is due to the stringent requirements of the algorithm, which needs system users who are both similar and trusted.

b) Accuracy

The observations with regards to the accuracy results of O’Doherty’s (2012) evaluation on the

Epinions products dataset are listed below. These observations are made with reference to

Table 6.5 above.

The baseline B2 algorithm surprisingly performs quite well in terms of accuracy across all experiments in comparison to the other algorithms. As was previously observed and as is pointed out by O’Doherty (2012), the majority of ratings in this dataset have been attributed a rating of five. Therefore, an averaging algorithm performs quite well in terms of accuracy.

The accuracy results of the baseline B5 algorithm are worse in comparison to the T1, T2, and T4 algorithms. As per the observations noted in the section on coverage, this dataset is quite sparse, making it difficult to accurately predict a recommendation. Consequently, the MAE and RMSE scores are adversely affected.

The accuracy results of the T1, T2, and T4 algorithms are competitive and perform comparatively well against the other algorithms. This is because a trust-based implementation is able to maintain good performance in a sparse dataset. As with the coverage results, the T3 algorithm is the worst performing algorithm. The same observations noted in the coverage section for this algorithm apply here.

6.6.4 Motivation for a trust-based algorithm

The purpose of this section is to motivate the selection of a trust-based algorithm based on the

findings of the previous chapter as well as the results of the empirical analysis presented here. In the

previous chapter, the EnsembleTrustCF algorithm was nominated as the candidate trust-based

algorithm for use within the defined trust implementation framework.

Based on the empirical evaluation of trust-based methods in this chapter, the trust-based algorithm

nominated for use in the trust implementation framework is the EnsembleTrustCF algorithm. There

are a number of reasons for this.

In Victor’s (2010) evaluation of trust-based methods on the Epinions reviews and products

datasets, it was observed that the EnsembleTrustCF algorithm was not that much better in

terms of accuracy when compared with the collaborative filtering algorithm and other trust-

based algorithms. However, the MAE and RMSE scores in both results sets showed that the

EnsembleTrustCF algorithm remained competitive. In O’Doherty’s (2012) evaluation,

however, with ratings removed, the EnsembleTrustCF algorithm was able to perform far

better than its collaborative filtering counterpart with regards to accuracy. It was observed that

this was due to its trust-based implementation.

The coverage performance of the EnsembleTrustCF algorithm in both Victor’s (2010) and

O’Doherty’s (2012) evaluations was the best or tied with the best coverage scores in

comparison to the collaborative filtering and trust-based algorithms. In Victor’s (2010)

datasets, this increased coverage performance was due to the fact that both trust and

similarity are used in the algorithm to derive a recommendation. In O’Doherty’s (2012)

evaluation, the coverage performance relates to its trust-based implementation in a sparse

dataset. This means that recommendations can be determined more often.

Therefore, based on the observed results of the EnsembleTrustCF algorithm in both Victor’s (2010)

and O’Doherty’s (2012) evaluation as well as the evaluation conducted in the previous chapter, the

EnsembleTrustCF trust-based algorithm is nominated as the candidate algorithm to be implemented

in the trust implementation framework for this research.

6.7 Conclusion

In this chapter, the empirical analyses of Victor (2010) and O’Doherty (2012) were evaluated and studied. The purpose of this was to examine the real-world application and performance of the trust-based algorithms introduced in the previous chapter. The result of this analysis was that the


EnsembleTrustCF algorithm was identified and motivated as the most suitable trust-based algorithm

to be implemented in the trust implementation framework defined for the proposed group

recommender model.

Now that a trust implementation framework has been defined for the proposed group recommender

model, the various processes of group recommendation are to be discussed. In these processes, the

selected EnsembleTrustCF algorithm is used quite extensively as the methodology of implementing

trust in this research. As a result, the next part of this dissertation, Part I(b), details the following group

recommendation processes in chapters 7 to 10.

Preference elicitation. Chapter 7 details the process of eliciting a list of top-N

recommendations for each system user in a group.

Group-based rating prediction. Chapter 8 details how trust and personality are used to

influence a system user’s personal top-N recommendation list.

Aggregation. Chapter 9 details how a group recommendation is formed through the

aggregation of each system user’s trust and personality affected top-N recommendation list.

Satisfaction. Chapter 10 details how to determine satisfaction for both individuals as well as

group members.

Consequently, the first group recommendation process to be detailed is the preference elicitation

process. This is discussed in the next chapter.

Part I(b) Group recommendation processes

Chapters 7-10

Chapter 7 Group recommendation: preference elicitation

7.1 Introduction

Preference elicitation is the process of determining and extracting the personal preferences of each

member within a group (Jameson & Smyth, 2007; Salamó et al., 2012). The results of the preference

elicitation process are fed as input into the aggregation process where all personal preferences are

used as the basis of a group recommendation.

The importance of this step in the group recommendation process cannot be overstated. In order to

have the group recommender system determine a reliable and satisfactory recommendation, high

quality inputs are needed (Quijano-Sanchez et al., 2013). Therefore, the better the individual

preferences elicited for each group member, the better the input into the aggregation process. As a

result, this chapter provides the detail and analysis of the process that supports the definition of the

model for a group recommender system.

This chapter is set out as follows. The first section introduces a scenario. This forms the basis of the explanation and application not only of the preference elicitation process, but also of all further group recommendation processes. Thereafter, a background discussion is provided on the topic of preference elicitation. In the section that follows, the relevant algorithm is introduced, with an example application concluding the chapter.

7.2 Scenario

In this section, a scenario is laid out for the purposes of reference for future sections within this

chapter as well as future chapters on group recommendation. Consider the following scenario:

Assume that a system user, Adam, is touring Johannesburg with his friends Ben, Craig, Darren, and

Ed. On a particular day, they query their group recommender system for potential tourist attractions

for them to visit as a group. Consequently, Adam, the group administrator, adds his friends as inputs

into the system and queries the system to identify a tourist recommendation for them to visit. The

group as well as the relevant group configuration, formed through Adam’s use of the system, is

illustrated in Figure 7.1 below.

From a high-level perspective, with reference to Figure 7.1, each of Adam’s friends with their top rated

tourist recommendations can be identified. Additionally, Adam’s social network is also presented with

his explicitly assigned trust levels for each direct neighbour. Following on from this, the process of

preference elicitation is the focus of discussion.


Figure 7.1 depicts the group, consisting of Adam, Ben, Craig, Darren, and Ed, together with each member’s rated tourist attractions and Adam’s social network, in which explicit trust levels are assigned to his direct neighbours (Ben, Franck, Greg, Craig, Harry, Ivan, and James). The ratings shown in the figure are as follows.

Adam: Apartheid Museum (5 stars), Gold Reef City (4 stars), Montecasino (4 stars), Voortrekker Monument (3 stars)

Ben: Union Buildings (5 stars), Montecasino (4 stars), Church Square (3 stars), Lilliesleaf Farm (2 stars)

Craig: Orlando Towers (5 stars), Gold Reef City (5 stars), Johannesburg Zoo (4 stars), Botanical Gardens (4 stars)

Darren: Montecasino (5 stars), Nelson Mandela Square (3 stars), Freedom Park (3 stars), Voortrekker Monument (3 stars)

Ed: Planetarium (5 stars), Apartheid Museum (4 stars), Union Buildings (4 stars), Brightwater Commons (4 stars)

Figure 7.1: Group recommendation scenario
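For later reference, the scenario data of Figure 7.1 can be represented as follows. The structure is an illustrative choice, and the explicit trust values in Adam’s social network are shown in the figure itself rather than reproduced here.

```python
# Group members and their rated tourist attractions (in stars), as in Figure 7.1.
group = ["Adam", "Ben", "Craig", "Darren", "Ed"]

ratings = {
    "Adam":   {"Apartheid Museum": 5, "Gold Reef City": 4, "Montecasino": 4, "Voortrekker Monument": 3},
    "Ben":    {"Union Buildings": 5, "Montecasino": 4, "Church Square": 3, "Lilliesleaf Farm": 2},
    "Craig":  {"Orlando Towers": 5, "Gold Reef City": 5, "Johannesburg Zoo": 4, "Botanical Gardens": 4},
    "Darren": {"Montecasino": 5, "Nelson Mandela Square": 3, "Freedom Park": 3, "Voortrekker Monument": 3},
    "Ed":     {"Planetarium": 5, "Apartheid Museum": 4, "Union Buildings": 4, "Brightwater Commons": 4},
}
```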

7.3 Background

In this section, a background discussion is provided with regards to the practical application of preference elicitation within the group recommender model.

The preference elicitation methodology followed by this research is to apply a trust-based algorithm in

order to generate a top-N list of preferences for each individual group member. Because preference

elicitation is analogous to individual user recommendation, most group recommender systems merely make use of simpler algorithms, with collaborative filtering being the most common (Baltrunas et

al., 2010; Cantador & Castells, 2012; Chen et al., 2008; Kim et al., 2010; Masthoff, 2011; Pera & Ng,

2012). However, if the quality of individual user preferences can be improved, a more satisfactory

group recommendation will be achieved (Quijano-Sanchez et al., 2013). Because of the inherent

advantages of using trust, as discussed in previous chapters, this approach is deemed beneficial by

the researcher.

Following this design decision, the EnsembleTrustCF algorithm (Victor, 2010) is chosen as the means

for preference elicitation. This trust-based algorithm may seem rather simple and intuitive to apply to

generate a recommended rating for a specific group member. However, the difficulty faced is that a

top-N list of preferences needs to be derived for each group member. The EnsembleTrustCF

algorithm (Victor, 2010) only has the ability to do one recommendation at a time. As a result, the

algorithm needs to be adapted so that a top-N list can be determined as per the requirement.


The approach taken to do this adaptation is based on the work of Jamali and Ester (2009). In this

work, the authors detail how they modified their TrustWalker algorithm to output a top-N list of

recommendations. A similar approach is followed in adapting the EnsembleTrustCF algorithm (Victor,

2010). The specific process of adapting this algorithm is the topic of discussion in the next section.

7.4 Adapting the EnsembleTrustCF algorithm

The manner and means by which the EnsembleTrustCF algorithm (Victor, 2010) is adapted to

generate a set of top-N recommendations, instead of a single recommendation, is now discussed.

This section is structured as follows.

1. The prerequisites for the adapted EnsembleTrustCF algorithm (Victor, 2010) are noted.

2. The process that is followed in this implementation of the algorithm is split into three steps.

o In the first step, those users who are referenced by the algorithm are determined.

o In the second step, the top-N recommendations are retrieved from each of these

nominated users.

o In the third and last step, a top-N list of recommendations is determined for the

source user.

Each of these steps is detailed below.

7.4.1 Define prerequisites

In the original analysis of the EnsembleTrustCF algorithm (Victor, 2010), a number of prerequisites

within the formula were noted. However, for the purposes of generating a top-N list of

recommendations, some of these prerequisites need to be changed. The adapted EnsembleTrustCF

algorithm (Victor, 2010) has the following prerequisites.

A maximum depth is defined to ensure that the algorithm does not go too far out

(Massa & Avesani, 2007a; Golbeck, 2005; Victor, 2010).

o This is to ensure that the advantages of trust can be leveraged. If the algorithm goes

too far out, trust is less useful and reliable.

To be included as part of the recommendation calculation, one of the two criteria below has to be met (Victor, 2010).

o A system user must have a trust value above a predefined trust rating threshold or;

o A system user must have a positive similarity measure, where similarity is determined

by the Pearson correlation coefficient.

N, an integer, needs to be defined so that the algorithm can know how many

recommendations it has to generate for the source user.

o For example, if N is set to four, then the top four recommendations from the source user's recommendation list are selected.

A recommendation item is only added to the top-N list of preferences for a user if it is

above a predefined rating threshold (Massa & Avesani, 2007a; Golbeck, 2005; Jameson

& Smyth, 2007; Victor, 2010).


o This ensures that only the highest recommendations are considered and that

negative ratings are excluded.

No recommendations already rated or experienced by an individual user are

considered in the generation of their top-N list of preferences (Kim et. al., 2010).

o This is to ensure that the same recommendation items are not continually

recommended for the specific source user.

Now that the relevant prerequisites have been defined for the adapted EnsembleTrustCF algorithm,

an explanation as to how it is implemented using the three identified steps can begin.

7.4.2 Stepwise preference elicitation process

a) Step 1: Select trusted or similar users

The first step in the adapted EnsembleTrustCF algorithm is to select those users from which

recommendations will be queried (Victor, 2010). This is done in the same way as in the

original algorithm.

In the original EnsembleTrustCF algorithm, users within a predefined depth were selected

based on either their trust value or their similarity measure with the source user (Victor, 2010).

In other words, if a user has a direct or inferred trust valuation above a threshold, they are

selected (Victor, 2010). Alternatively, if a user has a positive Pearson correlation coefficient,

they are also considered (Victor, 2010). As with the original algorithm, trust is prioritised over

similarity (Victor, 2010). Therefore, those users who are trusted are considered ahead of

those who have positive similarity measures (Victor, 2010).

b) Step 2: Retrieve recommendations

Once the recommender system has identified the users to be considered by the adapted

algorithm, a list of recommendations can then be retrieved. This is done as follows.

Each user considered by the adapted EnsembleTrustCF algorithm (Victor, 2010) is traversed

and queried for their respective top-N recommendations. The top-N recommendations for a

particular user are all those recommendations above a predefined rating threshold and not

already rated by the source user. A top-N set of recommendations is determined by

multiplying the source user’s trust or similarity score for the relevant user with that user’s

explicit rating score for each recommendation item. This list of recommendation items is then

ordered from highest to lowest with the top-N recommendations being selected. If there are

more than N recommendations in this list, then only the top-N recommendations are retrieved.

If there are less than N recommendations, then the entire list is retrieved. If there are no

recommendations returned, the algorithm simply polls the next user.


Once the algorithm has visited each trusted and similar user, a list of all retrieved

recommendations is returned to the source user.

c) Step 3: Select the top-N recommendations

The last step required in order to obtain a group member’s top-N preference list is to select a

final top-N list of recommendations. This is done by following a twofold process.

If the user from which the recommendation was retrieved is a trusted user, then a

predicted trust rating is determined for the recommendation item using the

EnsembleTrustCF algorithm. However, if the user from which the recommendation

was retrieved is a similar user, then the similarity weighted rating score calculated in

the previous step is used.

The list of similarity and trust scored recommendation items are merged together into

a single list and ordered from the highest score to the lowest score.

Upon the conclusion of these two steps, the top-N recommendations are taken from the final

list and presented to the group member as their top-N set of preferences. To ensure that

there are not duplicate recommendation items in the list, once an item has been selected, it is

removed from the list with the next highest unique item selected next.
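For illustration, a minimal Python sketch of these three steps is given below. The nested dictionaries and function names are assumptions made purely for the sketch and do not form part of the algorithm as defined by Victor (2010); in particular, the full EnsembleTrustCF prediction for trusted users is approximated here by the trust-weighted score from the second step.

# Minimal sketch of the adapted top-N preference elicitation described above.
# All data structures (nested dictionaries) and names are illustrative.

def select_neighbours(source, trust, similarity, trust_threshold):
    """Step 1: keep users trusted above the threshold; otherwise, users with
    a positive similarity measure. Trust takes priority over similarity."""
    trusted = {u: t for u, t in trust.get(source, {}).items() if t > trust_threshold}
    similar = {u: s for u, s in similarity.get(source, {}).items()
               if s > 0 and u not in trusted}
    return trusted, similar

def retrieve_top_n(weight, neighbour_ratings, rated_by_source, n, rating_threshold):
    """Step 2: weight each eligible rating of a neighbour by the trust or
    similarity score and keep that neighbour's top-N items."""
    items = [(item, rating * weight)
             for item, rating in neighbour_ratings.items()
             if rating > rating_threshold and item not in rated_by_source]
    return sorted(items, key=lambda pair: pair[1], reverse=True)[:n]

def elicit_top_n(source, trust, similarity, ratings, n,
                 trust_threshold, rating_threshold):
    """Step 3: merge all retrieved lists, remove duplicates, return the top-N."""
    rated_by_source = set(ratings.get(source, {}))
    trusted, similar = select_neighbours(source, trust, similarity, trust_threshold)
    merged = []
    for user, weight in {**similar, **trusted}.items():
        merged += retrieve_top_n(weight, ratings.get(user, {}),
                                 rated_by_source, n, rating_threshold)
    merged.sort(key=lambda pair: pair[1], reverse=True)
    top_n, seen = [], set()
    for item, score in merged:
        if item not in seen:
            seen.add(item)
            top_n.append((item, score))
        if len(top_n) == n:
            break
    return top_n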

In order to obtain a greater practical understanding of this preference elicitation process, the

adapted EnsembleTrustCF algorithm (Victor, 2010) is illustrated next.

7.5 Example application

The preference elicitation process is now detailed with reference to the scenario outlined in Section

7.2. For the purposes of this illustration, the elicitation of a list of top-N preferences for Adam is the

focus. However, the preference elicitation process for Ben, Craig, Darren, and Ed follows a similar

pattern.

The discussion of the example application follows the same structure as in the previous section.

Firstly, the prerequisites are defined for the adapted EnsembleTrustCF algorithm (Victor, 2010). In the

next section, the three preference elicitation steps of selecting users, retrieving recommendations,

and selecting a top-N set of preferences follows.

7.5.1 Defining prerequisites for the algorithm

In order to elicit Adam’s top-N preference list, five prerequisites need to be defined:

Define the maximum depth of the algorithm. The maximum depth is set to two meaning

that the algorithm does not go out to more than two users from the source.


Identify those users who are considered by the algorithm. Either trusted or similar users,

where the trust or similarity measure is above a threshold, as well as users who are within the

predefined maximum depth measure are considered. For this scenario application, only

trusted users are considered. Therefore, the trust threshold is set to seven. This means that

Ben, Franck, and Greg are considered.

Define N, the number of recommendations to retrieve from each user as well as the number

of recommendations finally presented to the source user. For the purposes of this example

application, N is set to four.

Define the rating threshold. This has been set to two out of five, meaning that only ratings

which exceed a rating of two are considered for each user.

Ensure that the recommendations already rated by the source user are not included in

the top-N recommendation list for that specific user. Because this prerequisite is enforced

whilst polling each trusted or similar user, it is detailed practically in further sections.

Each of the above predefined prerequisite variables is given in Table 7.1 for the purposes of summary

and easy reference.

N – Number of recommendations to retrieve from each polled user: 4
Maximum depth – How far out the algorithm will traverse: 2
Trust threshold – The minimum trust value which must be exceeded if users are to be polled: 7 (out of 10)
Rating threshold – The minimum rating value which must be exceeded for a recommendation to be considered: 2 (out of 5)

Table 7.1: Preference elicitation: Variables for example application

7.5.2 Stepwise preference elicitation process

a) Step 1: Select trusted or similar users

Only those users who Adam trusts are considered by the adapted EnsembleTrustCF

algorithm (Victor, 2010). Additionally, as per the prerequisites defined in the previous section,

each of these users must have a trust value greater than seven and be within the maximum

depth limit of two. With reference to Adam’s social network in Figure 7.1, it can be seen that the users who meet these criteria are Ben, Franck, and Greg. Each is therefore polled in the next step.


b) Step 2: Retrieve recommendations

The adapted EnsembleTrustCF algorithm (Victor, 2010) now polls Ben, Franck, and Greg for

their top-4 list of recommendations, discussed next.

Assume that these users each have rating profiles as detailed in Table 7.2 below.

Ben (trust score: 8) – Union Buildings: 5 stars; Montecasino: 4 stars; Church Square: 3 stars; Lilliesleaf Farm: 2 stars
Franck (trust score: 9) – Orlando Towers: 5 stars; Gold Reef City: 5 stars; Johannesburg Zoo: 4 stars; Botanical Gardens: 4 stars
Greg (trust score: 8) – Montecasino: 5 stars; Nelson Mandela Square: 3 stars; Freedom Park: 3 stars; Voortrekker Monument: 3 stars

Table 7.2: Example application: Rating and trust profiles for Ben, Franck, and Greg

The algorithm visits Ben first and queries him for his top-4 recommendations. The list of top-4

recommendations retrieved needs to be within the constraints of the defined prerequisites.

Specifically, in this case, only those recommendations with a rating above two and those

recommendations which have not already been rated by Adam are considered. Therefore,

because Montecasino has already been rated by Adam and Lilliesleaf Farm does not exceed the rating threshold of two, these recommendations are removed from Ben’s top-4 recommendation list.

Consequently, the final top-4 list returned by Ben includes the Union Buildings and Church

Square, illustrated in Table 7.3 below.

Ben (trust score: 8)
Location           Rating
Union Buildings    5 stars
Church Square      3 stars

Table 7.3: Example application: Ben’s top-4 recommendation list

Ben’s top-4 recommendation list is now amended by multiplying the explicit trust rating with

the relevant rating score. Therefore, a top-4 list of trust amended ratings for Ben is given in

Table 7.4 below.

Ben
Location           Rating     Trust score    Trust amended rating score
Union Buildings    5 stars    8              40
Church Square      3 stars    8              24

Table 7.4: Example application: Ben’s top-4 trust amended recommendation list
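The trust-amended scores in Tables 7.3 and 7.4 can be checked with a few lines of Python; the dictionaries below simply restate Ben’s rating profile and Adam’s already-rated locations from the scenario, and the filtering rules are the prerequisites defined earlier.

# Checking Ben's filtered, trust-amended list (Tables 7.3 and 7.4).
ben_ratings = {"Union Buildings": 5, "Montecasino": 4,
               "Church Square": 3, "Lilliesleaf Farm": 2}
adam_rated = {"Apartheid Museum", "Gold Reef City",
              "Montecasino", "Voortrekker Monument"}
trust_in_ben, rating_threshold = 8, 2

ben_top = sorted(((item, rating * trust_in_ben)
                  for item, rating in ben_ratings.items()
                  if rating > rating_threshold and item not in adam_rated),
                 key=lambda pair: pair[1], reverse=True)
print(ben_top)  # [('Union Buildings', 40), ('Church Square', 24)]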


After Ben’s recommendation list has been determined, Franck and Greg are polled in a

similar manner with the same restrictions and calculations being applied. The final outcome of

this process of top-4 recommendation retrieval is presented in Table 7.5 below.

User      Top-4 recommendations     Rating     Trust score    Trust amended rating score
Ben       Union Buildings           5 stars    8              40
          Church Square             3 stars    8              24
Franck    Orlando Towers            5 stars    9              45
          Johannesburg Zoo          4 stars    9              36
          Botanical Gardens         4 stars    9              36
          Freedom Park              3 stars    9              27
Greg      Nelson Mandela Square     3 stars    8              24
          Freedom Park              3 stars    8              24
          Voortrekker Monument      3 stars    8              24

Table 7.5: Example application: Filtered top-4 recommendations for Ben, Franck, and Greg

The top-4 recommendation lists from Ben, Franck, and Greg are passed on to the next step

so that a top-4 recommendation list can be determined for Adam. This next step is detailed

below.

c) Step 3: Select the top-N recommendations

Now that the system has a top-4 filtered list of recommendations from each relevant user

polled from Adam’s trust network, the system forms a single, merged top-4 preference list for

Adam. Because Ben, Franck, and Greg are trusted users, the first step in this sub-process

entails the calculation of a predicted rating by applying the EnsembleTrustCF algorithm to

each identified recommendation item. Since the EnsembleTrustCF algorithm was detailed in

previous chapters and because the purpose of this scenario is to detail the preference

elicitation process, assume that the predicted rating for each top-N recommendation item

remains as per Table 7.5 above.

The second step in this process is to determine a final top-4 recommendation list for Adam.

This is achieved by merging the top-4 recommendation lists of each trusted user into a single

list and ordering this list from the highest rating score to the lowest rating score. Thereafter,

the top-4 recommendations in this merged list comprise Adam’s personalised top-4

recommendation list. To ensure that there are no duplicates in the top-4 recommendation list,

all further instances of that same recommendation type are removed. This process is detailed

in Figure 7.2 below.


Merged, ordered recommendation list:

Top-4 recommendation      Rating     Trust score    Final score
Orlando Towers            5 stars    9              45
Union Buildings           5 stars    8              40
Botanical Gardens         4 stars    9              36
Johannesburg Zoo          4 stars    9              36
Freedom Park              3 stars    9              27
Church Square             3 stars    8              24
Freedom Park              3 stars    8              24
Nelson Mandela Square     3 stars    8              24
Voortrekker Monument      3 stars    8              24

Adam’s final top-4 preference list:

Top-4 recommendation list     Score
Orlando Towers                45
Union Buildings               40
Botanical Gardens             36
Johannesburg Zoo              36

Figure 7.2: Example application: Generation of Adam’s top-4 preference list

The final top-4 recommendation preference list returned to Adam, as determined by the preference

elicitation process, contains Orlando Towers, the Union Buildings, the Botanical Gardens, and the

Johannesburg Zoo.
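A brief Python sketch of this merge-and-deduplicate step is shown below; the merged list simply restates the trust-amended scores from Figure 7.2.

# Merging the trust-amended lists and selecting Adam's final top-4,
# removing duplicate locations as described above.
merged = [("Orlando Towers", 45), ("Union Buildings", 40),
          ("Botanical Gardens", 36), ("Johannesburg Zoo", 36),
          ("Freedom Park", 27), ("Church Square", 24),
          ("Freedom Park", 24), ("Nelson Mandela Square", 24),
          ("Voortrekker Monument", 24)]

top_4, seen = [], set()
for item, score in sorted(merged, key=lambda pair: pair[1], reverse=True):
    if item not in seen:
        seen.add(item)
        top_4.append((item, score))
    if len(top_4) == 4:
        break

print(top_4)
# [('Orlando Towers', 45), ('Union Buildings', 40),
#  ('Botanical Gardens', 36), ('Johannesburg Zoo', 36)]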

7.6 Conclusion

The purpose of this chapter was to define the preference elicitation process to be implemented in the

proposed group recommender model. As a result, the chapter detailed how the EnsembleTrustCF

algorithm (Victor, 2010) is adapted for the preference elicitation process. Thereafter, an explanation

and illustration was given as to its implementation. The final result of this process is a top-N list of

recommendation preferences generated for each specific group member.

The next process to be defined in order for a final group recommendation to be calculated is that of

aggregation. Typically, one of two potential approaches could be adopted for this process. The first

option is to merely aggregate or merge the top-N lists of each group member into a single, top-N

group list (Baltrunas et al., 2010). However, this approach has a number of issues.

Once the top-N lists of each group member are combined, there are items which other group

members have not yet rated. There is thus a need to manage such a scenario.


The top-N lists of each group member only consider the preferences of the ego and not the

other group members (Cantador & Castells, 2012; Chen et al., 2008; Gartrell et al., 2010;

Quijano-Sanchez et al., 2013). Therefore, the social interactions, preferences, and

personalities which make up the group are ignored, making it more difficult to generate a

satisfactory group recommendation (Cantador & Castells, 2012; Chen et al., 2008; Gartrell et

al., 2010; Quijano-Sanchez et al., 2013).

The second option, and the option adopted by this research, is to split the aggregation process into

two separate steps (Baltrunas et al., 2010). The first step performs group rating prediction (Baltrunas et al., 2010), resolving both of the issues identified above.

Thereafter, in the second step, aggregation can be performed (Baltrunas et al., 2010). As a result, the

next two chapters discuss and analyse each of these steps and detail how a final, satisfactory group

recommendation can be determined whilst simultaneously solving the two issues identified above.

Chapter 8 Group recommendation: rating prediction

8.1 Introduction

In the previous chapter, the topic of preference elicitation was introduced and discussed in detail. For

the group recommendation process, this step results in a top-N list of personalised recommendation

items for each group member as determined by the modified EnsembleTrustCF algorithm (Victor,

2010).

As mentioned, the next step in calculating a final group recommendation is to determine a predicted

rating for each group member’s list of top-N recommendation items. This is necessary for two

reasons.

When the group members’ top-N recommendation lists are combined, there are inevitably

recommendations that have not yet been rated by other group members. Therefore, a rating

needs to be determined for each of these recommendations.

Each group member’s top-N recommendation list only considers the preferences of that individual and does not consider the preferences of the other group members

and the group as a whole (Cantador & Castells, 2012; Gartrell et al., 2010; Quijano-Sanchez

et al., 2013). As a result, the needs and interests of the group in its entirety are ignored (Cantador & Castells, 2012; Chen et al., 2008; Gartrell et al., 2010; Quijano-Sanchez et al., 2013).

Therefore, the topic and focus of this chapter is to introduce a group rating prediction algorithm which

satisfies both of these conditions for implementation in the proposed group recommender model.

It is important to note that at this stage, the trust-based recommendation algorithms previously

covered cannot be used. Even though they may satisfy the first condition, they fail to address the

second condition. However, group-based rating prediction methods that do attempt to overcome both

of these conditions in a comprehensive manner are those that implement personality and trust

(Cantador & Castells, 2012; Gartrell et al., 2010; Quijano-Sanchez et al., 2013). In considering and

applying the personality types of other group members as well as the relevant trust relationships

between group members, a recommendation sensitive to the composition of the group can be

determined. Both of these factors as well as their relevant implementations are discussed in further

sections.

The structure of this chapter is laid out as follows. In the first section, the scenario is updated and set

for the purposes of reference and illustration in this chapter. In the second section, the topic of

personality is introduced and discussed from an implementation perspective. In the third section, trust

and its application to the group rating prediction process is detailed. In the fourth section, both of

these factors are integrated and analysed from an algorithmic perspective in terms of how each of

them can be practically applied when predicting a group rating. Once this has been completed, the


fifth section evaluates the performance of the algorithms discussed. The final section closes with an

example application, as applied to the scenario.

8.2 Scenario

The scenario here continues from the scenario presented in the previous chapter on preference

elicitation. The scenario in the previous chapter concluded with a top-4 preference list of tourist

recommendations for group member Adam, as shown in Table 8.1 below.

Top-4 recommendation list     Score
Orlando Towers                45
Union Buildings               40
Botanical Gardens             36
Johannesburg Zoo              36

Table 8.1: Scenario – Adam’s top-4 recommendation list

Assume that, after the preference elicitation process has been executed for each group member, the top-4 lists for each are as per Table 8.2 below.

Adam                 Ben                  Craig                   Darren               Ed
Orlando Towers       Apartheid Museum     Brightwater Commons     Gold Reef City       Voortrekker Monument
Union Buildings      Planetarium          Freedom Park            Johannesburg Zoo     Freedom Park
Botanical Gardens    Botanical Gardens    Church Square           Planetarium          Nelson Mandela Square
Johannesburg Zoo     Johannesburg Zoo     Voortrekker Monument    Botanical Gardens    Johannesburg Zoo

Table 8.2: Scenario - Top-4 recommendation list for each group member

Now that the top-4 preferences have been determined for each group member, each of these preferences needs to be rated by every group member. The result of this process is a rating matrix covering all member and preference combinations.

For the purposes of the scenario as well as the rating matrix, four assumptions are made.

The rating score for each group member’s top-4 recommendation list is divided by ten.

Therefore, Adam’s scores for his top-4 recommendation list would become 4.5, 4.0, 3.6, and

3.6 respectively. The reason for this scaling is detailed in further sections; a brief sketch of the calculation follows this list.

For the purposes of this scenario, it is assumed that each group member’s rating score has

the same score as Adam’s. As an example, Ben’s top rating of the Apartheid Museum is 4.5,

the Planetarium is 4.0, the Botanical Gardens is 3.6, and the Johannesburg Zoo is 3.6.


Those items which have already been personally rated by a specific user remain unchanged.

Therefore, because Adam has already visited the Apartheid Museum and has given it a rating

of 5.0 stars, the rating remains the same.

For those recommendation items where no explicit rating has been determined, a random

rating is assigned. These ratings lie between 2.0 and 3.5, in increments of 0.5. In normal circumstances, such ratings are determined by making use of the

EnsembleTrustCF algorithm (Victor, 2010) as this is still considered to be the user’s individual

rating. However, because this calculation was detailed in previous chapters, these numbers

are merely assigned for the purposes of illustration.
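The scaling described in the first assumption amounts to a single division, illustrated below for Adam’s elicited scores.

# Scaling Adam's elicitation scores to the five-star rating scale (divide by ten).
adam_top_4 = {"Orlando Towers": 45, "Union Buildings": 40,
              "Botanical Gardens": 36, "Johannesburg Zoo": 36}
scaled = {item: score / 10 for item, score in adam_top_4.items()}
print(scaled)
# {'Orlando Towers': 4.5, 'Union Buildings': 4.0,
#  'Botanical Gardens': 3.6, 'Johannesburg Zoo': 3.6}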

Upon consideration of each of these assumptions, a rating matrix is formed as shown in Table 8.3

below.

Location                 Adam    Ben     Craig   Darren   Ed
Apartheid Museum         5.00    4.50    3.50    3.50     4.00
Botanical Gardens        3.60    3.60    4.00    3.60     3.50
Brightwater Commons      3.60    3.50    4.50    3.00     4.00
Church Square            3.50    3.00    3.60    2.50     3.00
Freedom Park             3.00    3.00    4.00    3.00     4.00
Gold Reef City           4.00    2.50    5.00    4.50     2.50
Johannesburg Zoo         2.50    3.60    4.00    4.00     3.60
Nelson Mandela Square    2.00    2.00    3.00    3.00     3.60
Orlando Towers           4.50    2.50    5.00    2.00     2.00
Planetarium              2.50    4.00    2.50    3.60     5.00
Union Buildings          4.00    5.00    2.00    2.50     4.00
Voortrekker Monument     3.00    3.00    3.60    3.00     4.50

Table 8.3: Scenario – Rating matrix

With reference to the above table, those ratings which are assigned random numbers, as per the

fourth assumption, have bold text applied for the purposes of easy reference.

Now that the scenario has been defined and laid out for this chapter, the concepts of personality as

well as trust are practically and formally described, starting with personality in the next section.

8.3 Personality

In this section, the topic of social influence and personality is discussed. The section begins with a

presentation of various implementations of social influence. In the following section, a candidate

methodology for implementing social influence in this research is motivated. Thereafter, the chosen

social influence implementation is detailed. Finally, the section concludes with an algorithm detailing


how social influence can be processed into a numeric measure for use in the proposed group

recommender model.

8.3.1 Approaches to cater for social influences

Current research identifies that group recommender systems should address the social influences

that have an effect on a group (Cantador & Castells, 2012; Chen et al., 2008; Gartrell et al., 2010;

Herr et al., 2012; Quijano-Sanchez et al., 2013). These social influences need to be considered,

leveraged, and catered for by the group recommender system if it is to calculate recommendations

which can satisfy each group member as well as the group in its entirety (Cantador & Castells, 2012;

Gartrell et al., 2010; Herr et al., 2012). Consequently, a number of different methodologies have been

defined in order to cater for these social influences.

In the research conducted by Gartrell et al. (2010) on incorporating social aspects within the process

of group recommendation, three factors affecting group recommendation are identified. These three

factors are social descriptors, expertise descriptors, and dissimilarity descriptors (Gartrell et al., 2010).

The relevant factor in this work is the social descriptor (Gartrell et al., 2010). The social descriptor

affects the group recommendation based upon the social strength between members of the group

(Gartrell et al., 2010). For example, the relationship between a husband and a wife is stronger than

the relationship between acquaintances (Gartrell et al., 2010). Social strength information is used to

determine how to process the individual top-N recommendation lists of each group member by

selecting an appropriate aggregation method.

Another research approach that caters for social influences is that of Bourke et al. (2011). In this

research, social influences are used as weightings in formulas (Bourke et al., 2011). Based upon a

number of social factors such as similarity and a high level of communication of group members with

the target user, a relevant aggregation model is weighted appropriately for the group member

concerned (Bourke et al., 2011).

An additional social weighting approach is adopted by Berkovsky and Freyne (2010) in their group-

based family recipe recommender system that suggests recipes to families at meal times (Berkovsky

& Freyne, 2010). In this research, three different social models are analysed and proposed

(Berkovsky & Freyne, 2010).

The first social model assigns social weights based on a family role where the weight for an

applicant is 0.5, the weight for a spouse is 0.3, and the weight for a child is 0.1 (Berkovsky &

Freyne, 2010).

The second social model weighs group members based upon the engagement of that group

member versus the engagement of other system users in that same role (Berkovsky &

Freyne, 2010). In this case engagement is defined as the contribution of ratings (Berkovsky &


Freyne, 2010). Therefore, the more ratings a particular user contributes in that particular role, the greater the weighting (Berkovsky & Freyne, 2010).

The third social model is similar to the second social model with the exception that the

weighting is in accordance with a user’s engagement with their own family unit (Berkovsky &

Freyne, 2010).

The conclusion of this research after an evaluation is that the best performing social model is the third

one and that predefined weights should be avoided (Berkovsky & Freyne, 2010).

A more novel approach for determining social influences within a group recommender system is the

approach adopted by Quijano-Sanchez et al. (2013) and Recio-Garcıa et al. (2009). In this research,

social influences within a group are analysed from the perspective of conflict (Quijano-Sanchez et al.,

2013; Recio-Garcıa et al., 2009). The rationale for this approach is that a group recommendation

decision often requires consensus and discussion amongst the group, which can often cause conflict

within the group (Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009). This means that the

personalities within the group often can react in different ways (Quijano-Sanchez et al., 2013; Recio-

Garcıa et al., 2009). Some people are naturally more assertive and determined to get their way

(Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009). Others, however, would rather avoid the

conflict altogether and let others have their way (Quijano-Sanchez et al., 2013; Recio-Garcıa et al.,

2009). The research of Quijano-Sanchez et al. (2013) and Recio-Garcıa et al. (2009) caters for this

challenging scenario by determining a conflict personality type (Quijano-Sanchez et al., 2013; Recio-

Garcıa et al., 2009). Each member completes a Thomas-Kilmann Conflict Mode Instrument (TKI)

personality test, a well-known and commonly used test that has been designed to aid groups of

people in conflict situations (Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009; Schaubhut,

2007). The result of the TKI test is a list of dominant and least dominant personality types which can

be used as a weighting in a relevant rating prediction algorithm (Quijano-Sanchez et al., 2013; Recio-

Garcıa et al., 2009).

For this research, the approach adopted is that of Quijano-Sanchez et al. (2013) and Recio-Garcıa et

al. (2009). The motivation for adopting this approach is the topic of the next section.

8.3.2 Motivating the Thomas-Kilmann Instrument (TKI) test approach

There are a number of motivations to use the TKI test approach of Quijano-Sanchez et al. (2013) and

Recio-Garcıa et al. (2009) for this research, namely:

The application of the TKI test supports a generic approach to cater for social

influences. This means that it can be applied within any context and therefore, within any

type of group recommender system.


The TKI test results in a personalised representation of social influence for each

specific group member. Therefore, each group member has their own personality profile

which is always kept and maintained with them.

The TKI test approach is dynamic. Irrespective of the group context a particular user finds

themselves in, this social information can always be leveraged upon and used when

determining a group recommendation.

The test only has to be completed once and it takes about ten to fifteen minutes to do

(Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009). Additionally, the benefit of having

a user’s specific personality type considered is deemed to be a worthwhile payoff with regards

to the time spent.

In the next section, the TKI test and its implementation within a group recommender system are discussed.

8.3.3 Determining personality: The TKI test

The TKI test, developed by Kenneth W. Thomas and Ralph H. Kilmann in the early 1970s, is a test to

aid with conflict resolution between people in groups (CPP, Inc., 2009). It has been in use for over 35

years, especially within corporate contexts, where it is a leading measure in assessing group

dynamics and assisting in conflict management (CPP, Inc., 2009; Quijano-Sanchez et al., 2013;

Recio-Garcıa et al., 2009). The test is made up of thirty multiple choice questions, each question

having two possible answers (Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009; Schaubhut,

2007). Once all these questions have been completed by the relevant person, the scores of the test

are totalled (Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009; Schaubhut, 2007). The final

result of this score is a list of dominant and least dominant personality types where the individual

being tested could reflect five possible personality types, with each personality type reflecting a

balance of two aspects: assertiveness and cooperativeness (CPP, Inc., 2009; Quijano-Sanchez et al.,

2013; Recio-Garcıa et al., 2009; Schaubhut, 2007).

According to Schaubhut (2007), assertiveness is defined as how much an individual wants to get their

own way or have their own preferences met, whereas cooperativeness is defined as how much an

individual seeks to have the preferences of other group members met (Recio-Garcıa et al., 2009;

Schaubhut, 2007). The five defined personality types are based upon how assertive and cooperative

a particular group member is (CPP, Inc., 2009; Quijano-Sanchez et al., 2013; Recio-Garcıa et al.,

2009; Schaubhut, 2007). The five personality types, as defined by the TKI test, are competing,

collaborating, compromising, avoiding and accommodating (CPP, Inc., 2009; Quijano-Sanchez et al.,

2013; Recio-Garcıa et al., 2009; Schaubhut, 2007), as shown in Figure 8.1 below.


[Figure 8.1 plots the five TKI personality types on two axes: assertiveness (vertical, from unassertive to assertive) and cooperativeness (horizontal, from uncooperative to cooperative). Competing sits at high assertiveness and low cooperativeness, collaborating at high assertiveness and high cooperativeness, compromising in the middle, avoiding at low assertiveness and low cooperativeness, and accommodating at low assertiveness and high cooperativeness.]

Figure 8.1: TKI personality profiles (Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009; Schaubhut, 2007)

With reference to the diagram, each personality type is briefly described below.

Competing. A competing personality type has a high level of assertiveness and a low level of

cooperativeness (CPP, Inc., 2009; Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009;

Schaubhut, 2007). A group member with this personality type argues for the sake of winning,

or stubbornly stands for what they believe is a justified cause (Recio-Garcıa et al., 2009).

Collaborating. A collaborating personality type has a high level of assertiveness and

cooperativeness (CPP, Inc., 2009; Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009;

Schaubhut, 2007). This is a personality type that is more willing to engage and be fair with

other group members to make a final decision to satisfy the whole group (Recio-Garcıa et al.,

2009).

Compromising. A compromising personality type has an average level of assertiveness and

cooperativeness (CPP, Inc., 2009; Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009;

Schaubhut, 2007). This is a personality type that, like the collaborating personality type,

attempts to come to a moderately satisfactory agreement for themselves as well as the rest of

the group (Recio-Garcıa et al., 2009).

Avoiding. An avoiding personality type has a low level of cooperativeness and assertiveness

(CPP, Inc., 2009; Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009; Schaubhut, 2007).

This personality type avoids a situation or removes themselves from a conflict situation in

order to enter at a later point (Recio-Garcıa et al., 2009).

Accommodating. An accommodating personality type has a low level of assertiveness but a

high level of cooperativeness (CPP, Inc., 2009; Quijano-Sanchez et al., 2013; Recio-Garcıa et

al., 2009; Schaubhut, 2007). This personality type happily allows other group members to get

their way or unwillingly submits to the demands of a more assertive personality (Recio-Garcıa

et al., 2009).


Once one or more dominant and least dominant personality types have been determined for each individual

group member, the system needs to have the ability to leverage this information for the sake of group

recommendation. The next section focuses on how this is achieved with each personality type.

8.3.4 Applying the TKI results

Once a user has completed the TKI test and they have obtained a list of one or more dominant and

least dominant personality types, a measure or weight needs to be defined that can be used by the

system to influence the group recommendation accordingly. The approach taken by Quijano-Sanchez

et al. (2013) and Recio-Garcıa et al. (2009) makes use of a measure called the conflict mode weight

(CMW) value.

The rationale behind the CMW value is that it uses the assertiveness and cooperativeness values of

each individual’s dominant and least dominant personality types to determine a single value indicating

how selfish or cooperative a group member is (Quijano-Sanchez et al., 2013; Recio-Garcıa et al.,

2009). Assertiveness and cooperativeness values are determined by referencing a mapping table

whereby each dominant and least dominant personality type has an assertiveness weighting and a

cooperativeness weighting attached (Recio-Garcıa et al., 2009). For each dominant and least

dominant personality type, the relevant assertiveness scores and cooperativeness scores are

summed together to obtain a cumulative assertiveness and cooperativeness score as input into the

CMW function (Recio-Garcıa et al., 2009). In their research, Recio-Garcıa et al. (2009), define a

mapping table with assertiveness and cooperativeness scores attached for each dominant and least

dominant personality. Since these weightings have already been defined by Recio-Garcıa et al.

(2009), this research makes use of the proposed mapping table. This mapping table is presented in

Table 8.4 below.

Personality type    Assertiveness (dominant)    Assertiveness (least dominant)    Cooperativeness (dominant)    Cooperativeness (least dominant)
Competing           0.375                       -0.075                            -0.150                        0.000
Collaborating       0.375                       -0.075                            0.375                         -0.075
Compromising        0.000                       0.000                             0.000                         0.000
Accommodating       -0.375                      0.075                             -0.375                        0.075
Avoiding            -0.150                      0.000                             0.375                         -0.075

Table 8.4: Assertiveness and cooperativeness personality type mappings (Recio-Garcıa et al., 2009)

When a cumulative assertiveness and cooperativeness score has been calculated, the CMW value

can be determined. This CMW value is then used as an input into the final group rating prediction

algorithm. The CMW algorithm is formally defined in Formula 8.1 below.

Formula 8.1: Conflict mode weight (Quijano-Sanchez et al., 2013; Recio-Garcıa et al., 2009)

For a user, u, the conflict mode weight, CMW, is defined as:

$$CMW_u = \frac{1 + \text{Assertiveness}_u - \text{Cooperativeness}_u}{2}$$

Where $CMW_u$ is in the range of [0,1], with 0 being very cooperative and 1 being very selfish.

Those group members with a high CMW value typically have strong personalities, while those with a

low CMW value have more easy-going personalities. How this value is implemented in the rating

prediction algorithm is the subject of further sections.
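As an illustration, a small Python sketch of Formula 8.1 together with the Table 8.4 mappings is given below. The example user, with competing as the dominant type and accommodating as the least dominant type, is hypothetical and chosen only for illustration.

# Computing the conflict mode weight (Formula 8.1) from the Table 8.4 mappings.
# Each entry holds the (dominant, least dominant) weight for that personality type.
ASSERTIVENESS = {"competing": (0.375, -0.075), "collaborating": (0.375, -0.075),
                 "compromising": (0.0, 0.0), "accommodating": (-0.375, 0.075),
                 "avoiding": (-0.150, 0.0)}
COOPERATIVENESS = {"competing": (-0.150, 0.0), "collaborating": (0.375, -0.075),
                   "compromising": (0.0, 0.0), "accommodating": (-0.375, 0.075),
                   "avoiding": (0.375, -0.075)}

def cmw(dominant, least_dominant):
    """Sum the mapped scores for all dominant and least dominant types,
    then apply Formula 8.1."""
    assertiveness = (sum(ASSERTIVENESS[p][0] for p in dominant)
                     + sum(ASSERTIVENESS[p][1] for p in least_dominant))
    cooperativeness = (sum(COOPERATIVENESS[p][0] for p in dominant)
                       + sum(COOPERATIVENESS[p][1] for p in least_dominant))
    return (1 + assertiveness - cooperativeness) / 2

print(cmw(dominant=["competing"], least_dominant=["accommodating"]))  # 0.7625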

In the next section, trust is discussed in order to differentiate the approach taken by this research in

incorporating trust in group recommendation.

8.4 Trust

A major consideration up to this point has been the application of trust to define a model for a group

recommender system. In terms of rating prediction within group recommendation, the effect of trust is

considered once again. In many group recommender systems, trust as a group recommendation

factor is not considered (Quijano-Sanchez et al., 2013). Therefore, the purpose of this section is to

motivate the inclusion of trust as a factor for group-based rating prediction as well as to present its

application in the group recommender system model.

In previous chapters, it was detailed how trust is an indicator of similarity (Golbeck, 2005). The more

one trusts another user, the more trustworthy the recommendation of that user is deemed to be

(Quijano-Sanchez et al., 2013). Within a group application, trust can be leveraged between the group

members themselves (Quijano-Sanchez et al., 2013). The rationale behind this is that group members

are more likely to accept the recommendations of other users if they are trusted (Quijano-Sanchez et

al., 2013). This can ultimately assist the group to come to a consensus over a group decision

(Quijano-Sanchez et al., 2013). Therefore, trust is identified as a factor that can assist with a group-

based rating prediction.

To the knowledge of the author, the only algorithms that implement trust in this context are those

proposed by Quijano-Sanchez et al. (2013). Quijano-Sanchez et al. (2013) adopt a methodology

based on Facebook where they identify ten factors that can cumulatively be used to infer a trust rating

(Quijano-Sanchez et al., 2013). The factors are considerations such as the number of friends in

common, the intensity of the relationship, the length of time friends know each other, and so forth


(Quijano-Sanchez et al., 2013). Each of these factors is assigned an individual weight by which it is multiplied (Quijano-Sanchez et al., 2013). The algorithm loops through each weighted factor and sums them, returning a final trust value between zero and one (Quijano-Sanchez et al., 2013).
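A rough sketch of such a weighted-factor trust inference is shown below. The factor names and weights are illustrative placeholders only and are not the actual factors or weights used by Quijano-Sanchez et al. (2013).

# Illustrative weighted sum of relationship factors producing a trust value in [0, 1].
def infer_trust(factors, weights):
    """Both inputs map hypothetical factor names to values normalised to [0, 1]."""
    total = sum(weights[name] * value for name, value in factors.items())
    return min(1.0, max(0.0, total))

weights = {"friends_in_common": 0.4, "relationship_intensity": 0.35,
           "friendship_duration": 0.25}
factors = {"friends_in_common": 0.5, "relationship_intensity": 0.8,
           "friendship_duration": 0.6}
print(infer_trust(factors, weights))  # 0.63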

This research proposes a different approach to the implementation of trust in the group aggregation

algorithm. Instead of making use of the Facebook-based methodology, the EnsembleTrustCF

algorithm (Victor, 2010) is used. This makes a further contribution by evaluating the effectiveness of

implementing a standard trust algorithm and verifying if it improves the overall quality of group

recommendation.

In the next section, the application of both personality and trust factors are discussed from a rating

prediction perspective. This section formally gives potential approaches that can be taken by

presenting various group-based rating prediction algorithms.

8.5 Rating prediction algorithms combining personality and trust

The purpose of this section is to detail how, from an algorithmic perspective, both personality and

trust can be used to predict a group-based rating for a group member. As a result, various group-

based algorithms for rating prediction are presented and discussed.

As noted before, to the knowledge of the author, only Quijano-Sanchez et al. (2013) have to date

implemented both personality and trust in a group recommender system. Hence, to be able to decide

on a rating prediction algorithm for this research, a comprehensive review on the research and

application conducted by Quijano-Sanchez et al. (2013) is required.

In their work on group preference rating prediction algorithms within group recommender systems,

Quijano-Sanchez et al. (2013) propose three different algorithms, all of which leverage one or both of

personality and trust. The three algorithms proposed are:

1. Personality-based rating prediction,

2. Delegation-based rating prediction, and

3. Influence-based rating prediction.

Each of these algorithms is now briefly discussed and formally presented in the sections following.

8.5.1 Personality-based rating prediction

This algorithm considers the personality differences only within a group, concluding that those users

with a greater level of assertiveness are more likely to influence the group recommendation than

group members who are more cooperative (Quijano-Sanchez et al., 2013). As a result, the ratings are

weighted in accordance with the relevant personality type (Quijano-Sanchez et al., 2013). Though


trust is not catered for in this algorithm, it does allow the impact of personality to be evaluated (Quijano-Sanchez et al., 2013). The algorithm is formally defined in Formula 8.2 below.

Formula 8.2: Personality-based rating prediction (Quijano-Sanchez et al., 2013)

The personality-based rating prediction (PBRP) for a group member, u, and a recommendation item, i, is defined as:

$$PBRP_{u,i} = \frac{1}{|G| - 1} \sum_{v \in G \wedge v \neq u} \left( r_{u,i} + CMW_u - CMW_v \right)$$

Where $|G|$ represents the total number of group members, $r_{u,i}$ represents the rating for an item i by user u, $CMW_u$ represents the conflict mode weight of user u, and $CMW_v$ represents the conflict mode weight of user v.

As can be noted in the algorithm, a group member's predicted rating is either enhanced or reduced according to the conflict mode weight differences between that member and the rest of the group, and therefore according to the personality differences within the group (Quijano-Sanchez et al., 2013).

8.5.2 Delegation-based rating prediction

The delegation-based rating prediction algorithm has its foundation in the assertion that group members are more likely to consider the opinions of other group members they trust (Quijano-Sanchez et al., 2013). As a result, trust is implemented within this algorithm as a weighting factor to simulate this behaviour within a group context (Quijano-Sanchez et al., 2013). Additionally, personality is implemented as an input to the algorithm to simulate group members' consideration of one another's personalities (Quijano-Sanchez et al., 2013). The algorithm, which evaluates a group recommendation item using both trust and personality, is formally defined in Formula 8.3.

Formula 8.3: Delegation-based rating prediction (Quijano-Sanchez et al., 2013)

The delegation-based rating prediction (DBRP) for a group member, u, and a recommendation item, i, is defined as:

$$DBRP_{u,i} = \frac{1}{\left|\{ t_{u,v} \}_{v \in G}\right|} \sum_{v \in G \wedge v \neq u} \left( t_{u,v}\, r_{v,i} + CMW_v \right)$$

Where $\left|\{ t_{u,v} \}_{v \in G}\right|$ represents the total number of trusted group members, $t_{u,v}$ represents the trust value between a user u and a user v, $r_{v,i}$ represents the rating for an item i by user v, and $CMW_v$ represents the conflict mode weight of user v.

It is important to note that this algorithm can go out of bounds in terms of the range of valid, possible ratings (Quijano-Sanchez et al., 2013). This is because the total is divided not by the size of the entire group but only by the number of trusted group members and, additionally, the personality value accumulates across the sum (Quijano-Sanchez et al., 2013). As a result, if the predicted rating does go out of range, the recommender system simply chooses the closest value within the allowed rating range (Quijano-Sanchez et al., 2013).
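A minimal Python sketch of Formulas 8.2 and 8.3, as reconstructed above, is given below. Ratings, trust values, and conflict mode weights are assumed to be held in plain dictionaries, and the out-of-range clamping described for the delegation-based prediction is included; the function names and default rating range are assumptions for the sketch only.

# Sketch of the personality-based (Formula 8.2) and delegation-based
# (Formula 8.3) rating predictions, as reconstructed above.
def pbrp(u, item, group, ratings, cmw):
    others = [v for v in group if v != u]
    return sum(ratings[u][item] + cmw[u] - cmw[v] for v in others) / (len(group) - 1)

def dbrp(u, item, group, ratings, cmw, trust, rating_range=(1.0, 5.0)):
    # The sum is taken over the trusted group members only; this sketch
    # assumes u trusts at least one other group member.
    trusted = [v for v in group if v != u and trust[u].get(v, 0) > 0]
    total = sum(trust[u][v] * ratings[v][item] + cmw[v] for v in trusted)
    predicted = total / len(trusted)
    # The prediction can leave the valid rating range, so clamp to the closest value.
    return min(rating_range[1], max(rating_range[0], predicted))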

8.5.3 Influence-based rating prediction

In this algorithm, the basis for a group rating is dependent upon social influence (Quijano-Sanchez et

al., 2013). It attempts to simulate the behaviour whereby some group members may be influenced by

the ratings of other group members, especially by those group members they trust (Quijano-Sanchez

et al., 2013). Personality is once again considered, as a more assertive and uncooperative personality

type is less likely to change their rating in comparison to a more unassertive and cooperative group

member (Quijano-Sanchez et al., 2013). The formula for this behaviour is defined in Formula 8.4

below.

Formula 8.4: Influence-based rating prediction (Quijano-Sanchez et al., 2013)

The influence-based rating prediction (IBRP) for a group member, u, and a recommendation item, i, is defined as:

$$IBRP_{u,i} = r_{u,i} + (1 - CMW_u) \cdot \frac{1}{|G| - 1} \sum_{v \in G \wedge v \neq u} t_{u,v} \left( r_{v,i} - r_{u,i} \right)$$

Where $|G|$ represents the total number of group members, $t_{u,v}$ represents the trust value between a user u and a user v, $r_{v,i}$ represents the rating for an item i by user v, $r_{u,i}$ represents the rating for an item i by user u, and $CMW_u$ represents the conflict mode weight of user u.
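A matching Python sketch of Formula 8.4, under the same assumptions as the sketch given for Formulas 8.2 and 8.3, is shown below; trust values are assumed to be normalised.

# Sketch of the influence-based rating prediction (Formula 8.4).
def ibrp(u, item, group, ratings, cmw, trust):
    others = [v for v in group if v != u]
    influence = sum(trust[u].get(v, 0) * (ratings[v][item] - ratings[u][item])
                    for v in others) / (len(group) - 1)
    return ratings[u][item] + (1 - cmw[u]) * influence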

In the next section, the results of an empirical evaluation conducted by Quijano-Sanchez et al. (2013)

on all three of these algorithms are presented and used as the basis for motivating a suitable rating

prediction algorithm for the proposed group recommender model.

8.6 Empirical evaluation

In this evaluation, the effectiveness of each rating prediction algorithm was evaluated with application to an aggregation model. The aggregation model used for the evaluation was the average aggregation model (Amer-Yahia et al., 2009; Cantador & Castells, 2012; Carvalho & Macedo, 2013; Gartrell et al., 2010; Garcia et al., 2012; Masthoff, 2011; Quijano-Sanchez et al.,

2013). In this model, all the group ratings for a particular item are averaged across all the group

members (Amer-Yahia et al., 2009; Cantador & Castells, 2012; Carvalho & Macedo, 2013; Gartrell et

al., 2010; Garcia et al., 2012; Masthoff, 2011; Quijano-Sanchez et al., 2013). Therefore, in the

evaluation performed by Quijano-Sanchez et al. (2013), all of the predicted ratings determined by


each of the relevant rating prediction algorithms would merely be averaged for each specific group

member.

Next, background is given on the manner in which the evaluation was conducted. Thereafter, the various evaluation metrics used to evaluate the performance of each algorithm are discussed. In the section following, the test cases used to evaluate each algorithm are presented. In the last section, the final results of the evaluation are presented, with the conclusion motivating the most suitable rating prediction algorithm.

8.6.1 Background to evaluation of rating prediction algorithms

The empirical evaluation for each of the above rating prediction algorithms was conducted by

Quijano-Sanchez et al. (2013) with reference to a case study focused within the movie domain. In this

case study, 58 real users participated in the evaluation. Before any prediction could begin, three steps

needed to be completed by each participant (Quijano-Sanchez et al., 2013).

Each user was asked to complete the TKI test. The result of this step was that a personality

profile for each user was determined.

Each user was given a list of fifteen movies from the year 2009 and asked to rate, on a scale of zero to five, the ones they had seen. The point of this step was mainly for the purposes

of collaborative filtering, so that individual ratings could be determined for each group

member.

Each user was asked which movies they would have liked to have seen out of a selection of

fifty possible movies from the MovieLens dataset. This list is used later to determine whether the group recommendation satisfies the user’s personal preferences

(Quijano-Sanchez et al., 2013).

Once this process had occurred, users were asked to form a group and as a group, choose which top

three movies they would like to see together (Quijano-Sanchez et al., 2013). The source movie list

was the same as the list used in step three above (Quijano-Sanchez et al., 2013). The reason why

only the top three movies were selected was twofold.

To keep the evaluation clean. Quijano-Sanchez et al. (2013) found that users only care

about the top movies, while the rest are sorted almost randomly.

One can only watch one movie at a time, so it is unnecessary to include more.

The final result of this step in the evaluation was that fifteen groups were formed. There were five

groups of three, six groups of five, and four groups of nine (Quijano-Sanchez et al., 2013).

In the next section, the metrics used by Quijano-Sanchez et al. (2013) to evaluate the effectiveness of

each of their rating prediction algorithms are discussed.


8.6.2 Evaluation metrics

In their evaluation, Quijano-Sanchez et al. (2013) were only interested in the top-3 group

recommendations. Additionally, the ordering of these items was not important. To determine the effectiveness of an aggregation algorithm, a precision@N metric was used for evaluation. This metric measures how precise an aggregation algorithm is after the top-N group recommendations have been determined. Therefore, Quijano-Sanchez et al. (2013) make use of precision@3, which determines how many of the top-3 calculated group movie recommendations appear in the top-3 list chosen by the group beforehand (Quijano-Sanchez et al., 2013).

Similarly, another metric used is success@N which returns a value of one if there is at least one

match between the top-N generated group recommendation list and the group’s predefined top-N

recommendation list. This method can additionally determine a percentage whereby it can be

established how many of the groups have at least one or two generated group recommendations

matching the group lists. Therefore, a returned percentage of 80% for success@3, means that 80% of

the groups had at least one match with their top-3 personal recommendation lists (Quijano-Sanchez

et al., 2013).
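A toy Python illustration of these two metrics is given below; the movie names are invented purely for the example.

# Toy illustration of precision@3 and success@3.
def precision_at_n(recommended, chosen, n=3):
    return len(set(recommended[:n]) & set(chosen[:n])) / n

def success_at_n(recommended, chosen, n=3):
    return 1 if set(recommended[:n]) & set(chosen[:n]) else 0

group_choice = ["Movie A", "Movie B", "Movie C"]   # chosen by the group beforehand
system_top_3 = ["Movie B", "Movie D", "Movie E"]   # generated by the system
print(precision_at_n(system_top_3, group_choice))  # 0.333... (one of three matches)
print(success_at_n(system_top_3, group_choice))    # 1 (at least one match)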

8.6.3 Test cases

In the evaluation of their rating prediction algorithms, Quijano-Sanchez et al. (2013) defined six

different test cases.

Base: This is the baseline algorithm used. In this case, predicted ratings are determined by

making use of collaborative filtering (Quijano-Sanchez et al., 2013).

Personality: This algorithm is the personality-based rating prediction algorithm detailed

above in Formula 8.2. This algorithm allows the impact of personality to be assessed

independently (Quijano-Sanchez et al., 2013).

Trust delegation-based rating prediction (TDBR): This algorithm makes use of the

delegation-based rating prediction algorithm, as defined in Formula 8.3, with the only

exception being that the personality factor (the CMW term) is zeroed out. As a result, the impact of

trust alone is evaluated (Quijano-Sanchez et al., 2013).

Trust influence-based rating prediction (TIBR): This algorithm makes use of the influence-

based rating prediction algorithm, as defined in Formula 8.4. However, similar to the TDBR

algorithm, the personality factor (the CMW term) is also zeroed out (Quijano-Sanchez et al., 2013).

Trust personality delegation-based rating prediction (TPDBR): This algorithm makes use

of the full delegation-based rating prediction algorithm, as defined in Formula 8.3 (Quijano-

Sanchez et al., 2013).

Trust personality influence-based rating prediction (TPIBR): This algorithm makes use of

the full Influence-based Rating Prediction algorithm, as defined in Formula 8.4 (Quijano-

Sanchez et al., 2013).


Next, the results of the evaluation for each test case are presented and discussed.

8.6.4 Results

In these tests, Quijano-Sanchez et al. (2013) conducted four different evaluations on their rating

prediction algorithms.

Determine the precision of each rating prediction algorithm.

Determine the impact of the group size on each rating prediction algorithm.

Identify how the rating prediction algorithms would perform with groups of different

homogeneity.

Determine how the rating prediction algorithms would perform with groups of varying trust

levels.

The final results for each of these tests are presented in the sections below (Quijano-Sanchez et al.,

2013).

a) Precision

In the first evaluation, Quijano-Sanchez et al. (2013) tested the precision of each of the test

case rating prediction algorithms by determining how many matches there were between the

group’s actual recommendation list and the system’s generated group recommendation list for

each rating prediction algorithm (Quijano-Sanchez et al., 2013). The results of this evaluation

are listed in Figure 8.2 below.

Figure 8.2: Precision test (Quijano-Sanchez et al., 2013)

From Figure 8.2, it is noted that most of the rating prediction algorithms performed reasonably

well for a single match. However, the results for two matches are considerably lower and are a greater determinant of performance in this evaluation. This is because it is harder to obtain two


matches than it is to obtain a single match. As a result, the best overall performing algorithm, with regard to accuracy, is the TPDBR algorithm, which achieves the highest percentage of both single and double matches.

The interesting observation from these results is the performance of the trust-only and personality-only algorithms. The personality-only algorithm performs quite well, underlining the importance of considering personality. The trust-only algorithms, however, are not as strong in and of themselves. Yet when trust and personality are combined, especially within the TPDBR algorithm, the effect is noticeable (Quijano-Sanchez et al., 2013).

b) Group size

In the next evaluation conducted by Quijano-Sanchez et al. (2013), a test was conducted on

all the rating prediction algorithms to determine how an algorithm is affected by the number of

people within a group. As mentioned previously, fifteen groups were formed. Of that fifteen,

there were five groups of three, six groups of five, and four groups of nine. Consequently, this

evaluation involved the application of each rating prediction algorithm to each group size

(Quijano-Sanchez et al., 2013).

The final result, presented in Figure 8.3 below, shows, per group size per algorithm, the

percentage of times there was at least one match between the group’s actual movie

recommendation list and the system generated group movie recommendation list (Quijano-

Sanchez et al., 2013).

Figure 8.3: Group size test (Quijano-Sanchez et al., 2013)

From Figure 8.3, it can be seen that the TPDBR algorithm is, once again, the best performing algorithm in the test. This is because it maintains high performance irrespective of the group size. The other algorithms either perform well for one group size but deteriorate as the group grows larger, or show no consistency at all, as is the case with the TPIBR algorithm (Quijano-Sanchez et al., 2013).


According to Quijano-Sanchez et al. (2013), the most likely explanation for the performance of

the TPIBR algorithm is that the more considerations there are to be made, the more noise

there is in terms of making a prediction and the more difficult it becomes to generate a rating

which reflects the preferences of the group. For small to medium group sizes, however, this

information becomes useful. Nevertheless, when evaluating the overall performance, the

TPDBR algorithm still performs best (Quijano-Sanchez et al., 2013).

c) Homogeneity test

Another evaluation conducted by Quijano-Sanchez et al. (2013) analyses how personality variance within a group affects the performance of a rating prediction algorithm. In order to

conduct this evaluation, two group classifications were created: uniform and non-uniform.

Those within the uniform group had a low variance in personality, whilst those within the non-

uniform group had a high variance in personality (Quijano-Sanchez et al., 2013). The results

of this evaluation are presented in Figure 8.4 below.

Figure 8.4: Homogeneity test (Quijano-Sanchez et al., 2013)

Again, the TPDBR algorithm is the best performing algorithm across groups of varying

homogeneity. For each algorithm, it is quite easy to have at least one match within a homogeneous group. This is because, if there is little difference in the personality types of each

group member, then that group would most probably like similar things (Baltrunas et al., 2010;

Quijano-Sanchez et al., 2013).

However, the true test of a group recommender comes when one considers groups with varying personalities and interests. When this is taken into account, there is quite a dramatic decrease in the performance of most of the algorithms. The only two algorithms which maintain

their performance across both uniform and non-uniform groups are the TPDBR algorithm and

the TPIBR algorithm. However, the average performance of the TPDBR algorithm is higher

for non-uniform groups (Quijano-Sanchez et al., 2013).


d) Trust strength test

In this test, Quijano-Sanchez et al. (2013) evaluated how trust levels within a group affect the

performance of each rating prediction algorithm. The trust strength was evaluated by

averaging the trust values between each group member within each group. Those groups

with trust levels above the average were considered to be ones with high trust levels, while

those with trust levels below the average were considered to be ones with low trust levels

(Quijano-Sanchez et al., 2013). The performance of each rating prediction algorithm with

application to each of these groups is shown in Figure 8.5 below.

Figure 8.5: Trust strength test (Quijano-Sanchez et al., 2013)

The interesting aspect of these results is the performance of the trust-based algorithms, i.e. TDBR and TIBR. These algorithms were among the worst performing algorithms in this test case. This shows that, for this evaluation, trust alone is not an all-encompassing solution for a group recommender system (Quijano-Sanchez et al., 2013).

However, when trust is complemented with personality, the performance greatly increases.

This shows the benefit of considering these two factors together. Once again, in this

evaluation, the TPDBR algorithm is the best performing algorithm and the most robust across

both of these categories (Quijano-Sanchez et al., 2013).

In conclusion, the algorithm which performed best in each of these categories, independent of the

context or scenario, was the trust personality delegation-based rating prediction algorithm (TPDBR).

There are two main reasons for this.

The TPDBR does not leverage only personality or only trust, but rather leverages both together. Therefore, the benefits of both trust and personality are combined and harnessed. This is also more intuitive because, while one may


trust specific people within a group context, one always takes into consideration the individual

personalities of the group as a whole (Quijano-Sanchez et al., 2013).

When compared to the trust personality influence-based rating prediction algorithm

(TPIBR), it is more straightforward in its consideration of both trust and personality.

While the TPIBR algorithm does consider trust and personality, it does this based on the variance of the ratings of the other group members (Quijano-Sanchez et al., 2013).

Therefore, because of these reasons as well as the overall algorithm performance, the TPDBR

algorithm is motivated as the most suitable group rating prediction algorithm for this research.

In the section following, the TPDBR algorithm is illustrated with reference to the scenario presented in

section 8.2.

8.7 Example application

In section 8.2, the scenario concluded with the top-4 recommendations of each group member being

determined and a rating matrix being formed. In this section, the scenario continues with reference to

the TPDBR algorithm as well as the average aggregation model.

Therefore, the section begins by illustrating the means of determining the personality of each group

member as well as the trust values between group members. Thereafter, the TPDBR algorithm is

applied to Adam’s group with a final top-3 recommendation presented to the group.

8.7.1 Personality

The personality of each group member is determined by using the TKI test to derive a conflict mode

weight (CMW) value. Therefore, the first step is to derive a personality type based on the TKI test. For

this scenario, assume that each group member has taken the TKI test and that a dominant and least

dominant personality has been determined for each as per Table 8.5 below.

User Dominant personality type Least dominant personality type

Adam Collaborating Compromising

Ben Collaborating None

Craig Avoiding Accommodating

Darren Competing Accommodating

Ed Collaborating None

Table 8.5: Example application: Dominant and least dominant personalities for Adam’s group

Next, a CMW value is determined for each group member. To obtain this value, an assertiveness and

cooperativeness score is calculated. The means of calculating these scores is to map each


personality type to a relevant assertiveness score and cooperativeness score. Thereafter, all the

assertiveness scores are summed together and all the cooperativeness scores are summed together.

Hence, a final accumulated assertiveness score and cooperativeness score is calculated.

For this scenario, each personality type is mapped in accordance with the mapping table defined by

Recio-Garcia et al. (2009). This table is defined in Table 8.6 below.

Personality type | Assertiveness (dominant) | Assertiveness (least dominant) | Cooperativeness (dominant) | Cooperativeness (least dominant)
Competing | 0.375 | -0.075 | -0.150 | 0.000
Collaborating | 0.375 | -0.075 | 0.375 | -0.075
Compromising | 0.000 | 0.000 | 0.000 | 0.000
Accommodating | -0.375 | 0.075 | -0.375 | 0.075
Avoiding | -0.150 | 0.000 | 0.375 | -0.075

Table 8.6: Example application: Assertiveness and cooperativeness personality type mappings (Recio-Garcia et al., 2009)

Therefore, based on Adam’s personality profile, an assertiveness score is calculated as per

Calculation 8.1 below.

Similarly, Adam’s cooperativeness score is calculated as per Calculation 8.2 below.

Once an assertiveness and cooperativeness score has been calculated for Adam, a corresponding

conflict mode weight (CMW) value is calculated. This is done as per the conflict mode weight definition given in

Formula 8.1 and implemented as per Calculation 8.3 below.

Calculation 8.1: Example application – Adam’s assertiveness score

\[
\text{assertiveness}(Adam) = \text{dominant}(\text{collaborating}) + \text{least dominant}(\text{compromising}) = 0.375 + 0.000 = 0.375
\]

Calculation 8.2: Example application – Adam’s cooperativeness score

\[
\text{cooperativeness}(Adam) = \text{dominant}(\text{collaborating}) + \text{least dominant}(\text{compromising}) = 0.375 + 0.000 = 0.375
\]


Therefore, Adam’s final CMW value of 0.500 indicates that Adam is neither entirely assertive, nor

entirely cooperative, but in between the two factors.

Assuming a similar process is followed for each member of Adam’s group, a CMW value is

determined for each group member as per Table 8.7 below.

User | Dominant personality type | Least dominant personality type | Assertiveness score | Cooperativeness score | CMW value
Adam | Collaborating | Compromising | 0.375 | 0.375 | 0.500
Ben | Collaborating | None | 0.375 | 0.375 | 0.500
Craig | Avoiding | Accommodating | -0.075 | 0.450 | 0.238
Darren | Competing | Accommodating | 0.450 | -0.075 | 0.763
Ed | Collaborating | None | 0.375 | 0.375 | 0.500

Table 8.7: Example application: CMW values for Adam’s group
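As an illustration of the calculation behind Table 8.7, the following is a minimal Python sketch (not part of the original work) that maps TKI personality types to the values of Table 8.6 and applies the conflict mode weight definition of Formula 8.1. The dictionary and function names are hypothetical.

```python
# Assertiveness and cooperativeness contributions per TKI conflict mode,
# given as (dominant, least dominant) pairs taken from Table 8.6.
ASSERTIVENESS = {"Competing": (0.375, -0.075), "Collaborating": (0.375, -0.075),
                 "Compromising": (0.000, 0.000), "Accommodating": (-0.375, 0.075),
                 "Avoiding": (-0.150, 0.000)}
COOPERATIVENESS = {"Competing": (-0.150, 0.000), "Collaborating": (0.375, -0.075),
                   "Compromising": (0.000, 0.000), "Accommodating": (-0.375, 0.075),
                   "Avoiding": (0.375, -0.075)}

def cmw(dominant, least_dominant=None):
    """Conflict mode weight as per Formula 8.1: (1 + assertiveness - cooperativeness) / 2."""
    a = ASSERTIVENESS[dominant][0] + (ASSERTIVENESS[least_dominant][1] if least_dominant else 0.0)
    c = COOPERATIVENESS[dominant][0] + (COOPERATIVENESS[least_dominant][1] if least_dominant else 0.0)
    return (1 + a - c) / 2

print(cmw("Collaborating", "Compromising"))  # Adam: 0.5
print(cmw("Avoiding", "Accommodating"))      # Craig: approximately 0.2375 (0.238 in Table 8.7)
print(cmw("Competing", "Accommodating"))     # Darren: approximately 0.7625 (0.763 in Table 8.7)
```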

Next, the trust between group members is defined for this example application.

8.7.2 Trust

Trust scores are either explicitly determined from Figure 7.1 in the previous chapter or are randomly

assigned. Typically, the trust values between group members can be inferred by making use of the

MoleTrust algorithm (Massa & Avesani, 2007a). However, for the purposes of this example

application, those trust values not explicitly defined are randomly assigned. In addition, note that the

trust values have also been divided by ten. This is so that a correctly bounded rating prediction can be

determined for each group member.

Calculation 8.3: Example application – Adam’s CMW value

\[
CMW_{Adam} = \frac{1 + \text{assertiveness}(Adam) - \text{cooperativeness}(Adam)}{2} = \frac{1 + 0.375 - 0.375}{2} = \frac{1}{2} = 0.500
\]


Therefore, the trust scores for each group member for this scenario application are as per Table 8.8 below.

Adam Ben Craig Darren Ed

Adam N/A 0.800 0.750 0.700 0.850

Ben 0.700 N/A 0.900 0.600 0.650

Craig 0.800 0.850 N/A 0.500 0.600

Darren 0.700 0.750 0.800 N/A 0.400

Ed 0.950 0.800 0.650 0.800 N/A

Table 8.8: Example application: Group trust scores

Now that both the personality and trust factors have been calculated for Adam’s group, the rating

prediction process can occur. This step is detailed in the section following.

8.7.3 Determining a group recommendation with the TPDBR algorithm

Since all prerequisites are defined, the group recommendation process can be executed for Adam’s

group through the application of the TPDBR algorithm. In order to do this, however, a predicted rating

needs to be calculated for each recommendation item, which requires a number of steps to be completed.

In this scenario application, consider the TPDBR output for the Apartheid Museum for Adam. The

TPDBR algorithm for this recommendation item is as per Calculation 8.4 below.

The first part of the algorithm requires that the total number of trusted users be determined. Typically

such a calculation would include some threshold trust value. However, for the purposes of this

scenario application, assume that every other group member is trusted. This value would, therefore,

be determined as per Calculation 8.5 below.

Calculation 8.4: Example application – TPDBR algorithm

\[
DBRP_{Adam,AM} = \frac{1}{\left|\{\,t_{Adam,v} : v \in G\,\}\right|} \sum_{v \in G \,\wedge\, v \neq Adam} t_{Adam,v}\left(r_{v,AM} + CMW_v\right)
\]


The next part of the TPDBR algorithm is to determine the personality and trust-based rating for every

other group member outside of Adam. This is determined as per Calculation 8.6 below.

With these two values now calculated, the final predicted rating for the Apartheid Museum for Adam can be determined. This final predicted rating is shown in Calculation 8.7 below.

The final predicted rating for Adam for the Apartheid Museum, as determined by the TPDBR algorithm, is 3.40. In other words, when the personality and trust of the rest of the group are considered, the predicted rating for Adam for the Apartheid Museum is 3.40.

Calculation 8.5: Example application – Total trusted group members

\[
\frac{1}{\left|\{\,t_{Adam,v} : v \in G\,\}\right|} = \frac{1}{\left|\{Ben, Craig, Darren, Ed\}\right|} = \frac{1}{4} = 0.25
\]

Calculation 8.6: Example application – Trust and personality affected rating

\[
\begin{aligned}
\sum_{v \in G \wedge v \neq Adam} t_{Adam,v}\left(r_{v,AM} + CMW_v\right)
&= t_{Adam,Ben}\left(r_{Ben,AM} + CMW_{Ben}\right) + t_{Adam,Craig}\left(r_{Craig,AM} + CMW_{Craig}\right) \\
&\quad + t_{Adam,Darren}\left(r_{Darren,AM} + CMW_{Darren}\right) + t_{Adam,Ed}\left(r_{Ed,AM} + CMW_{Ed}\right) \\
&= 0.8(4.5 + 0.5) + 0.75(3.5 + 0.238) + 0.7(3.5 + 0.763) + 0.85(4.0 + 0.5) \\
&= 0.8(5.0) + 0.75(3.738) + 0.7(4.263) + 0.85(4.5) \\
&= 4.00 + 2.80 + 2.98 + 3.83 \\
&= 13.61
\end{aligned}
\]

Calculation 8.7: Example application – Final predicted rating

\[
DBRP_{Adam,AM} = \frac{1}{\left|\{\,t_{Adam,v} : v \in G\,\}\right|} \sum_{v \in G \wedge v \neq Adam} t_{Adam,v}\left(r_{v,AM} + CMW_v\right) = 0.25 \times 13.61 = 3.40
\]
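As a brief illustration of Calculations 8.4 to 8.7, the following is a minimal Python sketch of the TPDBR prediction under the assumptions of this example (every other group member is trusted). The data structures are hypothetical and this is not the authors' code.

```python
def tpdbr(user, item, group, trust, cmw, ratings):
    """Delegation-based prediction: trust-weighted (rating + CMW) values of the trusted
    members, divided by the number of trusted members (as in Calculations 8.4 to 8.7)."""
    others = [v for v in group if v != user and (user, v) in trust]
    total = sum(trust[(user, v)] * (ratings[(v, item)] + cmw[v]) for v in others)
    return total / len(others)

group = ["Adam", "Ben", "Craig", "Darren", "Ed"]
trust = {("Adam", "Ben"): 0.80, ("Adam", "Craig"): 0.75,
         ("Adam", "Darren"): 0.70, ("Adam", "Ed"): 0.85}
cmw = {"Ben": 0.500, "Craig": 0.238, "Darren": 0.763, "Ed": 0.500}
ratings = {("Ben", "AM"): 4.5, ("Craig", "AM"): 3.5, ("Darren", "AM"): 3.5, ("Ed", "AM"): 4.0}
print(round(tpdbr("Adam", "AM", group, trust, cmw, ratings), 2))  # 3.4 (cf. Calculation 8.7)
```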


Assume that this same process is run for each group member within Adam’s group as well as for

each recommendation item retrieved as part of the preference elicitation process. The result of such a

process gives the TPDBR rating scores as in Table 8.9 below.

Recommendation Item Adam Ben Craig Darren Ed

Apartheid Museum 3.40 3.17 3.37 3.10 3.77

Botanical Gardens 3.23 2.98 2.84 2.73 3.35

Brightwater Commons 3.30 3.08 2.82 2.87 3.30

Church Square 2.73 2.62 2.48 2.47 2.93

Freedom Park 3.11 2.86 2.59 2.57 2.97

Gold Reef City 3.14 3.24 2.65 2.70 3.57

Johannesburg Zoo 3.32 2.86 2.68 2.55 3.17

Nelson Mandela Square 2.64 2.40 2.12 1.96 2.37

Orlando Towers 2.60 2.87 2.36 2.74 3.19

Planetarium 3.35 2.69 2.93 2.47 2.93

Union Buildings 3.05 2.51 3.15 2.72 3.18

Voortrekker Monument 3.14 2.85 2.66 2.54 2.91

Table 8.9: Example application: Group TPDBR scores

Now that the group predicted ratings have been determined for each recommendation item for each

group member, the last part of the group recommendation process can begin. As noted in the

implementation of Quijano-Sanchez et al. (2013), this last part involves the application of the average

aggregation model. Therefore, in order to apply this model to this scenario, the predicted ratings listed in Table 8.9 are averaged across the group for each recommendation item. The output of this is the aggregated list as per Table 8.10 below.

Recommendation item Rating

Apartheid Museum 3.36

Botanical Gardens 3.03

Brightwater Commons 3.07

Church Square 2.65

Freedom Park 2.82

Gold Reef City 3.06

Johannesburg Zoo 2.92

Nelson Mandela Square 2.30

Orlando Towers 2.75

Planetarium 2.87

Union Buildings 2.92

Voortrekker Monument 2.82

Table 8.10: Example application: Aggregated TPDBR scores


Finally, the top-4 items in the aggregated list are presented to the group. Therefore, the final group recommendation, as determined by the TPDBR rating prediction algorithm and the average aggregation model, is as per Table 8.11 below.

Group

Apartheid Museum

Brightwater Commons

Gold Reef City

Botanical Gardens

Table 8.11: Example application: Final group recommendation
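As a short illustration of this final step, the sketch below (Python; a hypothetical data structure, not the authors' implementation) averages the per-member TPDBR scores of Table 8.9 and returns the top-4 items of Table 8.11.

```python
# scores[item] -> TPDBR predictions for Adam, Ben, Craig, Darren and Ed (subset of Table 8.9).
scores = {
    "Apartheid Museum":    [3.40, 3.17, 3.37, 3.10, 3.77],
    "Brightwater Commons": [3.30, 3.08, 2.82, 2.87, 3.30],
    "Gold Reef City":      [3.14, 3.24, 2.65, 2.70, 3.57],
    "Botanical Gardens":   [3.23, 2.98, 2.84, 2.73, 3.35],
    "Church Square":       [2.73, 2.62, 2.48, 2.47, 2.93],
}

# Average aggregation model followed by a simple top-N selection.
aggregated = {item: sum(r) / len(r) for item, r in scores.items()}
top4 = sorted(aggregated, key=aggregated.get, reverse=True)[:4]
print(top4)  # ['Apartheid Museum', 'Brightwater Commons', 'Gold Reef City', 'Botanical Gardens']
```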

8.8 Conclusion

In this chapter, the main focus of discussion revolved around group-based rating prediction and how

both personality and trust could be practically incorporated into the process of group

recommendation. Personality was first introduced and discussed, with the approach implemented by Quijano-Sanchez et al. (2013) motivated as the personality approach for this research. In this

approach, the TKI test is used as a means to determine personality, with the CMW value being the

numeric weight value used when considering personality from an implementation perspective.

In the section following on from this, the concept of trust, from a group perspective, was introduced.

The motivation for using trust in a group context was given and the EnsembleTrustCF algorithm

(Victor, 2010) was chosen as the group-based trust algorithm for the proposed group recommender

model.

With the design choices for the implementations of personality and trust decided upon, various rating

prediction algorithms, developed by Quijano-Sanchez et al. (2013), were introduced in the next

section. These algorithms implemented one or both of trust and personality. Each of these rating

prediction algorithms was evaluated with reference to various criteria, resulting in the trust

personality delegation-based rating prediction (TPDBR) algorithm being motivated for this research.

The chapter concluded by illustrating the rating prediction process in a scenario application.

The purpose of identifying a rating prediction algorithm is to determine the influence of both personality and trust and to evaluate what effect their combined use has on the group recommendation process. The results of the evaluation by Quijano-Sanchez et al. (2013) are encouraging; however, it is hoped that these results can be improved upon by applying a more standardised trust algorithm in both the preference elicitation process and the group-based rating prediction process.


In the next chapter, the topic of aggregation is discussed. This is another area where it is hoped that

the performance of group recommendation can be improved upon. In this chapter, the average

aggregation model was briefly introduced. However, there are many other aggregation models to be

considered, which inevitably affect the final results of a group recommendation. As a result, these

aggregation models are analysed and discussed in the next chapter.

Chapter 9 Group recommendation: aggregation

9.1 Introduction

Aggregation is a group recommendation process which considers the individual preferences of each

group member and determines a satisfactory recommendation or set of recommendations for the

group as a whole (Bourke et al., 2011; Jameson & Smyth, 2007). Because there are a number of

potential methods to implement group aggregation, the purpose of this chapter is to introduce and

review the different aggregation methodologies that can be implemented by a group recommender

system. This review assists in determining a suitable aggregation model for the proposed group

recommender model.

This chapter is structurally broken down as follows. In the first section, the scenario introduced in

chapter 8 is extended for the purposes of aggregation. In the following section, a total of eleven

aggregation models are introduced and applied with reference to the scenario. Finally, the chapter

concludes with an evaluation of these aggregation models and a final conclusion is given as to which

aggregation model is best for this research.

9.2 Scenario

In the previous chapter, it was detailed how both personality and trust can be leveraged in order to

predict group ratings for various recommendation items. This was illustrated with reference to Adam’s

group, whereby a list of ratings was predicted for a number of Johannesburg based tourist locations.

The final result of this process was a rating matrix as per Table 9.1 below.

Recommendation Item Adam Ben Craig Darren Ed

Apartheid Museum 3.40 3.17 3.37 3.10 3.77

Botanical Gardens 3.23 2.98 2.84 2.73 3.35

Brightwater Commons 3.30 3.08 2.82 2.87 3.30

Church Square 2.73 2.62 2.48 2.47 2.93

Freedom Park 3.11 2.86 2.59 2.57 2.97

Gold Reef City 3.14 3.24 2.65 2.70 3.57

Johannesburg Zoo 3.32 2.86 2.68 2.55 3.17

Nelson Mandela Square 2.64 2.40 2.12 1.96 2.37

Orlando Towers 2.60 2.87 2.36 2.74 3.19

Planetarium 3.35 2.69 2.93 2.47 2.93

Union Buildings 3.05 2.51 3.15 2.72 3.18

Voortrekker Monument 3.14 2.85 2.66 2.54 2.91

Table 9.1: General aggregation models: Scenario

The next step in the group recommendation process is to determine a group recommendation for

Adam’s group by taking the ratings illustrated above, and processing these into a final list of


recommendations. For this scenario, ratings are processed into a top-4 recommendation list for the

group. The way by which this is done is covered in the next section.

9.3 Aggregation models

The concept of group aggregation is now introduced by analysing a number of aggregation models.

There are a number of possible aggregation models to use, each having its own level of complexity and method of implementation. The purpose of this section, therefore, is to present

these models and apply them to the scenario.

The following 11 aggregation models are analysed in this section: the additive utilitarian model, the

multiplicative utilitarian model, the average model, the average without misery model, the least misery

model, the most pleasure model, the fairness model, the plurality voting model, the approval voting

model, the Borda count model, and the Copeland rule model (Cantador & Castells, 2012; Masthoff,

2011).

9.3.1 Additive utilitarian model

In the additive utilitarian model, individual recommended ratings for each group member for each

recommendation item are summed (Cantador & Castells, 2012). The aggregated recommendation list

returned by the system is the list of recommendations ordered from the highest value to the lowest

value. The concern with this aggregation model, however, is that each user's individual preferences carry less weight as the group grows larger in size (Cantador & Castells, 2012).

As a result, with reference to the scenario in Section 9.2, the ratings of Adam, Ben, Craig, Darren, and

Ed are aggregated, by applying the additive utilitarian model, as per Table 9.2 below.


Recommendation Item Adam Ben Craig Darren Ed Group Total

Apartheid Museum 3.40 3.17 3.37 3.10 3.77 16.81

Botanical Gardens 3.23 2.98 2.84 2.73 3.35 15.13

Brightwater Commons 3.30 3.08 2.82 2.87 3.30 15.36

Church Square 2.73 2.62 2.48 2.47 2.93 13.23

Freedom Park 3.11 2.86 2.59 2.57 2.97 14.10

Gold Reef City 3.14 3.24 2.65 2.70 3.57 15.30

Johannesburg Zoo 3.32 2.86 2.68 2.55 3.17 14.58

Nelson Mandela Square 2.64 2.40 2.12 1.96 2.37 11.49

Orlando Towers 2.60 2.87 2.36 2.74 3.19 13.75

Planetarium 3.35 2.69 2.93 2.47 2.93 14.36

Union Buildings 3.05 2.51 3.15 2.72 3.18 14.61

Voortrekker Monument 3.14 2.85 2.66 2.54 2.91 14.10

Table 9.2: General aggregation models: Additive utilitarian model

The returned recommendation list for the group is ordered and the top four recommendations are as

follows: The Apartheid Museum, Brightwater Commons, Gold Reef City, and the Botanical Gardens.
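The sketch below (Python, illustrative only, using a small slice of the scenario data) shows the additive utilitarian model; the other score-based models discussed later in this section differ mainly in the function used to combine the individual ratings.

```python
ratings = {  # item -> individual predicted ratings for Adam, Ben, Craig, Darren and Ed
    "Apartheid Museum":      [3.40, 3.17, 3.37, 3.10, 3.77],
    "Botanical Gardens":     [3.23, 2.98, 2.84, 2.73, 3.35],
    "Gold Reef City":        [3.14, 3.24, 2.65, 2.70, 3.57],
    "Nelson Mandela Square": [2.64, 2.40, 2.12, 1.96, 2.37],
}

def additive_utilitarian(ratings):
    """Sum the individual ratings per item and rank the items by the group total."""
    totals = {item: round(sum(r), 2) for item, r in ratings.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(additive_utilitarian(ratings))
# [('Apartheid Museum', 16.81), ('Gold Reef City', 15.3),
#  ('Botanical Gardens', 15.13), ('Nelson Mandela Square', 11.49)]
```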

9.3.2 Multiplicative utilitarian model

The multiplicative utilitarian model is quite similar to the additive utilitarian model, except that each

group member’s recommendations are multiplied together, instead of summed together (Bourke et al.,

2011; Cantador & Castells, 2012; Masthoff, 2011; Salamó et al., 2012). For this scenario, the final result is the same as that of the additive utilitarian model. The concern with this model is that in smaller groups,

an individual’s recommendation can have too great an impact on the group (Cantador & Castells,

2012).

The aggregated list, as determined by the multiplicative utilitarian model, with reference to the

scenario is shown in Table 9.3.

Recommendation Item Adam Ben Craig Darren Ed Group Total

Apartheid Museum 3.40 3.17 3.37 3.10 3.77 424.66

Botanical Gardens 3.23 2.98 2.84 2.73 3.35 249.85

Brightwater Commons 3.30 3.08 2.82 2.87 3.30 270.38

Church Square 2.73 2.62 2.48 2.47 2.93 128.50

Freedom Park 3.11 2.86 2.59 2.57 2.97 175.71

Gold Reef City 3.14 3.24 2.65 2.70 3.57 259.60

Johannesburg Zoo 3.32 2.86 2.68 2.55 3.17 206.04

Nelson Mandela Square 2.64 2.40 2.12 1.96 2.37 62.31

Orlando Towers 2.60 2.87 2.36 2.74 3.19 153.58


Planetarium 3.35 2.69 2.93 2.47 2.93 190.24

Union Buildings 3.05 2.51 3.15 2.72 3.18 208.62

Voortrekker Monument 3.14 2.85 2.66 2.54 2.91 176.00

Table 9.3: General aggregation models: Multiplicative utilitarian model

The returned recommendation list for the group is ordered and the top four recommendations are as

follows: The Apartheid Museum, Brightwater Commons, Gold Reef City, and the Botanical Gardens.

9.3.3 Average model

In the average aggregation model, individual ratings for each group member are averaged together

(Amer-Yahia et al., 2009; Cantador & Castells, 2012; Carvalho & Madedo, 2013; Gartrell et al., 2010;

Garcia et al., 2012; Masthoff, 2011; Quijano-Sanchez et al., 2013). This is the aggregation model

which was detailed in the scenario application in the previous chapter, and is the model commonly

followed by a number of group recommender systems (Carvalho & Madedo, 2013; Gartrell et al.,

2010; Kim et al., 2010; Quijano-Sanchez et al., 2013). The output of this model, with reference to

the scenario, is shown in Table 9.4 below.

Recommendation Item Adam Ben Craig Darren Ed Group Total

Apartheid Museum 3.40 3.17 3.37 3.10 3.77 3.36

Botanical Gardens 3.23 2.98 2.84 2.73 3.35 3.03

Brightwater Commons 3.30 3.08 2.82 2.87 3.30 3.07

Church Square 2.73 2.62 2.48 2.47 2.93 2.65

Freedom Park 3.11 2.86 2.59 2.57 2.97 2.82

Gold Reef City 3.14 3.24 2.65 2.70 3.57 3.06

Johannesburg Zoo 3.32 2.86 2.68 2.55 3.17 2.92

Nelson Mandela Square 2.64 2.40 2.12 1.96 2.37 2.30

Orlando Towers 2.60 2.87 2.36 2.74 3.19 2.75

Planetarium 3.35 2.69 2.93 2.47 2.93 2.87

Union Buildings 3.05 2.51 3.15 2.72 3.18 2.92

Voortrekker Monument 3.14 2.85 2.66 2.54 2.91 2.82

Table 9.4: General aggregation models: Average model

The returned recommendation list for the group is ordered and the top four recommendations are as

follows: The Apartheid Museum, Brightwater Commons, Gold Reef City, and the Botanical Gardens.

9.3.4 Average without misery model

In the average without misery aggregation model, the average is calculated similarly to the average model. However, a threshold rating is additionally considered (Cantador & Castells, 2012; Garcia et


al., 2012; Masthoff, 2011). Therefore, any recommendation item with a rating below the predefined threshold is not considered, which ensures that there are no recommendations which certain group members completely dislike (Cantador & Castells, 2012; Masthoff, 2011).

An example of this model with a threshold of 2.5 is shown in Table 9.5 below.

Recommendation Item Adam Ben Craig Darren Ed Group Total

Apartheid Museum 3.40 3.17 3.37 3.10 3.77 3.36

Botanical Gardens 3.23 2.98 2.84 2.73 3.35 3.03

Brightwater Commons 3.30 3.08 2.82 2.87 3.30 3.07

Church Square 2.73 2.62 2.48 2.47 2.93 N/A

Freedom Park 3.11 2.86 2.59 2.57 2.97 2.82

Gold Reef City 3.14 3.24 2.65 2.70 3.57 3.06

Johannesburg Zoo 3.32 2.86 2.68 2.55 3.17 2.92

Nelson Mandela Square 2.64 2.40 2.12 1.96 2.37 N/A

Orlando Towers 2.60 2.87 2.36 2.74 3.19 N/A

Planetarium 3.35 2.69 2.93 2.47 2.93 N/A

Union Buildings 3.05 2.51 3.15 2.72 3.18 2.92

Voortrekker Monument 3.14 2.85 2.66 2.54 2.91 2.82

Table 9.5: General aggregation models: Average without misery model

The returned recommendation list for the group is ordered and the top four recommendations are as

follows: The Apartheid Museum, Brightwater Commons, Gold Reef City, and the Botanical Gardens.
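A minimal sketch of the average without misery model as applied in Table 9.5 is given below (Python, illustrative only; as assumed above, an item is excluded if any member rates it below the threshold).

```python
def average_without_misery(ratings, threshold=2.5):
    """Average per item, but drop any item rated below the threshold by some group member."""
    return {item: round(sum(r) / len(r), 2)
            for item, r in ratings.items()
            if min(r) >= threshold}

ratings = {
    "Apartheid Museum": [3.40, 3.17, 3.37, 3.10, 3.77],
    "Church Square":    [2.73, 2.62, 2.48, 2.47, 2.93],  # excluded: 2.47 < 2.5
    "Planetarium":      [3.35, 2.69, 2.93, 2.47, 2.93],  # excluded: 2.47 < 2.5
}
print(average_without_misery(ratings))  # {'Apartheid Museum': 3.36}
```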

9.3.5 Least misery model

In the least misery aggregation model, the rating returned for the group is the lowest rating for a

particular recommendation item (Amer-Yahia et al., 2009; Bourke et al., 2011; Cantador & Castells,

2012; Carvalho & Madedo, 2013; Gartrell et al., 2010; Masthoff, 2011; Salamó et al., 2012).

Therefore, as stated by Cantador and Castells (2012), the group rating represents the rating of the

least satisfied member. The result of this aggregation model, with application to the scenario, is

shown in Table 9.6 below.

Recommendation Item Adam Ben Craig Darren Ed Group Total

Apartheid Museum 3.40 3.17 3.37 3.10 3.77 3.10

Botanical Gardens 3.23 2.98 2.84 2.73 3.35 2.73

Brightwater Commons 3.30 3.08 2.82 2.87 3.30 2.82

Church Square 2.73 2.62 2.48 2.47 2.93 2.47

Freedom Park 3.11 2.86 2.59 2.57 2.97 2.57

Gold Reef City 3.14 3.24 2.65 2.70 3.57 2.65


Johannesburg Zoo 3.32 2.86 2.68 2.55 3.17 2.55

Nelson Mandela Square 2.64 2.40 2.12 1.96 2.37 1.96

Orlando Towers 2.60 2.87 2.36 2.74 3.19 2.36

Planetarium 3.35 2.69 2.93 2.47 2.93 2.47

Union Buildings 3.05 2.51 3.15 2.72 3.18 2.51

Voortrekker Monument 3.14 2.85 2.66 2.54 2.91 2.54

Table 9.6: General aggregation models: Least misery model

The returned recommendation list for the group is ordered and the top four recommendations are as

follows: The Apartheid Museum, Brightwater Commons, the Botanical Gardens, and Gold Reef City. It

can be noted that this list differs from the previous ones.

9.3.6 Most pleasure model

The most pleasure aggregation model is the opposite of the least misery model. Instead of

returning the lowest rating of a particular member for a particular item, the highest rating is returned

(Bourke et al., 2011; Cantador & Castells, 2012; Gartrell et al., 2010; Masthoff, 2011; Salamó et al.,

2012). This aggregation model is shown, with application to the scenario, in Table 9.7.

Recommendation Item Adam Ben Craig Darren Ed Group Total

Apartheid Museum 3.40 3.17 3.37 3.10 3.77 3.77

Botanical Gardens 3.23 2.98 2.84 2.73 3.35 3.35

Brightwater Commons 3.30 3.08 2.82 2.87 3.30 3.30

Church Square 2.73 2.62 2.48 2.47 2.93 2.93

Freedom Park 3.11 2.86 2.59 2.57 2.97 3.11

Gold Reef City 3.14 3.24 2.65 2.70 3.57 3.57

Johannesburg Zoo 3.32 2.86 2.68 2.55 3.17 3.32

Nelson Mandela Square 2.64 2.40 2.12 1.96 2.37 2.64

Orlando Towers 2.60 2.87 2.36 2.74 3.19 3.19

Planetarium 3.35 2.69 2.93 2.47 2.93 3.35

Union Buildings 3.05 2.51 3.15 2.72 3.18 3.18

Voortrekker Monument 3.14 2.85 2.66 2.54 2.91 3.14

Table 9.7: General aggregation models: Most pleasure model

The returned recommendation list for the group is ordered and the top four recommendations are as

follows: The Apartheid Museum, Gold Reef City, the Botanical Gardens, and the Planetarium. It can

be noted that this list differs from the previous ones.
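The least misery and most pleasure models of the two preceding subsections reduce each item's ratings to a single value using the minimum and the maximum respectively; a minimal illustrative sketch (Python, scenario data abbreviated) follows.

```python
ratings = {
    "Apartheid Museum": [3.40, 3.17, 3.37, 3.10, 3.77],
    "Gold Reef City":   [3.14, 3.24, 2.65, 2.70, 3.57],
}

least_misery = {item: min(r) for item, r in ratings.items()}   # rating of the least satisfied member
most_pleasure = {item: max(r) for item, r in ratings.items()}  # rating of the most satisfied member

print(least_misery)   # {'Apartheid Museum': 3.1, 'Gold Reef City': 2.65}
print(most_pleasure)  # {'Apartheid Museum': 3.77, 'Gold Reef City': 3.57}
```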


9.3.7 Fairness model

The fairness aggregation model is a more complicated aggregation model than the previously

described models (Cantador & Castells, 2012; Masthoff, 2011). The rationale behind the aggregation

model is that it is more bearable to experience or interact with a recommendation item that one does

not like, as long as one is able to visit a recommendation item that one does like (Cantador &

Castells, 2012).

Therefore, this model allows each user to have a turn to choose their top-N highest rated

recommendations (Cantador & Castells, 2012; Masthoff, 2011). The first user makes a choice and

consecutively the others do the same until they all have each had a turn (Cantador & Castells, 2012).

After the first round of choices is completed, a second round begins with the person who chose last in

the first round (Cantador & Castells, 2012). This occurs until all the recommendation items have been

processed. An important note on this aggregation model is that if a user has a number of

recommendations with the same rating in their top-N list, their chosen recommendation must bring the

highest average and least misery to the group (Cantador & Castells, 2012).

With consideration to the scenario, this model is illustrated in greater detail. Assume that the order of

recommendation selection is Adam, Ben, Craig, Darren, and Ed. Additionally assume that each user

chooses their top-N recommendations.

Adam goes first. His top-3 recommendations are the Apartheid Museum, the Planetarium,

and the Johannesburg Zoo. His first choice is the Apartheid Museum.

Ben is next. His top-3 recommendations are Gold Reef City, the Apartheid Museum, and then

Brightwater Commons. His first choice is Gold Reef City.

The third person chosen is Craig. His top-3 choices are the Apartheid Museum, the Union

Buildings, and the Planetarium. In his case, the Apartheid Museum is already chosen by

Adam. Therefore, his next choice is selected, namely the Union Buildings.

Darren is the next person to take a turn. His top-3 choices are the Apartheid Museum,

Brightwater Commons, and the Botanical Gardens. As with Craig, because the Apartheid

Museum is already selected, Brightwater Commons is selected as Darren's choice.

Since the top-4 recommendations are now generated for the group, Ed is not considered. However, if

this scenario were extended to include more recommendations, he would be the next group member

to determine a group recommendation and have the opportunity to take the first turn, since he was

last in the previous round.

Therefore, after the application of the fairness model, the order of the recommendation list is as

follows: the Apartheid Museum, Gold Reef City, the Union Buildings, and Brightwater Commons. It

can be noted that this list differs from the previous ones.
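A minimal sketch of this turn-taking procedure is given below (Python, illustrative only; the tie-handling rule of the full model is omitted and only four group members and five items are included for brevity).

```python
def fairness(ratings, order, n=4):
    """Members take turns picking their highest-rated item that has not been chosen yet."""
    chosen = []
    turn = 0
    while len(chosen) < n:
        member = order[turn % len(order)]
        # The member's preferences, best first, skipping items that were already chosen.
        preferences = sorted(ratings, key=lambda item: ratings[item][member], reverse=True)
        chosen.append(next(item for item in preferences if item not in chosen))
        turn += 1
    return chosen

# ratings[item][member] -> predicted rating, abbreviated from Table 9.1.
ratings = {
    "Apartheid Museum":    {"Adam": 3.40, "Ben": 3.17, "Craig": 3.37, "Darren": 3.10},
    "Gold Reef City":      {"Adam": 3.14, "Ben": 3.24, "Craig": 2.65, "Darren": 2.70},
    "Union Buildings":     {"Adam": 3.05, "Ben": 2.51, "Craig": 3.15, "Darren": 2.72},
    "Brightwater Commons": {"Adam": 3.30, "Ben": 3.08, "Craig": 2.82, "Darren": 2.87},
    "Planetarium":         {"Adam": 3.35, "Ben": 2.69, "Craig": 2.93, "Darren": 2.47},
}
print(fairness(ratings, ["Adam", "Ben", "Craig", "Darren"]))
# ['Apartheid Museum', 'Gold Reef City', 'Union Buildings', 'Brightwater Commons']
```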


9.3.8 Plurality voting model

In the plurality voting aggregation model, group members cast votes for each of their highest rated

individual recommendations (Carvalho & Madedo, 2013; Masthoff, 2004). The recommendations that

receive a majority vote at any one point are those that are selected (Carvalho & Madedo, 2013;

Masthoff, 2004). Should there be a case where there is a tie in the number of votes, both

recommendations are selected (Masthoff, 2004).

The implementation of this aggregation model is best detailed with reference to the scenario. For the

first round of voting, each user selects their highest rated recommendation. For Adam, Craig, Darren,

and Ed this is the Apartheid Museum; for Ben, this is Gold Reef City. Therefore, after the first round,

the Apartheid Museum is selected with a majority of four votes.

In the next round, the next best recommendation is proposed for those group members whose highest rated recommendation has already been selected, while the highest rated recommendation is still put forward for those group members whose choice was not selected in the first round. A similar voting process is then followed as in the first round, and this continues until the required number of group recommendations has been selected. The complete process can be seen with reference to Table 9.8 below.

Group Member | First Round | Second Round | Third Round | Fourth Round
Adam | Apartheid Museum | Planetarium | Planetarium | Planetarium
Ben | Gold Reef City | Gold Reef City | Brightwater Commons | Botanical Gardens
Craig | Apartheid Museum | Union Buildings | Union Buildings | Union Buildings
Darren | Apartheid Museum | Brightwater Commons | Brightwater Commons | Botanical Gardens
Ed | Apartheid Museum | Gold Reef City | Botanical Gardens | Botanical Gardens
Group Total | Apartheid Museum | Gold Reef City | Brightwater Commons | Botanical Gardens

Table 9.8: General aggregation models: Plurality voting model

The final returned recommendation list for the group is as follows: the Apartheid Museum, Gold Reef

City, Brightwater Commons, and the Botanical Gardens. It can be noted that this list differs from the previous lists.
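A minimal sketch of this round-based plurality voting is given below (Python, illustrative only; ties are resolved arbitrarily and the item set is restricted to six attractions).

```python
from collections import Counter

# ratings[member][item] -> predicted rating; item codes as in Table 9.11
# (AM = Apartheid Museum, GRC = Gold Reef City, P = Planetarium,
#  BC = Brightwater Commons, BG = Botanical Gardens, UB = Union Buildings).
ratings = {
    "Adam":   {"AM": 3.40, "GRC": 3.14, "P": 3.35, "BC": 3.30, "BG": 3.23, "UB": 3.05},
    "Ben":    {"AM": 3.17, "GRC": 3.24, "P": 2.69, "BC": 3.08, "BG": 2.98, "UB": 2.51},
    "Craig":  {"AM": 3.37, "GRC": 2.65, "P": 2.93, "BC": 2.82, "BG": 2.84, "UB": 3.15},
    "Darren": {"AM": 3.10, "GRC": 2.70, "P": 2.47, "BC": 2.87, "BG": 2.73, "UB": 2.72},
    "Ed":     {"AM": 3.77, "GRC": 3.57, "P": 2.93, "BC": 3.30, "BG": 3.35, "UB": 3.18},
}

def plurality_voting(ratings, n=4):
    """Each member votes for their highest-rated unselected item; the most voted item wins the round."""
    selected = []
    while len(selected) < n:
        votes = Counter()
        for member_ratings in ratings.values():
            candidates = {i: r for i, r in member_ratings.items() if i not in selected}
            votes[max(candidates, key=candidates.get)] += 1
        selected.append(votes.most_common(1)[0][0])
    return selected

print(plurality_voting(ratings))  # ['AM', 'GRC', 'BC', 'BG'], as in Table 9.8
```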


9.3.9 Approval voting model

In the approval voting aggregation model, a threshold is defined (Bourke et al., 2011; Cantador &

Castells, 2012; Masthoff, 2011). Each item that has a rating above the predefined threshold, receives

a vote (Bourke et al., 2011; Cantador & Castells, 2012; Masthoff, 2011). The item that has the

greatest number of votes is considered as the top recommendation (Cantador & Castells, 2012). For

a threshold of 2.8, with application to the scenario, the result is as per Table 9.9 below.

Recommendation Item Adam Ben Craig Darren Ed Group Total

Apartheid Museum 3.40 3.17 3.37 3.10 3.77 5.00

Botanical Gardens 3.23 2.98 2.84 2.73 3.35 4.00

Brightwater Commons 3.30 3.08 2.82 2.87 3.30 5.00

Church Square 2.73 2.62 2.48 2.47 2.93 1.00

Freedom Park 3.11 2.86 2.59 2.57 2.97 3.00

Gold Reef City 3.14 3.24 2.65 2.70 3.57 3.00

Johannesburg Zoo 3.32 2.86 2.68 2.55 3.17 3.00

Nelson Mandela Square 2.64 2.40 2.12 1.96 2.37 0.00

Orlando Towers 2.60 2.87 2.36 2.74 3.19 2.00

Planetarium 3.35 2.69 2.93 2.47 2.93 3.00

Union Buildings 3.05 2.51 3.15 2.72 3.18 3.00

Voortrekker Monument 3.14 2.85 2.66 2.54 2.91 3.00

Table 9.9: General aggregation models: Approval voting model

The returned recommendation list for the group is ordered and the top-4 recommendations are as

follows: the Apartheid Museum, Brightwater Commons, the Botanical Gardens, and Freedom Park.

Again, it can be noted that this list differs from the previous lists.
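A minimal sketch of the approval vote count used in Table 9.9 is shown below (Python, illustrative only; a threshold of 2.8, as above).

```python
def approval_votes(ratings, threshold=2.8):
    """Count, per item, how many group members rate it above the threshold."""
    return {item: sum(1 for r in member_ratings if r > threshold)
            for item, member_ratings in ratings.items()}

ratings = {
    "Apartheid Museum":      [3.40, 3.17, 3.37, 3.10, 3.77],
    "Botanical Gardens":     [3.23, 2.98, 2.84, 2.73, 3.35],
    "Nelson Mandela Square": [2.64, 2.40, 2.12, 1.96, 2.37],
}
print(approval_votes(ratings))
# {'Apartheid Museum': 5, 'Botanical Gardens': 4, 'Nelson Mandela Square': 0}
```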

9.3.10 Borda count model

In the Borda count aggregation model, each group member’s ratings are ordered from the lowest

rating value to the highest rating value (Bourke et al., 2011; Cantador & Castells, 2012; Masthoff,

2011; Salamó et al., 2012). Thereafter, each item is given a score that starts at zero and goes up (Bourke et al., 2011; Cantador & Castells, 2012; Masthoff, 2011). Therefore, the lowest rated recommendation item receives a score of zero, the second lowest receives a score of one, and so forth (Bourke et al., 2011; Cantador & Castells, 2012; Masthoff, 2011; Salamó et al., 2012). If there are multiple items with the same rating, then the scores for those items are averaged (Cantador & Castells, 2012; Salamó et al., 2012). For example, if two items are tied and would otherwise have received scores of two and three, each receives a score of (2 + 3) / 2 = 2.5.


Thereafter, the additive utilitarian model is used to sum up each rating score per recommendation

item (Cantador & Castells, 2012). The returned recommendations are ordered from the highest score to the lowest score (Cantador & Castells, 2012). The application of this aggregation model is shown,

with reference to the scenario in Table 9.10.

Recommendation Item Adam Ben Craig Darren Ed Group Total

Apartheid Museum 11.00 10.00 11.00 11.00 11.00 54.00

Botanical Gardens 7.00 8.00 8.00 8.00 9.00 40.00

Brightwater Commons 8.00 9.00 7.00 10.00 8.00 42.00

Church Square 2.00 2.00 2.00 2.00 2.00 10.00

Freedom Park 4.00 6.00 3.00 5.00 4.00 22.00

Gold Reef City 5.00 11.00 4.00 6.00 10.00 36.00

Johannesburg Zoo 9.00 5.00 6.00 4.00 5.00 29.00

Nelson Mandela Square 1.00 0.00 0.00 0.00 0.00 1.00

Orlando Towers 0.00 7.00 1.00 9.00 7.00 24.00

Planetarium 10.00 3.00 9.00 1.00 3.00 26.00

Union Buildings 3.00 1.00 10.00 7.00 6.00 27.00

Voortrekker Monument 6.00 4.00 5.00 3.00 1.00 19.00

Table 9.10: General aggregation models: Borda count model

The returned recommendation list for the group would, therefore, be ordered and output as follows:

the Apartheid Museum, Brightwater Commons, the Botanical Gardens, and Gold Reef City. Again, it

can be noted that this list differs from the previous lists.
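A minimal sketch of the per-member Borda scoring, following the tie-averaging rule described above, is given below (Python, illustrative only; an abbreviated subset of Ben's ratings is used).

```python
def borda_scores(member_ratings):
    """Assign positional scores 0..n-1 from lowest to highest rating, averaging the scores of tied items."""
    ordered = sorted(member_ratings, key=member_ratings.get)      # items, lowest rating first
    positions = {item: pos for pos, item in enumerate(ordered)}   # provisional scores 0..n-1
    scores = {}
    for item, rating in member_ratings.items():
        tied = [i for i in member_ratings if member_ratings[i] == rating]
        scores[item] = sum(positions[i] for i in tied) / len(tied)  # average over the tie group
    return scores

# Ben's individual ratings (from Table 9.1); note the tie between Freedom Park and Johannesburg Zoo.
ben = {"Apartheid Museum": 3.17, "Gold Reef City": 3.24, "Freedom Park": 2.86,
       "Johannesburg Zoo": 2.86, "Nelson Mandela Square": 2.40}
print(borda_scores(ben))
# {'Apartheid Museum': 3.0, 'Gold Reef City': 4.0, 'Freedom Park': 1.5,
#  'Johannesburg Zoo': 1.5, 'Nelson Mandela Square': 0.0}
```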

9.3.11 Copeland rule model

The Copeland rule aggregation model is a voting model whereby a list of recommendations is ordered

by the difference between how many times a particular recommendation item outvotes other items

and is outvoted by other items (Cantador & Castells, 2012; Masthoff, 2004). The final group

recommendation list is ordered from the recommendation that outvotes the other recommendations

the most to the least (Cantador & Castells, 2012; Masthoff, 2004). As with the previous aggregation

models that are slightly more complex, this model is best explained with reference to the scenario.

Consider the Apartheid Museum recommendation first. Because it cannot be compared to itself, this

is assigned a value of 0. The next recommendation for it to be compared against is the Botanical

Gardens. For each group member, the Apartheid Museum's rating is compared against the corresponding rating for the Botanical Gardens, and the higher rating receives the vote. For example, Adam's rating for the Apartheid Museum is 3.40, whilst for the

Botanical Gardens it is 3.23. Because 3.40 is greater than the Botanical Gardens rating of 3.23, the

Apartheid Museum rating gets one vote. Next, Ben’s rating is considered for these two tourist

attractions. His rating for the Apartheid Museum is 3.17, whereas it is 2.98 for the Botanical Gardens.


Therefore, the Apartheid Museum gets another vote. Upon going through Craig, Darren, and Ed’s

comparative ratings, the final score ends with five votes for the Apartheid Museum and 0 for the

Botanical Gardens. As a result, a -1 score is attributed to the Botanical Gardens column so as to

indicate that it lost when compared to the Apartheid Museum recommendation. A similar process is

completed whereby every other tourist attraction is compared against the Apartheid Museum. Once

this has been done, the next tourist attraction is selected. In this case, it is the Botanical Gardens.

Now, the Botanical Gardens recommendation is compared with every other recommendation. This

process is followed until every recommendation has been compared to every other recommendation.

Eventually, after this process completes, a rating matrix, such as the following in Table 9.11, is

presented to the group.

AM BG BC CS FP GRC JZ NMS OT P UB VM

AM 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1

BG 1 0 1 -1 -1 -1 -1 -1 -1 -1 -1 -1

BC 1 -1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1

CS 1 1 1 0 1 1 1 -1 1 1 1 1

FP 1 1 1 -1 0 1 1 -1 1 -1 1 -1

GRC 1 1 1 -1 -1 0 -1 -1 -1 -1 -1 -1

JZ 1 1 1 -1 -1 1 0 -1 1 -1 1 -1

NMS 1 1 1 1 1 1 1 0 1 1 1 1

OT 1 1 1 -1 -1 1 -1 -1 0 -1 -1 -1

P 1 1 1 -1 1 1 1 -1 1 0 1 -1

UB 1 1 1 -1 -1 1 -1 -1 1 1 0 -1

VM 1 1 1 -1 1 1 1 -1 1 1 1 0

Group Total 11 7 9 -9 -3 5 -1 -11 3 -3 1 -7

* AM = Apartheid Museum, BG = Botanical Gardens, BC = Brightwater Commons, CS = Church Square, FP = Freedom Park, GRC = Gold Reef City, JZ = Johannesburg Zoo, NMS = Nelson Mandela Square, OT = Orlando Towers, P = Planetarium, UB = Union Buildings, VM = Voortrekker Monument

Table 9.11: General aggregation models: Copeland rule model

The returned recommendation list for the group is ordered and the top four recommendations are as

follows: the Apartheid Museum, Brightwater Commons, the Botanical Gardens, and Gold Reef City.
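A minimal sketch of the Copeland score computation is shown below (Python, illustrative only, with three items): each pairwise contest is won by the item preferred by more group members, and an item's final score is its number of wins minus its number of losses.

```python
def copeland_scores(ratings):
    """ratings[item] -> list of individual ratings; returns wins minus losses per item."""
    items = list(ratings)
    scores = {item: 0 for item in items}
    for a in items:
        for b in items:
            if a == b:
                continue
            # Members who prefer one item over the other in this pairwise contest.
            prefer_a = sum(1 for ra, rb in zip(ratings[a], ratings[b]) if ra > rb)
            prefer_b = sum(1 for ra, rb in zip(ratings[a], ratings[b]) if rb > ra)
            if prefer_a > prefer_b:
                scores[a] += 1
            elif prefer_b > prefer_a:
                scores[a] -= 1
    return scores

ratings = {
    "Apartheid Museum":  [3.40, 3.17, 3.37, 3.10, 3.77],
    "Botanical Gardens": [3.23, 2.98, 2.84, 2.73, 3.35],
    "Church Square":     [2.73, 2.62, 2.48, 2.47, 2.93],
}
print(copeland_scores(ratings))
# {'Apartheid Museum': 2, 'Botanical Gardens': 0, 'Church Square': -2}
```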

9.3.12 Summary

Here, a brief summary of the aggregation methods discussed is given in Table 9.12 below.


Aggregation model | Description | Scenario results
Additive utilitarian model | Individual rating scores are summed together | AM, BC, GRC, BG
Multiplicative utilitarian model | Individual rating scores are multiplied together | AM, BC, GRC, BG
Average model | Individual rating scores are averaged together | AM, BC, GRC, BG
Average without misery model | Individual rating scores above a predefined threshold are averaged together | AM, BC, GRC, BG
Least misery model | The lowest individual rated items are considered | AM, BC, BG, GRC
Most pleasure model | The highest individual rated items are considered | AM, GRC, BG, P
Fairness model | Allows each group member to have a turn in selecting their highest rated recommendation item | AM, GRC, UB, BC
Plurality voting model | Group members cast votes for recommendation items, with the highest voted items chosen | AM, GRC, BC, BG
Approval voting model | Each recommendation item with a rating score above a predefined threshold receives a vote; the highest voted items are chosen | AM, BC, BG, FP
Borda count model | Orders each group member's recommendation items by rating score, assigning the lowest a score of 0, the next a score of 1, and so on until all items are scored; the scores are then summed, with the highest scored items chosen | AM, BC, BG, GRC
Copeland rule model | Voting model whereby recommendation items are ordered by the difference between how many times a particular recommendation item outvotes other items and is outvoted by other items; the highest scored items are chosen | AM, BC, BG, GRC

* AM = Apartheid Museum, BC = Brightwater Commons, BG = Botanical Gardens, FP = Freedom Park, GRC = Gold Reef City, P = Planetarium, UB = Union Buildings

Table 9.12: General aggregation models: Summary of results

At this point, the question arises as to which aggregation model is the most relevant as well as the

most applicable for the proposed group recommender model. This is the consequent topic of

discussion in the next section.

9.4 Evaluation of aggregation models

The purpose of this section is to determine a suitable aggregation model for implementation in the

proposed group recommender model. This evaluation is done by reviewing the research findings of


other authors who have, themselves, evaluated a number of, or all of, the aggregation models presented in the previous section. Upon identifying the findings of these evaluations, an aggregation model for this research is motivated.

9.4.1 Evaluation results from Masthoff (2011)

In Masthoff’s (2011) paper, many of the above listed aggregation functions are considered and

reviewed with application to a TV recommender system. This recommender system recommends a

sequence of television programmes to a group of viewers based on the group’s preferences

(Masthoff, 2011). In analysing not only the effectiveness of these aggregation models, but also how

each one best represents true human group interactions, two experiments were conducted (Masthoff,

2011).

Each group member was given a list of item ratings from every other group member

(Masthoff, 2011). Each group member was then asked to determine which seven television

programs the group should watch. Upon reviewing each group member’s list, a number of

outcomes were observed.

o Fairness and the prevention of misery are important considerations of the group

(Cantador & Castells, 2012; Masthoff, 2004; Masthoff, 2011).

o Candidate aggregation models in line with this observation are Average, Least

Misery, as well as Average without Misery (Baltrunas et al., 2010; Cantador &

Castells, 2012; Garcia et al., 2012; Gartrell et al., 2010; Masthoff, 2004; Masthoff,

2011).

Each group member’s individual preferences were aggregated using each of the above

aggregation methodologies. Thereafter, each group member was presented with the results

and asked to identify which set of results they thought would bring most satisfaction to the

group as a whole (Masthoff, 2011).

o The aggregation models which featured strongly were the multiplicative utilitarian

model, Borda Count, average, average without misery, and most pleasure (Masthoff,

2011).

The conclusion of these results is that aggregation models which promote fairness, satisfy the entire

group, and prevent misery are the models deemed most important to the group (Cantador & Castells,

2012; Masthoff, 2004; Masthoff, 2011). As a result, aggregation models such as Average, Average

without Misery, and Least Misery are deemed to be plausible aggregation models which achieve

these objectives (Baltrunas et al., 2010; Cantador & Castells, 2012; Garcia et al., 2012; Gartrell et al.,

2010; Masthoff, 2004; Masthoff, 2011). However, it is important to note that there was no dominant

aggregation model identified in the evaluation (Masthoff, 2004).


9.4.2 Evaluation results from Baltrunas et al. (2010)

In the implementation of a group recommender system with the application of rank aggregation,

Baltrunas et al. (2010) selected a number of aggregation models as a means of evaluation. The

relevant aggregation models evaluated were Borda Count, Least Misery, Average, and the Spearman

Footrule (Baltrunas et al., 2010).

Upon the conclusion of the evaluation, it was noted that when different aggregation models were implemented, there was no single dominant model for every evaluation (Baltrunas et al., 2010).

Often, the aggregation model which performed best was related to the size of the group as well as the

degree of similarity between the members of the group (Baltrunas et al., 2010).

9.4.3 Evaluation results from Gartrell et al. (2010)

The group recommender system developed by Gartrell et al. (2010) implemented a dynamic

aggregation model based upon the notion that there is no dominant aggregation model for every

group context. As a result, Gartrell et al. (2010) attempt to cater for this uncertainty by evaluating the

social makeup of the group and then applying a relevant aggregation model based upon it.

Therefore, in the evaluation conducted, a number of observations were made.

In groups where people know each other very well, the most pleasure model was found to be

most suitable (Gartrell et al., 2010).

In groups containing people who do not know each other very well, the least misery model

was found to be most suitable (Gartrell et al., 2010).

If the strength of the group's social relationships is deemed to be between a predefined minimum and maximum threshold, then the average model was found to be most suitable (Gartrell et al., 2010), as sketched in the example below.
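The following is a small illustrative sketch of this dynamic selection idea (Python); the social strength measure and the threshold values are hypothetical and are not taken from Gartrell et al. (2010).

```python
def select_aggregation(social_strength, low=0.3, high=0.7):
    """Pick an aggregation function based on how well the group members know each other.

    social_strength is assumed to be a normalised 0-1 score; the thresholds are hypothetical.
    """
    if social_strength >= high:   # close-knit group
        return lambda ratings: max(ratings)              # most pleasure
    if social_strength <= low:    # group of relative strangers
        return lambda ratings: min(ratings)              # least misery
    return lambda ratings: sum(ratings) / len(ratings)   # average

aggregate = select_aggregation(0.5)
print(aggregate([3.40, 3.17, 3.37, 3.10, 3.77]))  # average of the Apartheid Museum ratings, about 3.36
```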

9.4.4 Motivating an aggregation model

From the evaluations and consequent observations conducted by various authors in comparing

aggregation models with one another, there are a number of common results.

1. There is no single aggregation model which works best in every possible scenario

(Baltrunas et al., 2010; Gartrell et al., 2010; Masthoff, 2004). This would seem to indicate

that an evaluation, specific to the context of the group recommender system, would have to be

done in order to determine a suitable aggregation model.

2. The importance of considering fairness as well as the desire to ensure that no group

member is completely dissatisfied (Cantador & Castells, 2012; Masthoff, 2004;

Masthoff, 2011). Any nominated aggregation model should promote both of these values.


In conclusion, therefore, this research identifies that there is no single aggregation model which works best for every scenario. However, as per Gartrell et al.'s (2010) approach, a dynamic approach to the application of a relevant aggregation model could be used, based on the relationship between group members. Additionally, while the findings of Masthoff (2004) suggest that group members themselves deem the average, least misery, and average without misery models to be the most likely aggregation methods to be adopted, the author would still need to verify the performance of each aggregation algorithm. As a result, this is the basis for evaluation in further chapters.

9.5 Conclusion

In this chapter, the topic of group aggregation was discussed. The main focus of the chapter was the

introduction, application, and evaluation of various aggregation models. The intended result of the

chapter was identifying a relevant aggregation model for the proposed group recommender system

model. The eventual conclusion, however, is that there is no dominant aggregation model and that a

suitable aggregation model is to be evaluated in a further chapter.

In the next chapter on group recommendation, the topic of satisfaction is introduced. The focus of this

chapter is on how satisfaction can be determined and measured, both from the perspective of the

individual group member, as well as for the group as a whole. The end result of this chapter,

therefore, is a nominated individual and group satisfaction algorithm to be implemented in the

proposed group recommender model.


Chapter 10 Group recommendation: satisfaction


10.1 Introduction

In previous chapters, numerous group recommendation sub-processes for this research were

presented. These sub-processes included the definition of the preference elicitation process, the

implementation of personality and trust in group recommendation, as well as the aggregation process.

This chapter discusses the last sub-process of group recommendation for the proposed group

recommender model, namely satisfaction.

In terms of group recommendation, satisfaction is important as it indicates how content the group and

each individual group member is with the group recommendations returned by the system. While it is

impossible to ensure that everyone in the group is completely satisfied all the time, a group

recommender system has to be able to monitor the satisfaction levels of the group and ensure that

group members do not become too dissatisfied (Masthoff, 2011; Quijano-Sanchez et al., 2013).

Therefore, the purpose of this chapter is to nominate a candidate algorithm for measuring individual

and group satisfaction in the proposed group recommender model. In order to achieve this, the

chapter is set out as follows. The first section details how to measure the satisfaction levels of

individual group members. This principle is then extended in the second section regarding groups and

how satisfaction for an entire group can be determined. In the third section, individual and group

satisfaction is illustrated with reference to a scenario. In the last section, the chapter concludes.

10.2 Individual satisfaction

The purpose of individual satisfaction is to determine the satisfaction levels of each individual group

member. There are a number of methodologies and potential algorithms which can be used to do this.

This section formally presents four with the intention of nominating a methodology for measuring

individual satisfaction in the proposed group recommender model.

10.2.1 Expected search length (ESL) measure (Quijano-Sanchez et al., 2013)

In the implementation of their group recommender system, Quijano-Sanchez et al. (2013) measure

individual satisfaction by determining the expected search length (ESL) of each group member. This

measure compares two lists. The first list contains the personalised recommendation items calculated for the group member. The second list comprises the final group recommendation (Quijano-Sanchez et al., 2013).

The ESL measure determines a satisfaction rating based on how high up in the group member's personal list of recommendation items the group recommendations appear. The higher up on the personal list the group recommendations are, the more satisfied a system user is calculated

to be. However, the converse is also true. For example, if a top-3 generated movie list matches the


first 3 positions of the group member’s personal list, then the ESL measure considers this group

member to be totally satisfied. If the top-3 generated movie list is within the top-4 recommendations

for that group member, the ESL measure reflects a highly satisfied group member. This can continue

to a specific point whereby the ESL measure determines a group member to be wholly unsatisfied

(Quijano-Sanchez et al., 2013).

Formally, the satisfaction measurement, as specified by the ESL measure, can be defined as per

Formula 10.1 below.

In Formula 10.1, a number of threshold values have been defined. The first set of threshold values is the satisfaction scores, presented on a scale from 1.0 to 0.0, with 1.0 indicating total satisfaction and 0.0 indicating no satisfaction. The second set of threshold values is the numbers 3 to 12. These

numbers reflect how high in the list the group recommendations need to appear in a group member’s

personal list for a relevant satisfaction level to occur. For example, if the first three recommendations

of a group member’s personal list match the group recommendation list, then a system determines

the group member to be fully satisfied with the group recommendation. Consequently, a satisfaction

rating of 1.0 is issued. However, if the group recommendations were only identified by the tenth

position in a system user’s personal list, then a satisfaction rating of 0.4 would be returned.
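Formula 10.1, given below, formalises this threshold mapping. As a concrete reading, the following Python sketch first derives the ESL value as the deepest position in the member's personal list at which any group recommendation is found, and then applies the same mapping; the function names and list representation are assumptions made purely for illustration.

def expected_search_length(group_list, individual_list):
    # Deepest (1-based) position in the individual list at which any of the
    # group recommendations is found; assumes every group recommendation
    # appears somewhere in the member's personal list.
    return max(individual_list.index(item) + 1 for item in group_list)

def esl_satisfaction(group_list, individual_list):
    # Map the ESL value onto the threshold scale of Formula 10.1.
    # For a top-3 group list the ESL can never be lower than 3.
    depth = expected_search_length(group_list, individual_list)
    for threshold, score in [(3, 1.0), (4, 0.9), (6, 0.8), (8, 0.6), (10, 0.4), (12, 0.2)]:
        if depth <= threshold:
            return score
    return 0.0

# A top-3 group list whose items sit at positions 1, 2, and 10 of the member's
# personal list yields a satisfaction of 0.4 (the example used in Section 10.2.5).
personal_list = ["item%d" % position for position in range(1, 13)]
print(esl_satisfaction(["item1", "item2", "item10"], personal_list))  # -> 0.4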

Formula 10.1: ESL satisfaction measure (Quijano-Sanchez et al., 2013)

The satisfaction, sat, for a user, u, in a group, G, can be defined as:

$$\mathrm{sat}(u,G) = \begin{cases} 1.0 & \text{if } \mathrm{ESL}(GRL, IURL) = 3 \\ 0.9 & \text{if } \mathrm{ESL}(GRL, IURL) \leq 4 \\ 0.8 & \text{if } \mathrm{ESL}(GRL, IURL) \leq 6 \\ 0.6 & \text{if } \mathrm{ESL}(GRL, IURL) \leq 8 \\ 0.4 & \text{if } \mathrm{ESL}(GRL, IURL) \leq 10 \\ 0.2 & \text{if } \mathrm{ESL}(GRL, IURL) \leq 12 \\ 0.0 & \text{if } \mathrm{ESL}(GRL, IURL) > 12 \end{cases}$$

Where ESL is the expected search length, GRL is the group recommendation list, and IURL is the individual user recommendation list.

10.2.2 Satisfaction measure by Carvalho and Macedo (2013)

A simpler and perhaps more intuitive method of measuring satisfaction is adopted by Carvalho and Macedo (2013). In this satisfaction algorithm, the individual ratings of the group member for the generated set of group recommendation items are averaged across each recommendation item. The higher the average, the more satisfied a user is determined to be (Carvalho & Macedo, 2013).

More formally, therefore, this is presented as per Formula 10.2 below.


10.2.3 Mean absolute error (MAE) measure by Garcia et al. (2012)

One method of measuring satisfaction, adopted by Garcia et al. (2012) in their tourist recommender

system, is that of adapting the mean absolute error (MAE) formula. Traditionally, the MAE measure

determines the margin of error between a system predicted rating and the real user rating. The

smaller this margin, the more accurate the system rating is determined to be (Garcia et al., 2012).

Garcia et al. (2012) decided to measure satisfaction on a similar principle. The logic behind this

approach is that the smaller the deviation between the predicted, individual rating and the group

recommended rating, the more satisfied a group member is. As per the previous formula, this output

is averaged across all the recommendation items (Garcia et al., 2012).

Formally defined, this would be as per Formula 10.3 below.
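Both averaging-based measures (Formulas 10.2 and 10.3, reproduced below) can be expressed in a few lines of Python. The dictionary-based rating representation is an assumption for illustration, and the example values are taken from Adam's scenario in Section 10.4.

def average_satisfaction(user_ratings, group_list):
    # Formula 10.2: the mean of the member's own ratings over the group list.
    return sum(user_ratings[item] for item in group_list) / len(group_list)

def mae_deviation(user_ratings, group_ratings, group_list):
    # Formula 10.3: the mean absolute deviation between the member's ratings and
    # the group ratings; a smaller value indicates a more satisfied member.
    return sum(abs(user_ratings[item] - group_ratings[item]) for item in group_list) / len(group_list)

adam_ratings = {"Apartheid Museum": 5.0, "Brightwater Commons": 3.6, "Gold Reef City": 4.0}
group_ratings = {"Apartheid Museum": 3.36, "Brightwater Commons": 3.07, "Gold Reef City": 3.06}
top_3 = list(group_ratings)

print(round(average_satisfaction(adam_ratings, top_3), 2))          # -> 4.2
print(round(mae_deviation(adam_ratings, group_ratings, top_3), 2))  # -> 1.04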

Formula 10.2: Satisfaction measure by Carvalho and Macedo (2013)

The satisfaction, sat, for a user, u, with a generated group recommendation list, GRL, is defined as:

$$\mathrm{sat}(u, GRL) = \frac{\sum_{i \in GRL} r_{u,i}}{|GRL|}$$

Where $r_{u,i}$ is the explicit or inferred rating determined by the system for user, u, for recommendation item, i, and |GRL| is the total number of group recommendations generated by the system.

Formula 10.3: Mean absolute error (MAE) measure by Garcia et al. (2012)

The satisfaction, sat, for a user, u, with a generated group recommendation list, GRL, is defined as:

$$\mathrm{sat}(u, GRL) = \frac{\sum_{i \in GRL} \lvert r_{u,i} - r_{G,i} \rvert}{|GRL|}$$

Where $r_{u,i}$ is the explicit or inferred rating determined by the system for user, u, for recommendation item, i, $r_{G,i}$ is the group rating for recommendation item, i, and |GRL| is the total number of group recommendations generated by the system.

10.2.4 Masthoff's (2004) individual satisfaction function

In Masthoff's (2004) individual satisfaction function, satisfaction is based on how highly a group member has personally rated the returned group recommendation item results. Therefore, Masthoff's (2004) function goes through each group recommendation item and sums the group member's


personal ratings for these items together. Thereafter, the score is normalised by dividing it by the sum of the maximum rating scores attributed by the user across the same number of recommendation items (Masthoff, 2004). As a result, if the group member has rated these group recommendation items highly, then a high satisfaction score is determined. However, the converse is also true.

Another consideration by Masthoff (2004) for this satisfaction function is that of linearity. This consideration states that the difference between ratings such as 5.0 and 4.5 is not the same as the difference between ratings such as 3.0 and 2.5, because these differences represent different impacts. Therefore, in order to cater for this, Masthoff (2004) proposes a rating valuation mapping table whereby the rating for each recommendation item is mapped to a numeric scale representing these impact differences. An example of such a mapping table is presented in Table 10.1 below.

Rating Valuation: 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0
Mapped Rating Valuation: -25, -16, -9, -4, -1, 1, 4, 9, 16, 25

Table 10.1: Individual satisfaction: Rating valuation mapping table

Therefore, with reference to Table 10.1, a rating of 5.0 is mapped to 25 and 4.5 to 16. This represents a difference of 9. However, a rating of 3.0 is mapped to 1 and 2.5 is mapped to -1. This represents a difference of only 2. Consequently, the former scenario's rating difference is considered more impactful than the latter scenario's rating difference.

In conclusion, the formal definition for this satisfaction function is given in Formula 10.4 below.
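To tie Formula 10.4 to the rating valuation mapping of Table 10.1, the following Python sketch is offered. The function names, the dictionary layout, and the nearest-value treatment of in-between ratings such as 3.60 (mirroring Section 10.4.1) are assumptions made for illustration; only a subset of Adam's ratings from Table 10.4 is shown.

# Table 10.1: rating valuations mapped to an impact scale.
MAPPED_VALUATION = {0.5: -25, 1.0: -16, 1.5: -9, 2.0: -4, 2.5: -1,
                    3.0: 1, 3.5: 4, 4.0: 9, 4.5: 16, 5.0: 25}

def map_rating(rating):
    # Map a rating to the valuation of the nearest entry on the impact scale.
    nearest = min(MAPPED_VALUATION, key=lambda value: abs(value - rating))
    return MAPPED_VALUATION[nearest]

def masthoff_satisfaction(user_ratings, group_list):
    # Formula 10.4: mapped ratings of the group items, normalised by the
    # member's top-|GRL| mapped ratings from their own list.
    n = len(group_list)
    numerator = sum(map_rating(user_ratings[item]) for item in group_list)
    top_own = sorted(user_ratings.values(), reverse=True)[:n]
    denominator = sum(map_rating(rating) for rating in top_own)
    return numerator / denominator

adam_ratings = {"Apartheid Museum": 5.0, "Orlando Towers": 4.5, "Gold Reef City": 4.0,
                "Union Buildings": 4.0, "Brightwater Commons": 3.6}
print(masthoff_satisfaction(adam_ratings,
                            ["Apartheid Museum", "Brightwater Commons", "Gold Reef City"]))
# -> 0.76, matching Calculation 10.1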

Formula 10.4: Masthoff's (2004) individual satisfaction function

The satisfaction for a user, u, with a generated group recommendation list, GRL, is defined as:

$$\mathrm{sat}(u, GRL) = \frac{\sum_{i \in GRL} r_{u,i}}{\sum_{i \in IURL} r_{u,i}}$$

Where IURL is the individual user recommendation list, bound to the same number of items as the GRL (that is, the user's |GRL| highest-rated items), and $r_{u,i}$ is the rating determined by user, u, for recommendation item, i.

10.2.5 Motivating an individual satisfaction function

For this research, the method chosen for the implementation of measuring individual satisfaction follows that of Masthoff's (2004) satisfaction function. There are a number of reasons for this.

Linearity. Masthoff's (2004) function caters for the impact differences between rating scores at different scales. The other satisfaction functions treat these differences equally (Masthoff, 2004).


Personalisation. Only Masthoff's (2004) satisfaction function normalises over the personal ratings of the user instead of the total number of recommendations. By normalising over the maximum ratings in the group member's individual recommendation list, a more personal and accurate reflection of individual satisfaction is obtained than if the score were simply averaged.

Intuitiveness. Masthoff’s (2004) function is an intuitive and easily understandable method of

determining satisfaction. Quijano-Sanchez et al.’s (2013) ESL measure is not as intuitive.

Consider the following scenario to illustrate this. Assume a group recommendation was

returned in which two of the group recommendations appear in a group member’s top list of

personalised preferences. However, assume the third group recommendation only appears in

the tenth position of the group member’s list. With reference to Formula 10.1, the group

member would obtain a satisfaction of 0.4. Therefore, despite the fact that a group member’s

top two preferences were matched by the group recommendation, this is disregarded and the

satisfaction is brought down by the position of the third group recommendation. Masthoff's

(2004) method is more intuitive as it considers the personal rating score given by the group

member for each group recommendation item.

Therefore, because Masthoff’s (2004) satisfaction function is the only function that considers linearity,

because it is a personalised measure of satisfaction, and because it is understandable and intuitive, it

is motivated as the method of measuring individual satisfaction for the proposed group recommender

model. The next section discusses how group satisfaction can be measured.

10.3 Group satisfaction

Besides determining the satisfaction levels of each individual group member, a recommender system

also needs to be able to determine the satisfaction levels of the group as a whole. As a result, the

focus of this section is the selection of a methodology by which this can be achieved for the proposed

group recommender system.

10.3.1 Measuring group satisfaction

A common approach to determine the group satisfaction level is to average out the individual

satisfaction levels of each individual group member (Bourke et al., 2011; Carvalho & Macedo, 2013;

Garcia et al., 2012; Kim et al., 2010). This is as per Formula 10.5 below.


A method which builds upon this definition and extends it is proposed by Quijano-Sanchez et al.

(2013). In this method of measuring group satisfaction, they do not only use the average individual

satisfaction measure, but additionally consider the standard deviation (Quijano-Sanchez et al., 2013).

The purpose of this is to ensure that a final group satisfaction measure entails a high average

satisfaction amongst the group as well as a low deviation (Quijano-Sanchez et al., 2013). By

combining these two together, a more comprehensive and true measure of group satisfaction is

reflected (Quijano-Sanchez et al., 2013). As a result, this method of group satisfaction is adopted for the

proposed group recommender model.

The formula for this extended measure of group satisfaction is presented in Formula 10.6 below.

In previous sections, the topics of both individual as well as group satisfaction have been discussed

with relevant satisfaction functions chosen for this research. In the next section, these satisfactions

are illustrated with reference to a scenario application.

Formula 10.5: Group satisfaction - Averaging individual satisfaction

The satisfaction, sat, for a group, G, with a generated group recommendation list, GRL, is defined as:

$$\mathrm{sat}(G, GRL) = \frac{\sum_{u \in G} \mathrm{sat}(u, GRL)}{|G|}$$

Where u is a member of group G, $\mathrm{sat}(u, GRL)$ represents an individual group member, u's, satisfaction with the group recommendation, and |G| is the total number of group members.

Formula 10.6: Group satisfaction measure (Quijano-Sanchez et al., 2013)

The satisfaction, sat, for a group, G, with a generated group recommendation list, GRL, is defined as:

$$\mathrm{sat}(G, GRL) = \alpha \cdot \frac{\sum_{u \in G} \mathrm{sat}(u, GRL)}{|G|} - \beta \cdot \sqrt{\frac{\sum_{u \in G} \left(\mathrm{sat}(u, GRL) - \overline{\mathrm{sat}}(G, GRL)\right)^2}{|G|}}$$

Where:
u is a member of group G
$\mathrm{sat}(u, GRL)$ represents an individual group member, u's, satisfaction with the group recommendation
$\overline{\mathrm{sat}}(G, GRL)$ represents the average satisfaction for the group
|G| is the total number of group members
$\alpha$ and $\beta$ are weighting factors

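A compact sketch of Formula 10.6 is given below; it reduces to Formula 10.5 when $\beta$ is set to 0 and $\alpha$ to 1. The default weighting factors of 1 follow the scenario application in Section 10.4.2 and are otherwise an assumption.

from statistics import mean, pstdev

def group_satisfaction(individual_satisfactions, alpha=1.0, beta=1.0):
    # Formula 10.6: weighted mean of the individual satisfaction scores,
    # penalised by their (population) standard deviation.
    return alpha * mean(individual_satisfactions) - beta * pstdev(individual_satisfactions)

# Adam's group from Section 10.4. This prints 0.761; Calculation 10.4 reports
# 0.765 because the mean is first rounded to 0.83 in Calculation 10.2.
print(round(group_satisfaction([0.76, 0.78, 0.90, 0.91, 0.78]), 3))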

10.4 Example application

In this section, the scenario originally presented in Chapter 7, regarding Adam and his friends touring Johannesburg, is continued. In the previous chapter, a number of aggregation models were applied

with reference to Adam’s group. In this chapter, it is assumed that the average aggregation model

was selected and that a list of group recommendations was determined as per Table 10.2 below.

Tourist attraction Rating

Apartheid Museum 3.36

Botanical Gardens 3.03

Brightwater Commons 3.07

Church Square 2.65

Freedom Park 2.82

Gold Reef City 3.06

Johannesburg Zoo 2.92

Nelson Mandela Square 2.30

Orlando Towers 2.75

Planetarium 2.87

Union Buildings 2.92

Voortrekker Monument 2.82

Table 10.2: Example application: Adam’s group recommendation

Therefore, the final top-3 group recommendation for Adam’s group is as per Table 10.3 below.

Tourist attraction Rating

Apartheid Museum 3.36

Brightwater Commons 3.07

Gold Reef City 3.06

Table 10.3: Example application: Adam’s top-3 group recommendation

The purpose of this section is to practically illustrate the two satisfaction functions defined. As a result,

this section contains two subsections. The first subsection is the application of individual satisfaction.

The second subsection then illustrates group satisfaction.

10.4.1 Individual satisfaction

The purpose of this subsection is to determine the individual level of satisfaction for each group

member of Adam’s group. This is initially detailed just for Adam, but the results for each individual

group member are given. Therefore, consider Adam’s personal preferences as per Table 10.4 below.


Tourist attraction Rating

Apartheid Museum 5.00

Orlando Towers 4.50

Gold Reef City 4.00

Union Buildings 4.00

Botanical Gardens 3.60

Brightwater Commons 3.60

Church Square 3.50

Freedom Park 3.00

Voortrekker Monument 3.00

Johannesburg Zoo 2.50

Planetarium 2.50

Nelson Mandela Square 2.00

Table 10.4: Example application: Adam’s personal recommendation list

Additionally, assume that, in accordance with Masthoff’s (2004) satisfaction function definition, a

rating valuation mapping table is defined as per Table 10.5 below. As a result, each of Adam’s ratings

is mapped to an appropriate rating valuation. In the case where there is not a perfect match, such as

with a rating of 3.60, the nearest applicable rating valuation is chosen.

Rating Valuation: 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0
Mapped Rating Valuation: -25, -16, -9, -4, -1, 1, 4, 9, 16, 25

Table 10.5: Example application: Adam's group rating valuation mapping table

The individual satisfaction for Adam, in accordance with Masthoff’s (2004) individual satisfaction

function is detailed as per Calculation 10.1 below.


Therefore, based upon Masthoff’s (2004) individual satisfaction function, Adam has a final calculated

satisfaction measure of 0.76.

Assume that a similar process is applied for the rest of the group. This results in the following

individual satisfaction measures for each group member: Ben has a satisfaction measure of 0.78,

Craig has a satisfaction measure of 0.90, Darren has a satisfaction measure of 0.91, and Ed has a

satisfaction measure of 0.78.

10.4.2 Group satisfaction

The next satisfaction measure to apply is that of group satisfaction. In this scenario, the satisfaction

measure determined by the system for each group member of Adam’s group is leveraged to calculate

a group satisfaction measure. The process followed to do this is as per Formula 10.6.

In Formula 10.6, it was noted that there were two distinct portions of the algorithm. In the first portion,

the average of the group was calculated. In the second portion, the standard deviation was calculated

for the group. Therefore, for this scenario application, each portion is detailed separately.

The calculation for the average of the group is detailed as per Calculation 10.2 below. Note that for

the purposes of this scenario application, the weighting factors are set to 1.

Calculation 10.1: Masthoff's (2004) individual satisfaction function

The satisfaction, sat, for the user, Adam, in the group, G, is calculated as:

$$\mathrm{sat}(Adam, GRL) = \frac{\sum_{i \in GRL} r_{Adam,i}}{\sum_{i \in IURL} r_{Adam,i}} = \frac{r_{Adam,AM} + r_{Adam,BC} + r_{Adam,GRC}}{\sum_{i \in IURL} r_{Adam,i}} = \frac{5.0 + 3.6 + 4.0}{5.0 + 4.5 + 4.0}$$

which, after applying the rating valuation mapping of Table 10.5, becomes:

$$\mathrm{sat}(Adam, GRL) = \frac{25 + 4 + 9}{25 + 16 + 9} = \frac{38}{50} = 0.76$$

Where AM, BC, and GRC denote the Apartheid Museum, Brightwater Commons, and Gold Reef City respectively.


The calculation for the second part of the group satisfaction function is the determination of the

standard deviation between the average satisfaction for the group and the satisfaction measure of

each individual group member. This portion of the group satisfaction calculation is presented in

Calculation 10.3 below.

The final calculation for the satisfaction of Adam’s group is determined as per Calculation 10.4 below.

Calculation 10.2: Scenario application – Average satisfaction

The average satisfaction for Adam's group, G, with a generated group recommendation list, GRL, is calculated as:

$$\frac{\sum_{u \in G} \mathrm{sat}(u, GRL)}{|G|} = \frac{0.76 + 0.78 + 0.90 + 0.91 + 0.78}{5} = \frac{4.13}{5} \approx 0.83$$

Calculation 10.3: Scenario application – Standard deviation

The standard deviation for Adam's group, G, with a generated group recommendation list, GRL, is calculated as:

$$\sqrt{\frac{\sum_{u \in G} \left(\mathrm{sat}(u, GRL) - \overline{\mathrm{sat}}(G, GRL)\right)^2}{|G|}} = \sqrt{\frac{(0.76 - 0.83)^2 + (0.78 - 0.83)^2 + (0.90 - 0.83)^2 + (0.91 - 0.83)^2 + (0.78 - 0.83)^2}{5}}$$

$$= \sqrt{\frac{0.0049 + 0.0025 + 0.0049 + 0.0064 + 0.0025}{5}} = \sqrt{0.00424} \approx 0.065$$


Therefore, the final satisfaction level for Adam’s group is 0.765.

10.5 Conclusion

In this chapter, the main focus and topic of discussion was that of satisfaction and its implementation

from both an individual and group perspective. The conclusion of this chapter was that Masthoff’s

(2004) individual satisfaction would be used to measure individual satisfaction and that Quijano-

Sanchez et al.’s (2013) group satisfaction measure would be used to measure group satisfaction in

the proposed group recommender model.

Up to this point, all of the background group recommendation processes necessary for this research

have been discussed and motivated. As a result, the next section presents Part II of this research by

formally defining the group recommender model to be implemented. Therefore, the next section

introduces PerTrust, a personality and trust-based group recommender model.

Calculation 10.4: Scenario application – Group satisfaction

The satisfaction, sat, for Adam's group, G, with a generated group recommendation list, GRL, is calculated (with the weighting factors $\alpha = \beta = 1$) as:

$$\mathrm{sat}(G, GRL) = \alpha \cdot \frac{\sum_{u \in G} \mathrm{sat}(u, GRL)}{|G|} - \beta \cdot \sqrt{\frac{\sum_{u \in G} \left(\mathrm{sat}(u, GRL) - \overline{\mathrm{sat}}(G, GRL)\right)^2}{|G|}} = 0.83 - 0.065 = 0.765$$


Part II The PerTrust model

Chapters 11-15


Chapter 11 Introducing PerTrust – a personality and trust-based group recommender model


11.1 Introduction

In the research presented to this point, the various processes and sub-processes which enable group

recommendation have been presented and discussed. The main focus of this research is to highlight

the relevance, applicability, and implementation of both personality and trust in the group

recommendation process. Briefly, the main observations of the research so far are presented below.

Trust. Chapters 4, 5, and 6, detailed a trust implementation framework for this research. The

purpose of this trust implementation framework was to detail and nominate a trust calculation

and trust-based recommendation algorithm for the proposed group recommender model.

After an assessment of a number of trust-based algorithms, with reference to a defined set of

requirements as well as an empirical evaluation of these algorithms, it was concluded that the

MoleTrust algorithm would be used as the trust calculation algorithm and that the

EnsembleTrustCF algorithm would be used as the trust-based recommendation algorithm.

Preference elicitation. Chapter 7 detailed the preference elicitation process of group

recommendation for the proposed group recommender model. The chapter illustrated how

both similarity and trust would be used to determine a set of top-N recommendations for each

user.

Rating prediction. Chapter 8 detailed how trust and personality could be implemented as

part of the group recommendation process for the proposed group recommender model.

Specifically, a personality framework was defined through a motivation of the approach

adopted by Quijano-Sanchez et al. (2013). In this approach, the Thomas-Kilman personality

test is used to derive a personality type with a numeric representation of this personality type

also defined: the conflict mode weight (CMW) value. Thereafter, the selection of the

delegation-based rating prediction algorithm was motivated for this research through an

empirical evaluation. This algorithm, created by Quijano-Sanchez et al. (2013), implements

both trust and personality in determining a predicted rating for a recommendation item.

Aggregation. Chapter 9 detailed the aggregation process of group recommendation for the

proposed group recommender model. In this chapter, a number of aggregation models were

discussed and illustrated. The conclusion of the chapter was that an evaluation would have to

be done to determine a suitable aggregation algorithm for this research. This evaluation is to

be done in the next chapter.

Satisfaction. Chapter 10 detailed the measurement of satisfaction for both individuals and

groups for the proposed group recommender model. The conclusion of this chapter was that

Masthoff’s (2004) individual satisfaction would be used to calculate individual satisfaction and

that Quijano-Sanchez et al.’s (2013) group satisfaction measurement would be used to

determine group satisfaction.

Part two of this research now sets out to design an online group recommender system which

implements both personality and trust. The model presented is the PerTrust model. The name of this

model is derived from the implementation of personality and trust as the basis for group


recommendation. The next chapters detail the PerTrust model. This begins in the current chapter

where a high level overview of the components in the PerTrust model is given. Thereafter, the

following chapter provides a formal definition of the PerTrust model by detailing each component

within the model. Next, the PerTrust model is evaluated to determine the model’s viability for group

recommendation. The PerTrust model definition then concludes by presenting a prototype

implementation of the model.

For the purposes of a high level overview of the PerTrust model, this chapter is structured as follows.

In the first section, the PerTrust architecture is introduced. The second section then details the

components within the PerTrust architecture. The third section concludes the chapter.

11.2 PerTrust architecture

The purpose of the PerTrust model is to provide a group recommendation implementation which

considers the personalities of each group member as well as the interpersonal trust relationships

between group members. The consequent architecture of the PerTrust model is presented in Figure

11.1 below.

Figure 11.1: PerTrust architecture for group recommendation (diagram showing the group and group administrator, the client web browser and mobile device, the web services layer, the registration, preference elicitation, and aggregation components with their subcomponents, and the database component)

As per Figure 11.1, the PerTrust architecture for group recommendation is defined by four main

components.

Group component. This component manages the formation and management of the group.


Client component. This component is a web browser which acts as an interface between the

group component and the group recommendation component.

Group recommendation component. This component implements and executes the

process of group recommendation. The eventual output of this component is the group

recommendation.

Database component. This component acts as the storage component for all relevant group

data, user data, social data, trust data, historical data, and recommendation item data.

Within the group recommendation process, there is much interaction as well as a dependency

between the group recommendation component and the database component.

There are a number of subcomponents contained within the group recommendation component,

namely the registration component, the preference elicitation component, and the aggregation

component. Each component representation in Figure 11.1 contains a number of subcomponents.

Additionally, the figure shows how the components are managed and called by the web services

layer. Therefore, the purpose of presenting the PerTrust architecture is to not only present the major

components, but to also give an indication as to how they flow and connect to each other.

Now that each of the components comprising the PerTrust architecture has been presented at a high

level, a more detailed analysis of each individual component can begin. This discussion occurs in the

next section.

11.3 PerTrust system components

In this section, each of the four main PerTrust system components is examined in greater detail,

namely the group, client, and group recommendation components with specific emphasis on the

registration component, the preference elicitation component and the aggregation component. In the

last subsection, the database component is explained.

11.3.1 The group component

The group component in the PerTrust architecture revolves around two key aspects.

Group formation. In the PerTrust architecture, groups are formed explicitly. In this process, a

group administrator explicitly adds one or more registered users to form a group. This final

result of this process is a list of group members.

Group administrator. In the PerTrust architecture, the group administrator manages the

process of group recommendation. It is their responsibility to explicitly form a group, initiate

the group recommendation process, and relay the calculated group recommendation back to

the group.


11.3.2 The client component

The client component in the PerTrust architecture is a web browser. This is because the PerTrust

system is an online group recommender system which is device and platform independent. This

ensures that the PerTrust system can be viewed and used with multiple different devices such as

laptops, tablets, and smart phones.

Because the client component is a thin client, most of the group recommendation processes are

calculated and determined within the group recommendation component. This means that the

browser acts as the presentation layer between the group component and the group recommendation

component. Therefore, once the group recommendation component has completed the group

recommendation, the group recommendation results are returned to the web browser. The web

browser then parses and formats the results and presents them to the group administrator and other

group members in a user friendly format.

11.3.3 The group recommendation component

The group recommendation component in the PerTrust architecture is where the majority of the group

recommendation processes discussed in previous chapters are executed. The user registration,

preference elicitation, and aggregation group recommendation processes occur in this component.

From a PerTrust architecture perspective, each of these processes is presented as individual

subcomponents. This ensures that there is a separation of concerns and loose coupling between

them. Additionally, it is noted that each of these web services is managed by the web services layer.

The purpose of this layer is to manage the flow of the group recommendation process and determine

when the relevant process and consequent components should be called.

This section is divided into three separate subsections. Each of these subsections detail the individual

subcomponents identified within the group recommendation component: the registration component,

the preference elicitation component and finally the aggregation component.

a) The registration component

The registration component obtains explicit information from a new system user so that their

individual preferences can be considered when determining a group recommendation. The

information obtained from a system user is divided into four main categories: basic

information, personality information, social relations information, and rating history

information. The capturing of each information type results in the formation of a complete

system user profile.


There are four corresponding registration components to manage basic information, personality information, social relations information, and rating history information. Each of these components is briefly detailed in the sections following, after which an illustrative sketch of the resulting user profile is given.

Basic information component. The purpose of the basic information component is

to manage the capturing, validation, and processing of basic information for a new

system user. Such basic information includes a system user’s first name, last name,

username, and password.

Personality information component. The purpose of the personality information

component is to manage the capturing and processing of personality information, via

the Thomas-Kilman Instrument (TKI) personality test, for a new system user.

Social relations information component. The purpose of the social relations

information component is to manage the provisioning, capturing, and processing of a

new system user’s social relations information. This social relations information

comprises of the explicit trust statements attributed by the system user to other

system users.

Rating history information component. The purpose of the rating history

information component is to manage the capturing and processing of a new system

user’s rating history information. This rating history contains all of the explicit rating

scores attributed by the system user to multiple recommendation items.
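As noted above, a minimal sketch of the profile that the registration component could assemble from these four information categories is given below; the class and field names are illustrative assumptions rather than the actual PerTrust implementation.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Basic information
    first_name: str
    last_name: str
    username: str
    password_hash: str
    # Personality information: TKI selection totals per conflict-handling mode
    tki_scores: dict = field(default_factory=dict)         # e.g. {"competing": 9, ...}
    # Social relations information: explicit trust statements (0-10) towards other users
    trust_statements: dict = field(default_factory=dict)   # e.g. {"other_username": 7, ...}
    # Rating history information: explicit ratings for recommendation items
    ratings: dict = field(default_factory=dict)            # e.g. {"Apartheid Museum": 5.0, ...}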

b) The preference elicitation component

The preference elicitation component determines a top-N list of individual and personalised recommendations for each member of the group using four components, which are sketched in code after the component descriptions below.

Registered user retrieval component. The registered user retrieval component

provides a group administrator with a list of registered users so that they can explicitly

select which registered users will form a part of their group.

Similar and trusted user identification component. The similar and trusted user

identification component determines those system users who have been explicitly or

implicitly trusted as well as those system users who have been calculated to be

similar. This list of users is thereafter passed on to the recommendation retrieval

component so that a list of recommendations can be determined for each group

member.

Recommendation retrieval component. The recommendation retrieval component

retrieves a list of personalised recommendation items for each group member based

upon the list of similar and trusted system users identified in the similar and trusted

user identification component. Once determined, these lists of recommendation items

are then passed on to the top-N recommendation component so that a top-N list can

finally be output.

Top-N recommendation component. The top-N recommendation component

determines a personalised top-N list of recommendations for each group member.


This is the final component of the preference elicitation process and takes as input

the previously identified list of recommendation items determined for each group

member in the recommendation retrieval component.
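A simplified sketch of how these four subcomponents could be chained is shown below. Every function name is a placeholder, the neighbour selection ignores similarity and implicit trust, and the averaging step is a simple stand-in for the trust-based recommendation algorithm rather than EnsembleTrustCF itself.

def find_neighbours(member, trust, trust_threshold=5):
    # Similar and trusted user identification: here reduced to the users the
    # member explicitly trusts above the trust threshold.
    return [user for user, value in trust.get(member, {}).items() if value > trust_threshold]

def retrieve_recommendations(member, neighbours, ratings):
    # Recommendation retrieval: average the neighbours' ratings for items the
    # member has not yet rated (an illustrative stand-in only).
    scores = {}
    for neighbour in neighbours:
        for item, rating in ratings.get(neighbour, {}).items():
            if item not in ratings.get(member, {}):
                scores.setdefault(item, []).append(rating)
    return {item: sum(values) / len(values) for item, values in scores.items()}

def preference_elicitation(group, trust, ratings, n=3):
    # Top-N recommendation: a personalised top-N list for each group member.
    top_n = {}
    for member in group:
        neighbours = find_neighbours(member, trust)
        candidates = retrieve_recommendations(member, neighbours, ratings)
        top_n[member] = sorted(candidates, key=candidates.get, reverse=True)[:n]
    return top_n

# Example with one group member:
trust = {"Adam": {"Ben": 8, "Craig": 4}}
ratings = {"Adam": {"Gold Reef City": 4.0},
           "Ben": {"Apartheid Museum": 5.0, "Planetarium": 3.0}}
print(preference_elicitation(["Adam"], trust, ratings))
# -> {'Adam': ['Apartheid Museum', 'Planetarium']}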

c) The aggregation component

The aggregation component has two main purposes. The first purpose is to determine a final

top-N list of recommendations for the group as a whole. The second purpose of the

aggregation component is to determine the satisfaction levels of each group member as well

as the group collectively.

Within the aggregation component, there are four components, detailed next and illustrated with a brief code sketch thereafter.

Rating matrix formation component. The purpose of the rating matrix formation

component within the aggregation component is to form a rating matrix by making

use of the top-N recommendation lists determined for each group member in the

preference elicitation component. The final rating matrix contains a list of each

recommendation item returned for each group member with either an explicit or

calculated rating score attached. This rating matrix is thereafter passed on to the

personality and trust influence component.

Personality and trust influence component. The purpose of the personality and

trust influence component is to take the fully populated rating matrix formed in the

rating matrix formation component and to weight each rating score in the matrix with

the personality and trust factor scores of every other group member. In the PerTrust

architecture, the personality factor is the CMW value of the particular other group

member and the trust factor is the explicit or implicit trust value or the similarity value

calculated for that other group member. Once the trust and personality weighting has

been applied to each rating, the aggregation model component is called.

Aggregation model component. The purpose of the aggregation model component

is to take the personality and trust affected rating matrix formed in the previous

component and aggregate this into a single top-N list of recommendations for the

group as a whole. This top-N list of group recommendations is considered to be the

final group recommendation. Once determined, this list of top-N group

recommendations is returned to the web services layer.

Satisfaction calculation component. The purpose of the autonomous satisfaction

calculation component is to determine how satisfied both the group and the individual

group members are with the calculated top-N group recommendation. The final

calculated satisfaction scores are returned to the web services layer.
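The sketch below illustrates how the first three aggregation subcomponents could fit together for the average aggregation strategy. The weighting scheme is a deliberate simplification of the personality and trust influence step, not the delegation-based algorithm of Quijano-Sanchez et al. (2013), and all names and values are assumptions for illustration. The satisfaction calculation component would then be applied to the returned list as described in Chapter 10.

def aggregate_group_recommendation(rating_matrix, cmw, trust, n=3):
    # rating_matrix: {member: {item: rating}} built by the rating matrix formation
    # component from the group members' top-N lists.
    # cmw: {member: conflict mode weight}; trust: {member: {member: value in [0, 10]}}.
    # Personality and trust influence: weight each member's ratings by the other
    # members' CMW values and normalised trust in that member (simplified).
    influenced = {}
    for member, item_ratings in rating_matrix.items():
        others = [other for other in rating_matrix if other != member]
        influence = sum(cmw[other] * trust.get(other, {}).get(member, 0) / 10
                        for other in others) / max(len(others), 1)
        for item, rating in item_ratings.items():
            influenced.setdefault(item, []).append(rating * (1 + influence))
    # Aggregation model: the average strategy over the influenced ratings.
    group_scores = {item: sum(values) / len(values) for item, values in influenced.items()}
    return sorted(group_scores, key=group_scores.get, reverse=True)[:n]

matrix = {"Adam": {"Gold Reef City": 4.0, "Planetarium": 2.5},
          "Ben": {"Gold Reef City": 3.0, "Planetarium": 4.5}}
print(aggregate_group_recommendation(matrix, {"Adam": 0.6, "Ben": 0.4},
                                     {"Adam": {"Ben": 7}, "Ben": {"Adam": 8}}, n=1))
# -> ['Planetarium']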

11.3.4 The database component

The database component in the PerTrust architecture has a number of purposes.


The database component acts as a means of persisting all of the necessary static and

dynamic data in the system. Static data would comprise data such as the weightings for a system user's CMW value. Dynamic data would comprise data such as system users and each system user's rating history and personality results.

The database component provides the system with a means of being queried, so that information can be retrieved and saved and the various processes exposed by the group recommendation component can be completed successfully.

11.4 Conclusion

In this chapter, the PerTrust architecture was introduced and presented with a discussion taking place

at both a high level as well as at a more specific level. With reference to the specificities of the

PerTrust architecture, the main components discussed were the group component, the client

component, the group recommendation component, and the database component.

In the next chapter, the discussion of the PerTrust architecture continues as the implementation of

this architecture is formally detailed in the PerTrust model. This chapter will detail, more precisely,

how the components within the architecture are implemented on a practical and algorithmic level.


Chapter 12 The PerTrust model


12.1 Introduction

In the previous chapter, the architecture of the PerTrust model was formally presented and each of

the main components were analysed and discussed. In this chapter, the details of the PerTrust model

are introduced. The purpose of this chapter is to present each of the core components within the

group recommendation component of the PerTrust model and to formally define them. For the

purposes of reference, the architecture diagram presented in the previous chapter is listed below in

Figure 12.1.

Figure 12.1: PerTrust model for group recommendation (the architecture diagram of Figure 11.1, repeated here for reference)

This chapter is structured as follows. The first section details the registration component. Thereafter,

the preference elicitation component is defined. Next, the aggregation component is presented.

Finally, the chapter is concluded.

12.2 Registration component

The registration component of the PerTrust architecture obtains explicit information from a new

system user so that their individual preferences can be considered when determining a group

recommendation. The information obtained from a system user is divided into four main

subcomponents.

1. Basic information component

2. Personality information component


3. Social relations information component

4. Rating history information component

Each of these registration components are defined in this section.

12.2.1 Basic information component

With reference to the architecture, it is in this component where a new user’s basic information is

captured, validated, and processed. This basic information comprises information such as a user's

first name, last name, username, and password. The completion of this process in the PerTrust model

results in the creation of a new system user. Therefore, the user component is formally defined below.

User

For the purposes of this model, a user is represented by the variable, u, formally defined in Model

Definition 1 below.

In the PerTrust model, there are multiple users within the PerTrust system. In this case, multiple users

are represented in the PerTrust model by the user set, defined in Model Definition 2.

12.2.2 Personality information component

The personality information component comprises a system user's personality information, as

determined by the results of the Thomas-Kilman Instrument (TKI) test. This information is captured

Model Definition 1: User

Within the PerTrust model, a valid user who has been registered on the system is defined by

the variable, u.

Model Definition 2: User set

Within the PerTrust model, one or more users are defined by the set of all registered users within the system. This is represented as follows:

$$\{u_a, u_b, \ldots, u_{UN}\}$$

Variable Definition 12.2.1: Total number of users

Within the PerTrust model, the set of total users is bound by the total number of users

registered on the system. The variable used to denote the total number of users registered on

the system is UN.


upon registration. Consequently, all relevant model definitions related to the personality information

component are defined.

Personality

In the PerTrust model, personality is limited to how a user behaves in a conflict scenario. The means

of determining a personality type within the PerTrust model follows the implementation adopted by

Quijano-Sanchez et al. (2013) and Recio-Garcia et al. (2009). In this approach, the Thomas-Kilman

Instrument (TKI) test is adopted as the means of determining one of five possible personality types for

a user: competing, collaborating, compromising, avoiding, and accommodating.

Once a user has completed the TKI test, the manner in which the test has been answered reflects a

most and least dominant personality type. This can be determined since each question within the TKI test increments the score of one of the five main personality types. Thereafter, based

upon the total score attributed to each personality type as well as a set of predefined threshold values

for each personality type, the dominant and least dominant personality types can be calculated.

The predefined threshold values used to determine a personality type are taken from the TKI test, as these are the real-life threshold values used to output the relevant personality type for a user.

Therefore, if the total score of a personality type exceeds a predefined threshold value, then this

personality type is considered to be a dominant personality type for the user. Conversely, if the total

score of a personality type is less than a predefined threshold value, then this personality type is

considered to be a least dominant personality type for the user.

The PerTrust model makes use of this same implementation in determining a dominant and least

dominant personality type for a user. This approach with the relevant threshold values is formally

defined in Model Definition 3 below.


Model Definition 3: Conflict personality type

Within the PerTrust model, a user, u's, dominant personality type is defined by the variable, $u_{dpt}$, whilst a user, u's, least dominant personality type is defined by the variable, $u_{ldpt}$. Both of these variables are defined as sets, since each variable can contain more than one personality type. The manner in which each personality type is determined to be dominant or least dominant is defined in Model Definitions 3.1, 3.2, 3.3, 3.4, and 3.5 below.

Model Definition 3.1: Competing personality type definition

Given a total number of selections for the competing personality type, represented by the variable, $t_{\mathrm{competing}}$, the competing personality type for a user, u, is considered dominant or least dominant, respectively, if the following conditions are met:

$$u_{dpt} = \{\mathrm{competing}\} \quad \text{if } t_{\mathrm{competing}} \geq 8$$

$$u_{ldpt} = \{\mathrm{competing}\} \quad \text{if } t_{\mathrm{competing}} \leq 3$$

Model Definition 3.2: Collaborating personality type definition

Given a total number of selections for the collaborating personality type, represented by the variable, $t_{\mathrm{collaborating}}$, the collaborating personality type for a user, u, is considered dominant or least dominant, respectively, if the following conditions are met:

$$u_{dpt} = \{\mathrm{collaborating}\} \quad \text{if } t_{\mathrm{collaborating}} \geq 9$$

$$u_{ldpt} = \{\mathrm{collaborating}\} \quad \text{if } t_{\mathrm{collaborating}} \leq 5$$

Model Definition 3.3: Compromising personality type definition

Given a total number of selections for the compromising personality type, represented by the variable, $t_{\mathrm{compromising}}$, the compromising personality type for a user, u, is considered dominant or least dominant, respectively, if the following conditions are met:

$$u_{dpt} = \{\mathrm{compromising}\} \quad \text{if } t_{\mathrm{compromising}} \geq 8$$

$$u_{ldpt} = \{\mathrm{compromising}\} \quad \text{if } t_{\mathrm{compromising}} \leq 4$$
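Model Definitions 3.1 to 3.5 can be read as a small table of threshold values (the avoiding and accommodating thresholds appear in the continuation of Model Definition 3 below). The Python sketch below applies them to a user's TKI selection totals; the dictionary layout and the example scores are assumptions made purely for illustration.

# Dominant / least dominant thresholds per Model Definitions 3.1 - 3.5.
TKI_THRESHOLDS = {
    "competing":     {"dominant": 8, "least_dominant": 3},
    "collaborating": {"dominant": 9, "least_dominant": 5},
    "compromising":  {"dominant": 8, "least_dominant": 4},
    "avoiding":      {"dominant": 8, "least_dominant": 4},
    "accommodating": {"dominant": 6, "least_dominant": 3},
}

def personality_types(tki_scores):
    # Return (u_dpt, u_ldpt): the sets of dominant and least dominant conflict
    # personality types for a user's TKI selection totals.
    dominant, least_dominant = set(), set()
    for mode, score in tki_scores.items():
        if score >= TKI_THRESHOLDS[mode]["dominant"]:
            dominant.add(mode)
        if score <= TKI_THRESHOLDS[mode]["least_dominant"]:
            least_dominant.add(mode)
    return dominant, least_dominant

# A user who mostly competes and rarely accommodates:
print(personality_types({"competing": 9, "collaborating": 7, "compromising": 5,
                         "avoiding": 6, "accommodating": 3}))
# -> ({'competing'}, {'accommodating'})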


12.2.3 Social relations information component

The social relations information component manages a system user’s social relations information

during the registration process. In the PerTrust model, the social relations information comprises of

the explicit trust statements attributed by the newly registered system user to other system users.

Therefore, all model definitions related to the social relations component are defined. This includes a

formal model definition for trust as well as a system user’s trust network.

Trust

In the PerTrust model, trust is a foundational component which has been extensively discussed and

analysed within this research. The formal definition put forward in this research is that trust is defined

as the commitment by a user A to a specific action, within a specific context, in the subjective belief

that the future actions undertaken by another user B will result in a good outcome, while accepting that negative consequences may otherwise occur. This definition is presented formally for the

PerTrust model in Model Definition 4 below.

Model Definition 3: Conflict personality type (Continued)

Model Definition 3.4: Avoiding personality type definition

Given a total number of selections for the avoiding personality type, represented by the variable, $t_{\mathrm{avoiding}}$, the avoiding personality type for a user, u, is considered dominant or least dominant, respectively, if the following conditions are met:

$$u_{dpt} = \{\mathrm{avoiding}\} \quad \text{if } t_{\mathrm{avoiding}} \geq 8$$

$$u_{ldpt} = \{\mathrm{avoiding}\} \quad \text{if } t_{\mathrm{avoiding}} \leq 4$$

Model Definition 3.5: Accommodating personality type definition

Given a total number of selections for the accommodating personality type, represented by the variable, $t_{\mathrm{accommodating}}$, the accommodating personality type for a user, u, is considered dominant or least dominant, respectively, if the following conditions are met:

$$u_{dpt} = \{\mathrm{accommodating}\} \quad \text{if } t_{\mathrm{accommodating}} \geq 6$$

$$u_{ldpt} = \{\mathrm{accommodating}\} \quad \text{if } t_{\mathrm{accommodating}} \leq 3$$


Trust network

In the PerTrust model, multiple trust valuations are expressed by a single source user towards

multiple target users. When such a scenario occurs within the model, a trust network is formed.

Formally, a trust network is defined as the structural representation of one or more trust valuations

expressed by a source user towards one or more target users. In the PerTrust model, the trust

network formed is more specifically defined as an egocentric, directed, and interval measured trust

network.

Model Definition 4: Trust

Within the PerTrust model, the trust valuation attributed by a user, $u_a$, to a particular user, $u_b$, is represented by the variable, $t_{a,b}$.

Variable Definition 4.1: Trust boundary definition

Within the PerTrust model, the trust valuation variable is an integer value bound between 0 and 10. A trust valuation of 0 reflects that a user, $u_b$, is not known by a user, $u_a$. A low trust valuation reflects that a user, $u_b$, is trusted very little by a user, $u_a$. A high trust valuation reflects that a user, $u_b$, is trusted completely by a user, $u_a$. This is formally defined in the PerTrust model by the following boundary definition:

$$t_{a,b} \in [0, 10]$$

Variable Definition 4.2: Explicit and implicit trust

Within the PerTrust model, it is important to note that the trust valuation definition does not distinguish between explicit and inferred trust. Explicit trust is a trust valuation measure explicitly attributed by a user, $u_a$, to a user, $u_b$, based upon a relationship between them. Conversely, an implicit trust valuation measure is one which is calculated by the PerTrust model based upon the explicit trust valuations expressed by a user, $u_a$. The means of deriving implicit trust in the PerTrust model is presented in further sections.

Variable Definition 4.3: Source and target user

Within the PerTrust model, both user, $u_a$, and user, $u_b$, have their own role definitions. In this model, user, $u_a$, is defined as the source user and user, $u_b$, is defined as the target user. The source user is the user who attributes a trust value to another user, or the user for whom a rating is being calculated by the system. The target user is the user to whom a trust valuation is attributed by the source user. This trust valuation could either be explicitly attributed by the source user or implicitly determined by the PerTrust model.


Firstly, the trust network is egocentric since it focuses on the trust valuation expressions from a single

user. Secondly, the trust network is directed since all trust valuation expressions are unidirectional in

that they all flow outward from the source user to one or more target users. Lastly, the trust network is

interval measured since the link between each source user and target user is represented by a trust

valuation ranging from zero to ten. The formal definition for a trust network within the PerTrust model

is given in Model Definition 5 below.

In the PerTrust model, there are two threshold values for the trust network which need to be

additionally defined. These two threshold values are the trust valuation threshold and the trust horizon

threshold. The purpose of these two threshold values is to ensure that the trust network for a specific

user only includes the most trusted system users and that the trust valuations for each edge within the

trust network are as high and as reliable as possible. Both of these threshold values are formally

defined in Model Definition 6 below.

Model Definition 5: Trust network

The trust network formed within the PerTrust model for a source user, $u_a$, can be formally defined as a graph, represented by the variable, $TN(u_a)$, with a defined set of vertices, represented by the variable, V, as well as a defined set of edges, represented by the variable, E. This is formally defined in the equation below:

$$TN(u_a) = (V, E)$$

Variable Definition 5.1: Vertices

Within the PerTrust model, the set of vertices is defined as the set of all users within the trust network. In this model, therefore, this includes the source user as well as all target users to whom a trust valuation has been attributed by the source user. Formally, this can be defined as follows. For a source user, $u_a$, and a set of target users, $\{u_b, \ldots, u_{UN}\}$, the set of vertices, V, for a trust network, $TN(u_a)$, is defined as:

$$V = \{u_a, u_b, \ldots, u_{UN}\}$$

Variable Definition 5.2: Edges

Within the PerTrust model, the set of edges, E, for a trust network, $TN(u_a)$, is defined as the set of all trust valuations expressed by a source user, $u_a$, towards a set of target users, $\{u_b, \ldots, u_{UN}\}$. It is noted that each target user in this set must have been attributed a trust valuation greater than 0. This definition is formally defined below:

$$E = \{t_{a,b}, t_{a,c}, \ldots, t_{a,UN}\}, \quad \text{where } t_{a,b}, t_{a,c}, \ldots, t_{a,UN} > 0$$


12.2.4 Rating history information component

The rating history component manages a newly registered user’s rating history. This rating history

comprises all the explicit rating scores attributed by the system user to multiple recommendation

items. Therefore, all model definitions related to the rating history component are given below.

Model Definition 6: Trust threshold definitions

𝐸 = 𝑡𝑎,𝑐 , 𝑡𝑎,𝑑 , … , 𝑡𝑎,𝑈𝑁

𝑡𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 = 5

𝑡ℎ𝑜𝑟𝑖𝑧𝑜𝑛 = 2

In the PerTrust model, given a source user, 𝑢𝑎 , and their associated trust network, 𝑇𝑁 𝑢𝑎 , two

threshold values are to be defined. The first is the trust valuation threshold, represented by the

variable, 𝑡𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 . The second threshold value is the trust horizon, which is represented by the

variable, 𝑡ℎ𝑜𝑟𝑖𝑧𝑜𝑛 . Each of these threshold values are defined in Variable Definitions 6.1 and 6.2

respectively.

Variable Definition 6.1: Trust valuation threshold definition

In the PerTrust model, the purpose of the trust valuation threshold value is to ensure that only

those users which exceed a specified trust threshold valuation are included in the trust

network for a user. Therefore, given a user, 𝑢𝑎 , all edges, E, within the trust network, 𝑇𝑁 𝑢𝑎 ,

must exceed the trust threshold value, 𝑡𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 . This is formally defined below:

Where 𝑡𝑎,𝑏 , 𝑡𝑎,𝑐 , … , 𝑡𝑎,𝑈𝑁 > 𝑡𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑

In the PerTrust model, the trust threshold is set to a value of 5. Formally, therefore:

Variable Definition 6.2: Trust horizon threshold definition

In the PerTrust model, the purpose of the trust horizon threshold value is to limit how far away

from the source user, 𝑢𝑎 , a trust valuation can be implicitly calculated. The further away from

the source user this occurs, the less quality the trust valuation is. Therefore, within the

PerTrust model, the trust horizon threshold valuation is set to a value of 2. Formally, this is

defined as per the below:
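As an illustration of Model Definitions 5 and 6, the following is a minimal Python sketch of how an egocentric, directed trust network with a trust valuation threshold could be represented. The data structures, values, and function names are illustrative only and do not form part of the formal model.

# Minimal sketch of an egocentric trust network (Model Definitions 5 and 6).
# Trust valuations are integers in [0, 10]; only edges above t_threshold are kept.

T_THRESHOLD = 5   # trust valuation threshold (Model Definition 6)
T_HORIZON = 2     # trust horizon threshold (Model Definition 6)

# Explicit trust valuations expressed by each source user (illustrative data).
explicit_trust = {
    "ua": {"ub": 8, "uc": 6, "ud": 3},   # ud falls below the threshold
    "ub": {"ue": 9},
}

def trust_network(source, trust_statements, threshold=T_THRESHOLD):
    """Return the vertices V and edges E of the source user's trust network,
    keeping only target users whose trust valuation exceeds the threshold."""
    edges = {target: value
             for target, value in trust_statements.get(source, {}).items()
             if value > threshold}
    vertices = {source} | set(edges)
    return vertices, edges

if __name__ == "__main__":
    V, E = trust_network("ua", explicit_trust)
    print(V)  # {'ua', 'ub', 'uc'}
    print(E)  # {'ub': 8, 'uc': 6}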


Item

Formally, an item can be defined as an event or object which can either be experienced or engaged

with. The definition for an item within the PerTrust model is formally defined in Model Definition 7

below.

As with users, it is also possible for multiple items to occur within the PerTrust model. In such cases,

the total number of items is defined by a set as per Model Definition 8 below.

Rating

A rating is defined as a numeric valuation measure attributed by a user for an item based on their

personal experience or engagement with that specific item. Therefore, a low rating valuation of an

item by a user reflects a poor experience or engagement, whilst a high rating valuation reflects a good

experience or engagement.

The formal definition for a rating valuation within the PerTrust model is listed in Model Definition 9

below.

Model Definition 7: Item

Within the PerTrust model, a valid item is defined as an event or object a user, u, can

experience or engage with. A valid item is represented by the variable, i.

Model Definition 8: Item set

Within the PerTrust model, one or more items are defined by the set of all valid items within the recommender system, represented as follows:

$\{i_0, i_1, \dots, i_{IN}\}$

Variable Definition 8.1: Total number of items

Within the PerTrust model, the set of total items is bound by the total number of items

contained in the system. The variable used to denote the total number of items registered on

the system is IN.


In the PerTrust model, multiple ratings can occur within the system. This can occur in two possible

cases and hence, two possible rating set definitions are given. In the first case, a rating set can be

defined by a single user who has rated multiple items. In the PerTrust model, this case is defined as a

user’s rating history since it entails all the ratings submitted for a number of items by a single user.

This case is formally defined in Model Definition 10 below.

The second case in which multiple ratings can occur within the PerTrust model is whereby multiple

users submit a rating for a single item. This case is defined as an item rating set since it entails all the

ratings submitted by multiple users for a single item. This case is formally defined in Model Definition

11 below.

Model Definition 9: Rating valuation

Within the PerTrust model, the rating valuation contributed by a user, u, for an item, i, is represented by the variable $r_{u,i}$.

Variable Definition 9.1: Rating boundary definition

Within the PerTrust model, the rating valuation variable, $r_{u,i}$, is an integer value bound between 0 and 5. A rating of 0 reflects that the user has not yet experienced or engaged with the particular item, whilst a rating of 5 indicates that the user had the best possible experience or engagement with that item. This is formally defined in the PerTrust model by the following boundary definition:

$r_{u,i} \in [0, 5]$

Model Definition 10: User rating history set

Within the PerTrust model, the rating history for a single user, u, represented by the variable $u_{rh}$, is defined as the set of ratings attributed by the specific user to one or more items. This is defined by the following set:

$u_{rh} = \{r_{u,i_0}, r_{u,i_1}, \dots, r_{u,i_{IN}}\}$

Variable Definition 10.1: No items rated by user

It is noted in the PerTrust model that it is possible for a user's rating history set to be an empty set with no items rated. This could occur for new users of the system. In this case, the user's rating history is defined as an empty set, represented as follows:

$u_{rh} = \{\}$


In the PerTrust model, another common rating consideration is that of the average of all ratings for a

specific user. This average calculation is particularly relevant when a rating prediction is being

calculated for a user. The formal definition for the rating average is given in Model Definition 12

below.

Model Definition 11: Item rating history set

Within the PerTrust model, the rating history for a single item, i, represented by the variable $i_{rh}$, is defined as the set of ratings attributed by one or more users to a single item. This is defined by the following set:

$i_{rh} = \{r_{u_0,i}, r_{u_1,i}, \dots, r_{u_{UN},i}\}$

Variable Definition 11.1: No ratings attributed to an item

It is noted in the PerTrust model that it is possible for the item rating set to contain no submitted ratings, such as when a new item is added to the system. In such cases, the item rating history set is defined as an empty set as per the definition below:

$i_{rh} = \{\}$


12.3 Preference elicitation component

The preference elicitation component is responsible for the elicitation of a set of top-N preferences for

each system user in a group. With reference to the architecture, this component comprises of four

subcomponents.

1. Registered user retrieval component

2. Similar and trusted user identification component

3. Recommendation retrieval component

4. Top-N recommendation component

Each of these components is defined below.

Model Definition 12: Rating average

Within the PerTrust model, the average of all ratings attributed by a user, u, to one or more items in the user rating history set, $u_{rh}$, is represented by the variable $\bar{r}_u$ and is formally defined as follows:

$\bar{r}_u = \dfrac{\sum_{i \in u_{rh}} r_{u,i}}{|u_{rh}|}$

Variable Definition 12.1: Rating boundary definition

In the PerTrust model, the average of all ratings attributed by a user, u, is bound between 0 and 5. These boundary values follow the same interpretation as the ones presented in Variable Definition 9.1. Therefore, this boundary definition can be formally defined as follows:

$\bar{r}_u \in [0, 5]$

Variable Definition 12.2: Empty rating history set

In the PerTrust model, it is possible that a user, u's, rating history set, represented by the variable $u_{rh}$, may be an empty set. In this case, the average of all ratings for user, u, is 0. Therefore, in this scenario, the average of all ratings for user, u, is defined as follows:

$\bar{r}_u = 0$
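To make Model Definitions 10 and 12 concrete, a minimal Python sketch of a user rating history and its rating average follows; the data is illustrative only.

# Minimal sketch of a user rating history (Model Definition 10) and its
# rating average (Model Definition 12). Ratings are integers in [0, 5].

def rating_average(rating_history):
    """Average of all ratings in the user's rating history; 0 for an empty set."""
    if not rating_history:
        return 0.0
    return sum(rating_history.values()) / len(rating_history)

if __name__ == "__main__":
    u_rh = {"i0": 4, "i1": 2, "i2": 5}   # illustrative rating history for user u
    print(rating_average(u_rh))          # 3.666...
    print(rating_average({}))            # 0.0 (Variable Definition 12.2)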


12.3.1 Registered user retrieval component

The registered user retrieval component retrieves a list of all registered system users for the purpose

of forming a group. Therefore, the group model definition for the registered user retrieval component

is given below.

Group

A group in the PerTrust model is formally defined as more than one person coming together over a common goal, purpose, or activity. This is formally defined by Model Definition 13 below.

12.3.2 Similar and trusted user identification component

The similar and trusted user identification component identifies a list of all users calculated to be

similar to and trusted by a specific system user. This is determined for each group member. The

purpose of this component is to use these similar and trusted system users as a basis for determining

a personalised list of top-N recommendations for each group system user.

In the PerTrust model, a list of all similar users is identified by calculating each group member’s

similarity network. The definition for this is given below. Likewise, a list of all trusted users is identified

by calculating each group member’s trust network. It is additionally noted that the trusted users

identified as part of a group member’s trust network may be included based on an implicitly calculated

trust value. As a result, the definition for implicit trust calculation is also listed.

Similarity

In the PerTrust model, similarity, along with trust, is one of the most commonly used and necessary model components. Similarity is defined as a measure of how similar one user is to another user.

Model Definition 13: Group

Within the PerTrust model, a group, represented by the variable G, consists of a set of more than one user. This is formally defined below:

$G = \{u_a, u_b, \dots, u_{UN}\}$

Variable Definition 13.1: Total number of users in a group

Within the PerTrust model, the total number of users within a group, G, must be greater than 1. This extended definition is given below:

$|G| > 1$


In the PerTrust model, similarity is measured by how similar the user rating histories are between two

users. Therefore, the subset of recommendation items rated by both users is compared to see how

similar or dissimilar the attributed ratings are between them. The method by which similarity is

determined within the PerTrust model is through the Pearson correlation coefficient. This algorithm

and its similarity measure are formally defined in Model Definition 14 below.

As with trust, it is often the case in the PerTrust model that a particular source user is calculated to be similar to multiple other target users. When this occurs, a similarity network is formed. The formal

definition for a similarity network in the PerTrust model is given in Model Definition 15 below.

Model Definition 14: Similarity valuation

In the PerTrust model, the similarity measure between a user, u_a, and a user, u_b, is represented by the variable sim(u_a, u_b). Given the set of recommendation items rated by both users, represented by the variable $i_{sim}$, the similarity measure is determined as per the Pearson correlation coefficient below:

$sim(u_a, u_b) = \dfrac{\sum_{i \in i_{sim}} (r_{u_a,i} - \bar{r}_{u_a}) \cdot (r_{u_b,i} - \bar{r}_{u_b})}{\sqrt{\sum_{i \in i_{sim}} (r_{u_a,i} - \bar{r}_{u_a})^2} \cdot \sqrt{\sum_{i \in i_{sim}} (r_{u_b,i} - \bar{r}_{u_b})^2}}$

Variable Definition 14.1: Similarity boundary definition

Within the PerTrust model, the similarity measure is bound between -1 and 1. A score of -1 indicates total dissimilarity between two users, whilst a score of 1 indicates total similarity between users. Formally, the similarity measure for a user, u_a, and a user, u_b, represented by the variable sim(u_a, u_b), is bound as per the following definition:

$sim(u_a, u_b) \in [-1, 1]$

Variable Definition 14.2: Empty rating set

In the PerTrust model, it is possible that the set containing all recommendation items rated by both a user, u_a, and a user, u_b, may be an empty set. In this instance, the similarity measure between these users is 0. Formally, therefore:

$sim(u_a, u_b) = 0$
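The Pearson correlation of Model Definition 14 can be sketched in Python as follows, assuming each user's rating history is held in a simple dictionary. The helper names are illustrative, and the user averages are taken over each user's full rating history, as per Model Definition 12.

from math import sqrt

def pearson_similarity(ratings_a, ratings_b):
    """Pearson correlation over the items rated by both users (Model Definition 14).
    Returns 0 when there are no co-rated items or no rating variance."""
    i_sim = set(ratings_a) & set(ratings_b)              # items rated by both users
    if not i_sim:
        return 0.0                                       # Variable Definition 14.2
    mean_a = sum(ratings_a.values()) / len(ratings_a)    # r̄_ua (Model Definition 12)
    mean_b = sum(ratings_b.values()) / len(ratings_b)    # r̄_ub
    num = sum((ratings_a[i] - mean_a) * (ratings_b[i] - mean_b) for i in i_sim)
    den = sqrt(sum((ratings_a[i] - mean_a) ** 2 for i in i_sim)) * \
          sqrt(sum((ratings_b[i] - mean_b) ** 2 for i in i_sim))
    return num / den if den else 0.0

if __name__ == "__main__":
    ua = {"i0": 5, "i1": 3, "i2": 4}
    ub = {"i0": 4, "i1": 2, "i2": 5, "i3": 1}
    print(round(pearson_similarity(ua, ub), 3))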


Model Definition 15: Similarity network

The similarity network formed within the PerTrust model for a source user, u_a, can be formally defined as a graph, represented by the variable SimN(u_a), with a defined set of vertices, represented by the variable V, as well as a defined set of edges, represented by the variable E. This is formally defined in the equation below:

$SimN(u_a) = (V, E)$

Variable Definition 15.1: Vertices

Within the PerTrust model, the set of vertices is defined as the set of all users within the similarity network. This therefore includes the source user as well as all target users for whom a similarity valuation has been calculated. Formally, for a source user, u_a, and a set of target users, {u_b, ..., u_UN}, the set of vertices, V, for a similarity network, SimN(u_a), is defined as:

$V = \{u_a, u_b, \dots, u_{UN}\}$

Variable Definition 15.2: Edges

Within the PerTrust model, the set of edges, E, for a similarity network, SimN(u_a), is defined as the set of all similarity valuations calculated for a source user, u_a, towards a set of target users, {u_b, ..., u_UN}. This is formally defined below:

$E = \{sim(u_a, u_b), sim(u_a, u_c), \dots, sim(u_a, u_{UN})\}$

Variable Definition 15.3: Similarity threshold definition

In the PerTrust model, the purpose of the similarity threshold value, represented by the variable $sim_{threshold}$, is to ensure that only those users who exceed a specified similarity threshold value are included in the similarity network for user, u_a. As a result, all edges, E, within the similarity network, SimN(u_a), must exceed this similarity threshold value. This is formally defined below:

$E = \{sim(u_a, u_b), sim(u_a, u_c), \dots, sim(u_a, u_{UN})\}$, where $sim(u_a, u_b), sim(u_a, u_c), \dots, sim(u_a, u_{UN}) > sim_{threshold}$

In the PerTrust model, the similarity threshold is set to a value of 0.6. Formally, therefore:

$sim_{threshold} = 0.6$


Implicit trust calculation

In the PerTrust model, trust is a foundational component. However, there are often times within the

model where there is no explicit trust valuation between two users. In such cases, there needs to be a

means by which an implicit valuation of trust can be derived. Without such an implementation being

defined, the model would only be limited to the explicit trust valuations attributed by each user.

In the PerTrust model, the means by which an implicit trust valuation is determined is through the MoleTrust algorithm. This algorithm implementation is formally defined in Model Definition 16 below.

12.3.3 Recommendation retrieval component

The main responsibility of the recommendation retrieval component is to determine a list of

recommendation items for each group member. This is achieved by going through the list of similar

and trusted system users for each group member and retrieving a list of recommendation items which

meet three criteria:

The list of recommendation items retrieved must have been explicitly rated by the relevant

trusted or similar system user.

The list of recommendation items must only contain those items not already rated by the

group member.

Each rating score within the list of retrieved recommendation items must exceed a defined

threshold.

Each of these criteria must be enforced by the recommendation retrieval component.

Model Definition 16: Implicit trust prediction with MoleTrust

In the PerTrust model, for a source user, u_a, and a target user, u_b, an implicit trust valuation, represented by the variable $t_{u_a,u_b}$, can be determined as per the MoleTrust algorithm defined below:

$t_{u_a,u_b} = \dfrac{\sum_{u_c \in TN(u_a)} t_{u_a,u_c} \cdot t_{u_c,u_b}}{\sum_{u_c \in TN(u_a)} t_{u_a,u_c}}$

Variable Definition 16.1: Empty or incomplete trust network definition

In the PerTrust model, it may not be possible to determine an implicit trust valuation as there may be no trust paths linking two users together. In such a case, the implicit trust valuation is set to a value of 0. Formally, therefore, this scenario results in the following definition:

$t_{u_a,u_b} = 0$
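The following is a minimal, simplified Python sketch of the trust propagation idea behind Model Definition 16, assuming a dictionary of explicit trust statements. The full MoleTrust algorithm performs a breadth-first walk of the trust network up to the trust horizon, which is only approximated here by a recursion depth limit; all names and data are illustrative.

def implicit_trust(source, target, trust, horizon=2, threshold=5):
    """Simplified MoleTrust-style propagation (Model Definition 16): the predicted
    trust in the target is the average of the trust placed in the target by the
    source's trusted users, weighted by the source's trust in them.
    Returns 0 when no trust path exists (Variable Definition 16.1)."""
    if target in trust.get(source, {}):        # direct trust statement (path length 1)
        return float(trust[source][target])
    if horizon <= 1:
        return 0.0                             # horizon reached, no path found
    num = den = 0.0
    for uc, t_a_c in trust.get(source, {}).items():
        if t_a_c <= threshold:
            continue                           # only propagate via trusted users
        t_c_b = implicit_trust(uc, target, trust, horizon - 1, threshold)
        if t_c_b > 0:
            num += t_a_c * t_c_b
            den += t_a_c
    return num / den if den else 0.0

if __name__ == "__main__":
    trust = {"ua": {"ub": 8, "uc": 6}, "ub": {"ud": 9}, "uc": {"ud": 7}}
    print(implicit_trust("ua", "ud", trust))   # weighted average over ub and uc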


Once a list of recommendation items meeting these three criteria has been obtained, the

recommendation retrieval component applies a weighting factor to the rating score of each

recommendation item. The weighting factor applied is either the system user’s historical rating for that

item or a calculated predicted trust-based rating valuation. The latter applies if the system user has

not yet rated the relevant recommendation item.

After this weighting factor has been applied to the rating score of each of the identified

recommendation items for each group member, the recommendation retrieval component orders

these lists from the highest rating to the lowest rating.

Therefore, for the recommendation retrieval component, the model definition given is for trust-based

rating prediction.

Trust-based rating prediction

In the PerTrust model, the means by which rating prediction occurs is through the implementation of

the EnsembleTrustCF algorithm. This algorithm combines both the implicit trust valuation prediction

algorithm as well as the similarity valuation algorithm. As a result, this ensures that as many users as

possible are queried when trying to predict a rating score for an item. This means that both trusted

users as well as similar users are queried when predicting a rating valuation for a specific user for a

recommendation item.

The EnsembleTrustCF algorithm, as it is implemented in the PerTrust model, is given in Model Definition 17 below.

Model Definition 17: Rating prediction with EnsembleTrustCF

In the PerTrust model, the EnsembleTrustCF algorithm is formally defined to calculate a predicted rating, $p_{u_a,i}$, for a user, u_a, and a recommendation item, i, as per the algorithm definition below:

$p_{u_a,i} = \bar{r}_{u_a} + \dfrac{\sum_{u_b \in TN(u_a)} t_{u_a,u_b}\,(r_{u_b,i} - \bar{r}_{u_b}) + \sum_{u_b \in SimN(u_a)} sim_{u_a,u_b}\,(r_{u_b,i} - \bar{r}_{u_b})}{\sum_{u_b \in TN(u_a)} t_{u_a,u_b} + \sum_{u_b \in SimN(u_a)} sim_{u_a,u_b}}$

Variable Definition 17.1: Predicted rating variable boundary definition

In the PerTrust model, the predicted rating, $p_{u_a,i}$, for a user, u_a, and a recommendation item, i, is bound between a rating valuation of 0 and 5. This is formally defined as follows:

$p_{u_a,i} \in [0, 5]$
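Model Definition 17 can be sketched as follows, under the assumption that the trust and similarity weights towards each neighbour, the neighbours' ratings of the item, and their rating averages have already been computed; all names and values are illustrative.

def ensemble_trust_cf(mean_a, trusted, similar, neighbour_ratings, neighbour_means):
    """Predict a rating for user u_a on item i (Model Definition 17).
    trusted / similar map neighbour ids to trust / similarity weights;
    neighbour_ratings holds r_{u_b,i} and neighbour_means holds r̄_{u_b}."""
    num = den = 0.0
    for ub, weight in list(trusted.items()) + list(similar.items()):
        if ub not in neighbour_ratings:
            continue                                 # neighbour has not rated item i
        num += weight * (neighbour_ratings[ub] - neighbour_means[ub])
        den += weight
    prediction = mean_a + (num / den if den else 0.0)
    return max(0.0, min(5.0, prediction))            # clamp to [0, 5] (Variable Definition 17.1)

if __name__ == "__main__":
    print(ensemble_trust_cf(
        mean_a=3.5,
        trusted={"ub": 0.8}, similar={"uc": 0.7},
        neighbour_ratings={"ub": 4, "uc": 5},
        neighbour_means={"ub": 3.0, "uc": 4.0}))     # 4.5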


12.3.4 Top-N recommendation component

The top-N recommendation component returns a final top-N list of personalised recommendations for each group

member. This list is used as the basis for a group recommendation. With reference to the PerTrust

model, this component goes through the recommendation lists of each specific system user and takes

the highest rated top-N recommendation items from each list. Should there be duplicate

recommendation items in the top-N list, it is the responsibility of this component to remove these and

to select the next highest recommendation item in the list thereafter.

12.4 Aggregation component

The aggregation component has a twofold purpose in the PerTrust model.

Determine a final top-N list of recommendations for the group as a whole.

Determine the satisfaction levels of each group member and the group as a whole.

With reference to the architecture, this component comprises four subcomponents to implement these requirements.

1. Rating matrix formation component

2. Personality and trust influence component

3. Aggregation model component

4. Satisfaction component

12.4.1 Rating matrix formation component

The rating matrix formation component forms a rating matrix by making use of the top-N

recommendation lists determined for each group member in the preference elicitation component.

The rating matrix formation component has two main responsibilities.

To form an empty rating matrix. This is done by performing two steps.

o The addition of each group member as a separate column in the matrix.

o The addition of each distinct recommendation item identified within the top-N recommendation list for each group member as a separate row.

To populate the rating matrix with rating scores for each recommendation item for

each group member. If a group member has already participated in or experienced a

recommendation item and has also explicitly attributed a rating score to that item, then this

rating score is populated. However, if this is not the case, then a rating score is generated for

them by making use of the EnsembleTrustCF algorithm. After this process has been

concluded, a fully populated rating matrix is formed.
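As a sketch of this rating matrix formation step, the following shows how a matrix of group members and items could be assembled from each member's top-N list, falling back to a predicted rating where no explicit rating exists. The predict function stands in for the EnsembleTrustCF prediction of Model Definition 17; all names and data are illustrative.

def build_rating_matrix(group, top_n_lists, explicit_ratings, predict):
    """Form the rating matrix of section 12.4.1: one entry per (member, item),
    using the member's explicit rating when present and a predicted rating otherwise."""
    # Distinct items across all members' top-N recommendation lists.
    items = sorted({item for member in group for item in top_n_lists[member]})
    matrix = {}
    for member in group:
        row = {}
        for item in items:
            rated = explicit_ratings.get(member, {})
            row[item] = rated[item] if item in rated else predict(member, item)
        matrix[member] = row
    return matrix

if __name__ == "__main__":
    group = ["ua", "ub"]
    top_n = {"ua": ["i1", "i2"], "ub": ["i2", "i3"]}
    ratings = {"ua": {"i1": 4}, "ub": {"i3": 5}}
    predict = lambda user, item: 3.0     # stand-in for the EnsembleTrustCF prediction
    print(build_rating_matrix(group, top_n, ratings, predict))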


12.4.2 Personality and trust influence component

The personality and trust influence component takes the fully populated rating matrix formed in the

previous step and weights each rating score in the matrix with the personality and trust factor scores

of every other group member. In the PerTrust model, the personality factor is the CMW value of the

particular other group member and the trust factor is the explicit or implicit trust value or the similarity

value calculated for that other group member.

All model definitions related to the personality and trust influence component are given below.

Conflict mode weight (CMW) value

The CMW value makes use of the dominant and least dominant personality types determined for a

system user to calculate an assertiveness and cooperativeness score for each personality type. The

purpose of this is so that the most dominant and least dominant personality types identified for the

user can be mapped to a relevant numeric assertiveness and cooperativeness score.

The relevant assertiveness and cooperativeness scores for each dominant and least dominant type

have already been defined and implemented in the research done by Recio-García et al. (2009). As a

result, the same numeric weightings and mappings are used within the PerTrust model. These are

formally defined in Model Definition 18 below.


After both the dominant and least dominant personality types have been defined for a user and after

an assertiveness and cooperativeness score has been determined for them, the conflict mode weight

(CMW) value can be calculated for the user. The CMW value is used to determine a final numeric

value which represents a user’s personality make up. The CMW value is formally defined in Model

Definition 19 below.

Model Definition 18: Assertiveness and cooperativeness scores

In the PerTrust model, the assertiveness and cooperativeness scores for a user, u, represented

by variables, 𝑢𝑎𝑠𝑠𝑒𝑟𝑡𝑖𝑣𝑒𝑛𝑒𝑠𝑠 and 𝑢𝑐𝑜𝑜𝑝𝑒𝑟𝑎𝑡𝑖𝑣𝑒𝑛𝑒𝑠𝑠 respectively, are formally determined by

summing the individual assertiveness and cooperativeness scores for each dominant and least

dominant personality type determined for user, u, defined by variables, 𝑢𝑑𝑝𝑡 and 𝑢𝑙𝑑𝑝𝑡 .

The relevant assertiveness and cooperativeness scores used in the PerTrust model are defined

in Table 12.1 below.

Personality Type    Assertiveness                      Cooperativeness
                    Dominant     Least Dominant        Dominant     Least Dominant
Competing            0.375       -0.075                -0.150        0.000
Collaborating        0.375       -0.075                 0.375       -0.075
Compromising         0.000        0.000                 0.000        0.000
Accommodating       -0.375        0.075                -0.375        0.075
Avoiding            -0.150        0.000                 0.375       -0.075

Table 12.1: Assertiveness and cooperativeness personality type mappings (Recio-García et al., 2009)

Therefore, by summing the corresponding assertiveness and cooperativeness factors for each

dominant and least dominant personality type, a final assertiveness and cooperativeness score

can be determined for user, u.



Group-based rating prediction with personality and trust

In the PerTrust model, an important consideration is that of the impact of both trust and personality

within the group recommendation process. The chosen method of implementation for a group-based

rating prediction in the PerTrust model is the delegation-based rating prediction (DBRP) algorithm.

The formal definition of its implementation is given in Model Definition 20 below.

Model Definition 19: Conflict mode weight (CMW) value

Within the PerTrust model, the conflict mode weight for a user, u, represented by the variable $CMW_u$, is formally defined as per the equation below:

$CMW_u = \dfrac{1 + u_{assertiveness} - u_{cooperativeness}}{2}$

Variable Definition 19.1: Conflict mode weight boundary definition

The conflict mode weight value is a real number bound between 0 and 1. A value of 0 indicates that the user is very cooperative, whereas a value of 1 indicates a selfish user. This is formally defined in the PerTrust model by the following boundary definition:

$CMW_u \in [0, 1]$
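Using the mappings of Table 12.1 and the equation of Model Definition 19, a user's conflict mode weight could be computed as sketched below. The way the dominant and least dominant scores are combined follows the summation described in Model Definition 18 and is one possible reading; all names are illustrative.

# Assertiveness / cooperativeness mappings transcribed from Table 12.1:
# each personality type maps to (dominant score, least dominant score).
ASSERTIVENESS = {
    "Competing":     (0.375, -0.075),
    "Collaborating": (0.375, -0.075),
    "Compromising":  (0.000,  0.000),
    "Accommodating": (-0.375, 0.075),
    "Avoiding":      (-0.150, 0.000),
}
COOPERATIVENESS = {
    "Competing":     (-0.150, 0.000),
    "Collaborating": (0.375, -0.075),
    "Compromising":  (0.000,  0.000),
    "Accommodating": (-0.375, 0.075),
    "Avoiding":      (0.375, -0.075),
}

def cmw(dominant_type, least_dominant_type):
    """Conflict mode weight (Model Definition 19): CMW_u = (1 + assert - coop) / 2."""
    assertiveness = ASSERTIVENESS[dominant_type][0] + ASSERTIVENESS[least_dominant_type][1]
    cooperativeness = COOPERATIVENESS[dominant_type][0] + COOPERATIVENESS[least_dominant_type][1]
    return (1 + assertiveness - cooperativeness) / 2

if __name__ == "__main__":
    print(cmw("Competing", "Avoiding"))   # assertive, uncooperative user -> 0.8
    print(cmw("Avoiding", "Competing"))   # cooperative user -> 0.2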

Model Definition 20: Personality and trust-based rating prediction

In the PerTrust model, a group-based rating prediction is determined by making use of the delegation-based rating prediction algorithm. This algorithm considers the trust and personality type of each member in the group and leverages this to determine a rating prediction for an item. Therefore, a group-based rating, for a user, u_a, and a recommendation item, i, represented by the variable $DBRP_{u_a,i}$, is formally defined as follows:

$DBRP_{u_a,i} = \dfrac{1}{\sum_{u_b \in G} t_{u_a,u_b}} \cdot \sum_{u_b \in G \land u_b \neq u_a} t_{u_a,u_b} \left( r_{u_b,i} + CMW_{u_b} \right)$

Variable Definition 20.1: Predicted rating variable boundary definition

In the PerTrust model, the group-based rating score for a user, u_a, and a recommendation item, i, is bound between a score of 0 and 5. Formally, therefore, this is defined as per the boundary definition below:

$DBRP_{u_a,i} \in [0, 5]$
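A minimal sketch of one possible reading of Model Definition 20 is given below, assuming the trust values and CMW values of the other group members are already available; the data and names are illustrative, and the clamping to [0, 5] reflects Variable Definition 20.1.

def dbrp(user, item, group, trust, ratings, cmw):
    """Delegation-based rating prediction (Model Definition 20): a trust-weighted
    combination of the other members' ratings, each adjusted by that member's
    conflict mode weight."""
    total_trust = sum(trust[user][ub] for ub in group if ub != user)
    if total_trust == 0:
        return 0.0
    weighted = sum(trust[user][ub] * (ratings[ub][item] + cmw[ub])
                   for ub in group if ub != user)
    return max(0.0, min(5.0, weighted / total_trust))   # clamp to [0, 5]

if __name__ == "__main__":
    group = ["ua", "ub", "uc"]
    trust = {"ua": {"ub": 0.8, "uc": 0.4}}
    ratings = {"ub": {"i1": 4.0}, "uc": {"i1": 2.0}}
    cmw = {"ub": 0.6, "uc": 0.3}
    print(dbrp("ua", "i1", group, trust, ratings, cmw))  # ~3.83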


12.4.3 Aggregation model component

The aggregation model component takes the personality and trust affected rating matrix formed in the

previous function and aggregates this into a single recommendation list for the group as a whole. This

occurs through the application of a relevant aggregation model.

All model definitions related to the aggregation model component are given below.

Aggregation model

There are a number of aggregation models which could potentially be implemented. However, after an evaluation of the PerTrust model, the chosen method of implementation for the aggregation process is the plurality voting model. This evaluation is detailed further in the next chapter. For the purposes of the PerTrust model, the plurality voting model is formally defined in Model Definition 21 below.

12.4.4 Satisfaction component

The satisfaction component determines the satisfaction levels of both individual group members as

well as the group as a whole. All model definitions related to the satisfaction component are given

below.

Model Definition 21: Plurality voting aggregation model

In the PerTrust model, a set of personality and trust affected ratings for a recommendation item, i, represented by the variable PTAR, where PTAR = {DBRP_{u_a,i_0}, DBRP_{u_b,i_1}, ..., DBRP_{u_UN,i_IN}}, is aggregated into a single rating valuation by making use of the plurality voting model. The plurality voting aggregation algorithm, represented by the variable PV, defined for a group, G, outputs a group recommendation, GR, where GR = {r_{G,i_0}, r_{G,i_1}, ..., r_{G,i_IN}}, as per the definition below:

$GR = PV\left(\{DBRP_{u,i} \mid DBRP_{u,i} \in PTAR\}\right)$

Variable Definition 21.1: Formal definition of the plurality voting model

In the PerTrust model, the plurality voting aggregation model is defined as an aggregation model whereby group members cast votes for each of their highest rated individual recommendations. The recommendations which receive a majority of votes at any one point are the ones which are selected. Should there be a tie in the number of votes, both recommendations are selected.
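The plurality voting model of Model Definition 21 can be sketched as repeated voting rounds over the personality and trust affected ratings, as below. The sketch simplifies tie handling by selecting all tied items and is illustrative only.

from collections import Counter

def plurality_voting(affected_ratings, n):
    """Plurality voting aggregation (Model Definition 21): in each round every group
    member votes for their highest rated remaining item; the item (or items, in the
    case of a tie) with the most votes is added to the group recommendation list."""
    group_list = []
    remaining = {item for ratings in affected_ratings.values() for item in ratings}
    while remaining and len(group_list) < n:
        votes = Counter()
        for member, ratings in affected_ratings.items():
            candidates = {i: r for i, r in ratings.items() if i in remaining}
            if candidates:
                votes[max(candidates, key=candidates.get)] += 1
        if not votes:
            break
        top = max(votes.values())
        winners = [item for item, count in votes.items() if count == top]  # ties: select all
        group_list.extend(winners)
        remaining -= set(winners)
    return group_list[:n]

if __name__ == "__main__":
    affected = {
        "ua": {"i1": 3.9, "i2": 4.6, "i3": 2.5},
        "ub": {"i1": 4.1, "i2": 4.4, "i3": 3.0},
        "uc": {"i1": 3.8, "i2": 4.2, "i3": 4.9},
    }
    print(plurality_voting(affected, n=2))   # ['i2', 'i1']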


Individual satisfaction

In the PerTrust model, the individual satisfaction of each member within a group for a given group

recommendation is calculated based on the work of Masthoff (2004). In this implementation, individual

satisfaction is determined by comparing a group member’s individual preferences with the group’s

recommendation. If an individual group member has explicit or implicitly derived rating valuations

which are high for these group recommendations, then the group member’s satisfaction is calculated

to be quite high. Conversely, if the group member has explicitly attributed or had implicitly derived

ratings which are low for these group recommendations, the group member will have a low level of

satisfaction.

Another consideration is that of linearity: equal differences on the rating scale do not carry equal impact. The difference between a rating valuation of 4.5 and 5.0, for example, is more impactful than the difference between 3.0 and 3.5. As a result, a rating valuation mapping is proposed to cater for this consideration. In this implementation, rating valuations are mapped to a numeric scale on which the differences between successive values are greater than the differences on the actual rating scale. In this way, the differing impact between rating scores is adequately catered for.

Therefore, the chosen implementation for individual satisfaction within the PerTrust model is defined

in Model Definition 22 below.


Model Definition 22: Individual satisfaction

In the PerTrust model, the individual satisfaction of a user, u_a, with a group recommendation rating set, GR, where GR = {r_{G,i_0}, r_{G,i_1}, ..., r_{G,i_IN}}, is represented by the variable $sat_{u_a}$. Formally, given the individual rating set for user, u_a, represented by the set IR, where IR = {r_{u_a,i_0}, r_{u_a,i_1}, ..., r_{u_a,i_IN}}, individual satisfaction can be determined as per the definition below:

$sat_{u_a} = \dfrac{\sum_{i \in GR} r_{G,i}}{\sum_{i \in IR} \max\left(r_{u_a,i}\right)}$

Model Definition 22.1: Satisfaction boundary definition

In the PerTrust model, the satisfaction score for a user, u_a, represented by the variable $sat_{u_a}$, is bounded between 0 and 1. A satisfaction score of 0 indicates total dissatisfaction with the group recommendation, whilst a satisfaction score of 1 indicates total satisfaction with the group recommendation. Formally, this is defined as follows:

$sat_{u_a} \in [0, 1]$

Model Definition 22.2: Numeric rating scale definition

In the PerTrust model, when satisfaction is being determined, the rating valuation scores for each item are mapped to a numeric scale to ensure that the differences between rating valuation scores are adequately catered for. The numeric scale mapping used in the PerTrust model is presented in Table 12.2 below.

Rating Valuation           0.5   1.0   1.5   2.0   2.5   3.0   3.5   4.0   4.5   5.0
Mapped Rating Valuation    -25   -16    -9    -4    -1     1     4     9    16    25

Table 12.2: Satisfaction – Numeric rating scale definition

By this definition, should any rating valuation fall between the defined scores, then the lesser score is used. For example, if a rating valuation were 1.7, then the mapping score for 1.5 would be used. In this case, the mapping score would be -9.

Model Definition 22.3: Maximum individual rating scores definition

In the PerTrust model, the denominator of the satisfaction algorithm is defined as the maximum rating score valuations within the group member's individual preferences list. These maximum rating scores are summed for as many items as there are in the group recommendation list.
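The following is a minimal sketch of one possible reading of Model Definition 22, applying the numeric scale of Table 12.2 to both the group recommendation ratings and the member's own best ratings; the helper names are illustrative and the final clamp reflects Model Definition 22.1.

# Numeric rating scale mapping transcribed from Table 12.2.
SCALE = {0.5: -25, 1.0: -16, 1.5: -9, 2.0: -4, 2.5: -1,
         3.0: 1, 3.5: 4, 4.0: 9, 4.5: 16, 5.0: 25}

def mapped(rating):
    """Map a rating to the numeric scale, rounding down to the nearest defined step
    (e.g. a rating of 1.7 maps to the value for 1.5, i.e. -9)."""
    steps = [s for s in sorted(SCALE) if s <= rating]
    return SCALE[steps[-1]] if steps else SCALE[0.5]

def individual_satisfaction(group_ratings, individual_ratings):
    """Individual satisfaction (Model Definition 22): the mapped ratings of the items
    in the group recommendation, relative to the best the member could have obtained
    from their own preference list (Model Definition 22.3)."""
    numerator = sum(mapped(r) for r in group_ratings)
    best = sorted((mapped(r) for r in individual_ratings), reverse=True)
    denominator = sum(best[:len(group_ratings)])
    if denominator == 0:
        return 0.0
    return max(0.0, min(1.0, numerator / denominator))   # clamp to [0, 1]

if __name__ == "__main__":
    group_rec = [4.0, 3.5]          # group ratings of the recommended items
    individual = [5.0, 4.5, 3.0]    # the member's own preference ratings
    print(round(individual_satisfaction(group_rec, individual), 3))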


Group satisfaction

In the PerTrust model, group satisfaction is a measure used to determine how satisfied the group, as a whole, is with the calculated group recommendation. In this approach, group satisfaction is defined as the average of the group members' individual satisfaction levels minus the standard deviation of those individual satisfaction levels from that average.

This definition is formally presented for the PerTrust model in Model Definition 23 below.

12.5 Conclusion

In this chapter, all the components within the PerTrust model were formally defined. This included a

detailed presentation of the registration component, the preference elicitation component, as well as

the aggregation component.

In the next chapter, each of the components defined within the PerTrust model is evaluated with the

intention of determining the effectiveness and suitability of the model for an online group

recommender system.

Model Definition 23: Group satisfaction

In the PerTrust model, the group satisfaction score for a group, G, where G = {u_a, u_b, ..., u_UN}, with a group recommendation rating set, GR, where GR = {r_{G,i_0}, r_{G,i_1}, ..., r_{G,i_IN}}, is represented by the variable $sat_G$. Formally, the satisfaction score for a group is defined as follows:

$sat_G = \dfrac{\sum_{u \in G} sat_u}{|G|} - \sqrt{\dfrac{\sum_{u \in G} \left(sat_u - \overline{sat}\right)^2}{|G|}}$, where $\overline{sat} = \dfrac{\sum_{u \in G} sat_u}{|G|}$

Model Definition 23.1: Satisfaction boundary definition

In the PerTrust model, the satisfaction score for a group, G, represented by the variable $sat_G$, is bounded between 0 and 1. A satisfaction score of 0 indicates total dissatisfaction with the group recommendation, whilst a satisfaction score of 1 indicates total satisfaction with the group recommendation. Formally, this is defined as follows:

$sat_G \in [0, 1]$
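A minimal sketch of Model Definition 23, read as the average of the members' individual satisfaction levels minus their standard deviation, is given below; the data and names are illustrative.

from math import sqrt

def group_satisfaction(individual_satisfactions):
    """Group satisfaction (Model Definition 23): the mean of the members' individual
    satisfaction levels minus the standard deviation of those levels."""
    n = len(individual_satisfactions)
    if n == 0:
        return 0.0
    mean = sum(individual_satisfactions) / n
    std_dev = sqrt(sum((s - mean) ** 2 for s in individual_satisfactions) / n)
    return max(0.0, min(1.0, mean - std_dev))   # clamp to [0, 1] (Model Definition 23.1)

if __name__ == "__main__":
    print(group_satisfaction([0.8, 0.7, 0.9]))   # high, even satisfaction -> high score
    print(group_satisfaction([0.9, 0.2, 0.9]))   # uneven satisfaction is penalised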

Chapter 13 PerTrust evaluation

13.1 Introduction

In the previous chapter, the PerTrust model was formally defined with each of the main components

discussed. The main components discussed were the registration, preference elicitation, and

aggregation components.

In this chapter, the core components of the PerTrust group recommender model are evaluated. The

purpose of this evaluation is to determine the impact and performance of the model with regards to

group recommendation. The intention is to determine whether the proposed PerTrust model is a

viable candidate for group recommendation when compared against a number of other group

recommender models.

The method followed for the evaluation of the PerTrust model is that of an offline evaluation against a

dataset captured by the author for the purposes of this research (Shani & Gunawardana, 2011). An

offline evaluation is defined as an evaluation done independent of system user interaction (Shani &

Gunawardana, 2011). The purpose of an offline evaluation of the PerTrust model is to determine the

effectiveness of the model by assuming similar system user behaviour as per the system user data

captured within the dataset (Shani & Gunawardana, 2011).

The evaluation conducted in this chapter is different to those typically done on recommender systems.

Most recommender systems evaluate the effectiveness of a recommender system by hiding a number

of ratings in the system and then having the recommender system try to predict those hidden ratings

(Shani & Gunawardana, 2011). Effectiveness is, therefore, based on the error margin between the

hidden rating and the predicted rating (Shani & Gunawardana, 2011). In the PerTrust model

evaluation, however, a user and group’s preferred recommendations are explicitly captured within the

dataset. As a result, the effectiveness of the PerTrust model is based on whether the model can

calculate recommendations that match the explicitly captured recommendations. This evaluation

process is detailed further in later sections.

For the purposes of evaluating the PerTrust model, the following model components are evaluated in

this chapter.

o Overall model. This evaluates the overall performance of the PerTrust model when

determining group recommendations. The metric used to measure performance is the

accuracy of the results returned by the PerTrust model.

o Trust. Trust is used both in the similar and trusted user identification component and in the

recommendation retrieval component to elicit a top-N set of recommendations for each user.

It is additionally implemented in the personality and trust influence component when affecting

rating scores. The impact of implementing trust in these PerTrust model components is

evaluated.


o Personality. Personality is implemented in the personality and trust influence component as

rating scores are influenced using the Conflict Mode Weight (CMW) value. The impact of

implementing personality in this PerTrust model component is evaluated.

o Satisfaction. Satisfaction is implemented in the satisfaction component. In this component, a

satisfaction score is calculated for each system user individually as well as for the group as a

whole. The impact on satisfaction in making use of the PerTrust model is evaluated.

In order to present the evaluation conducted against the PerTrust model, this chapter is structured as

follows. In the first section, the dataset used to perform this evaluation is defined. Next, a number of

considerations with regards to the evaluation are presented. In addition to this, the group

recommender models against which the PerTrust model is evaluated are defined. Next, the results of

the evaluation are presented for each PerTrust model component. Following this, the configuration of

a number of variables in the PerTrust model is defined. Finally, the chapter concludes.

13.2 Dataset

The definition of the dataset is important as it influences the specific results of the evaluation. What is

challenging for this dataset is that it has to contain user data, recommendation item data, rating

valuation data, group data, as well as trust data in order to evaluate group recommendation.

Therefore, this section discusses the selection of a dataset for this evaluation, how it was populated

with the required group recommendation data, and then identifies the inherent deficiencies of the

dataset. Each of these topics is defined below with the intention of presenting a candidate dataset for

the evaluation of the PerTrust model.

13.2.1 Selecting a dataset for evaluation

Selecting a dataset for the evaluation of a group recommender model is difficult as there are no

publicly available datasets which contain the necessary group recommendation data (Pera & Ng,

2012). Therefore, most researchers take one of two possible approaches in forming a dataset for

group recommendation (Najjar & Wilson, 2011).

o Infer groups and relevant group data from a publicly accessible dataset developed for

single user recommender systems (Najjar & Wilson, 2011). The advantage is that the

dataset is readily available and accessible to anyone. This makes it easy to conduct offline

experiments (Shani & Gunawardana, 2011). The disadvantage is that all the data required for

evaluation has to be inferred. This means that the dataset does not contain actual user and

group data (Najjar & Wilson, 2011; Shani & Gunawardana, 2011).

o Create a dataset by engaging with actual users and groups and capturing the relevant

data in this way (Najjar & Wilson, 2011; Shani & Gunawardana, 2011). The main

advantage is that this approach allows for the capturing of actual, live, and relevant data,

suitable to the specific requirements of the group recommender system (Shani &


Gunawardana, 2011). The disadvantage of this approach, however, is that it requires the

active participation of many users who have to give up their time and effort (Shani &

Gunawardana, 2011).

For the purposes of evaluating the PerTrust model, the method chosen to select a dataset follows the second option. The reason for this is that the central focus of the system is to determine the impact of trust and personality within the group recommendation process. If the first option were selected, not only would groups have to be inferred, but trust and personality as well. This would make the dataset too unreliable to reflect true and accurate results.

13.2.2 Method of capturing data

The data captured for the PerTrust model evaluation dataset consists of user, group,

recommendation, trust, and personality data collected over a two week period. The context of the

PerTrust model for this evaluation was tourism. Therefore, all of the recommendation items and

ratings within the dataset revolve around tourist attractions. A summary of the dataset is presented in

Table 13.1 below.

Total Users                   37
Total Groups                  12
Total Two-People Groups        2
Total Three-People Groups      6
Total Four-People Groups       2
Total Five-People Groups       1
Total Six-People Groups        1
Total Tourist Attractions     62
Total Rating Valuations      667
Total Trust Valuations       228

Table 13.1: Dataset summary

In total, 37 users participated in the data capturing process with 12 groups formed. The intention with

regards to the formation of groups was to ensure that the group sizes were not too large as the

PerTrust model is only intended for smaller groups. For the data capturing process, participants were

asked to either complete a form manually or to capture data via the online prototype of the system.

These methods required the capturing of:

Basic user details. This is information such as the participant’s name and surname.

Personality details. This personality information is obtained upon completion of the TKI

personality test, where a dominant and least dominant personality type is determined.

Social relations details. This is all the trust statements submitted by the system user to

other system users. Trust is a numeric value between 1 and 10.


Rating history details. This is the rating data of all tourist attractions visited attributed

with a corresponding rating between 1 and 5.

Top three user preference details. This is a user’s top-3 tourist attractions that they

would like to visit and have not yet been to. These details follow a similar process to the

data capturing methodology implemented by Quijano-Sanchez et al. (2013) with their

evaluation.

Top three group preference details. This is a group’s top-3 list of tourist attractions that

a group would like to visit. Again, these details were similar to the data capturing

methodology implemented by Quijano-Sanchez et al. (2013).

The result of this data capturing process was that a total of 667 rating valuations and a total of 228 trust valuations were contributed.

13.2.3 Limitations of the dataset

Limitations of the dataset are now raised so that their effect on the PerTrust evaluation can be

referenced and discussed in further sections. The identified limitations of this dataset are as follows.

a) Data sparsity

Data sparsity refers to the lack of rating contributions submitted within the dataset (Quan &

Hinze, 2008; Victor, 2010). This causes a lack of accuracy or quality in the results determined

by the system. In this particular dataset, out of a possible 2294 ratings which could have been

submitted, only 667 ratings were submitted by system users. This results in a dataset

coverage of 29.08%.

b) Cold start items

In this particular dataset, cold start items refer to those items which have received no rating

scores (Bhuiyan, 2011; Quan & Hinze, 2008). In this dataset, there are only two cold start

items. However, one of these cold start items was selected as one of the top-3 group

preferences by a total of 25% of all groups. This immediately makes group recommendation

more difficult since these particular cold start items will never be returned as a possible group

recommendation.

c) Group preference selections

The group recommendation process is not always intuitive, especially from a personality

perspective. For example, even though there were people who had competing personalities

individually, in the specific context of deciding a group recommendation, fairness was being

promoted more. However, given another context, such as in a corporate working

environment, the competing personality type would come through more. Another noticeable

characteristic is that some participants became aware of the role of personality in the process


of group recommendation. As a result, these participants would compensate and promote a

more cooperative approach with the group.

In this section, the dataset used to perform the PerTrust model evaluation was analysed with the

intention of detailing the dataset and analysing the inherent limitations of the dataset. In the next

section, the evaluation of the PerTrust model on the defined dataset is presented.

13.3 Evaluation considerations and models

The focus of this section is to put forward a number of considerations with regards to the PerTrust

evaluation as well as to present the various group recommender models which are evaluated. This

background to the PerTrust evaluation results in a better understanding when the various model

component evaluation results are presented in the next section.

13.3.1 Base experiments

In the previous chapter on the PerTrust model, it was highlighted that various factors would be

evaluated in order to determine the best possible implementation for the PerTrust model. The specific

factors for which this needs to be done are the N variable in each top-N function, the similarity

threshold, and the aggregation model. Each of these is discussed below.

a) The N variable in each top-N function

In the PerTrust model, the N variable represents the number of recommendations

retrieved for a component. Relevant model components are the preference elicitation

component whereby a top-N list of recommendations is retrieved and the aggregation

component whereby a final top-N list of group recommendations is returned.

In order to define the best possible configuration of the N variable in a group

recommender model, each experiment is run with three scenarios: where the N variable

is set to 10, the N variable is set to 20, and the N variable is set to 30. The purpose of

running these scenarios is to determine whether a group recommender model performs better when more recommendations are considered in the group preference elicitation model component or when fewer are considered.

b) The similarity threshold

In the PerTrust model, similarity is often used to define similar users. Therefore, in order

to determine a similarity threshold, each experiment is run with thresholds varying

between 0 and 0.6 with steps of 0.1 in between. Additionally, these thresholds are

implemented for every top-N recommendation. The purpose of these experiments is to

determine whether the performance of group recommendation is improved if more users


are included in the preference elicitation model component or if it is better that only the

most similar users are considered.

c) Aggregation model

In the PerTrust model, the aggregation model component implements an aggregation

model as part of deriving the final group recommendation. It was noted in the chapter on

aggregation that a suitable aggregation model would have to be evaluated before

nominating a candidate aggregation model for the PerTrust model.

In order to determine a candidate aggregation model, a number of group recommender

model evaluations are executed for every defined aggregation model. These aggregation

models were formally defined in chapter 9 and are as follows: the additive utilitarian

model, the multiplicative utilitarian model, the average model, the average without misery

model, the least misery model, the most pleasure model, the fairness model, the plurality

voting model, the approval voting model, the Borda count model, and the Copeland rule

model.

13.3.2 Evaluation models

In this section, the group recommender models against which the PerTrust model is to be run are

presented and discussed. For the PerTrust evaluation, the other group recommender models used

are based on the work of Quijano-Sanchez et al. (2013). The purpose for this is to determine whether

this research improves upon their personality and trust-based group recommender model. It is,

however, noted that these models have been adapted for the purposes of this model evaluation. The

main adaptation is that, instead of using a Facebook-based implementation of trust, the defined model method of inferring trust with the MoleTrust algorithm is used.

In comparing the group recommender models of Quijano-Sanchez et al. (2013) and the PerTrust

group recommender model, the models to be evaluated can be split into two main categories. The

first category comprises of all those models whereby the preference elicitation component makes use

of collaborative filtering to elicit a top-N set of group member recommendations. The second category

comprises of the PerTrust model preference elicitation component definition in which trust is used as

the means to elicit a top-N set of group recommendations. As a result, the group recommender

models relevant to each of these categories are presented below.

a) Collaborative filtering-based group recommender evaluation models

The collaborative filtering-based group recommender models used in the PerTrust

evaluation are as follows:

Collaborative filtering - base (CF-Base): This is the base model for

collaborative filtering. As a result, no personality or trust enhancements are made


in this model. Consequently, this model implementation simply elicits preferences

using collaborative filtering and is aggregated thereafter to produce a final group

recommendation.

Collaborative filtering - personality only (CF-Personality): In this model, the

CF-Base model is used as a basis upon which to enhance the model with

personality. Personality is implemented in this model using the Conflict Mode

Weight (CMW) value. Because only personality is being evaluated, the trust

factor is not considered.

Collaborative filtering - trust only (CF-Trust): This model is similar to the CF-

Personality model, except that the impact of trust alone is evaluated instead of

personality. As a result, the personality factor is not considered in this model.

Collaborative filtering - personality and trust (CF-PersonalityTrust): This

model evaluates the combined impact of personality and trust. Therefore, both of

these factors are applied on top of the CF-Base model to determine their

cumulative influence.

b) Trust-based group recommender evaluation models

The trust-based group recommender models used in the PerTrust evaluation are as

follows:

Trust - base (Trust-Base): This is the base model for trust. As a result, no

personality or trust enhancements are made in this model with the exception of

the trust-based preference elicitation process. Consequently, this model

implementation simply elicits preferences using the defined PerTrust preference

elicitation model component methodology.

Trust - personality only (Trust-Personality): In this model, the Trust-Base

model is used as a basis upon which to enhance the model with personality.

Personality is implemented in this model using the Conflict Mode Weight (CMW)

value. Because only personality is being evaluated, the trust factor is not

considered.

Trust - trust only (Trust-Trust): This model is similar to the Trust-Personality

model, except that the impact of trust alone is evaluated instead of personality.

As a result, the personality factor is not considered in this model.

Trust - personality and trust (Trust-PerTrust): This model evaluates the

combined impact of personality and trust. Therefore, both of these factors are

applied on top of the Trust-Base model to determine their cumulative influence.

This is defined as the PerTrust model for evaluation.

Now that each of the models to be implemented in the evaluation has been defined, the evaluation results for each of these models can be presented and discussed. This is the focus of the next section.


13.4 Evaluation results of the PerTrust model using the dataset

In this section, the evaluation performed on the dataset as well as the consequent results are

presented. There are three main model component experiments run for this evaluation.

o Overall model performance. In this experiment, the overall performance of the group

recommender system is evaluated. The means of this evaluation is through measuring how

accurate the group recommender model is in calculating group recommendations.

o Implementation of personality and trust. In this experiment, the individual and combined

impact of personality and trust in the group recommender model is evaluated.

o Satisfaction. In this experiment, individual and group satisfaction is evaluated to determine

how satisfactory the results returned by each group recommender model are.

Each of these experiments as well as their consequent results is detailed in the subsections below.

13.4.1 Experiment 1: Overall model performance

In this model experiment, the overall group recommender model performance is determined by

measuring accuracy. In this experiment, the measurement of accuracy is in line with the evaluation

conducted by Quijano-Sanchez et al. (2013). In their evaluation, accuracy was determined by

comparing the results returned by the group recommender model with the actual top three group

preference details captured. In this evaluation, therefore, a similar approach is adopted whereby

accuracy is measured by presenting two different points of data.

One hit match. This data reference presents accuracy as the number or percentage of times

there is at least one match between recommendation items in the calculated group

recommendations and the actual top three group preference details.

Two hit matches. This data reference presents accuracy as the number or percentage of

times there are at least two matches between recommendation items in the calculated group

recommendations and the actual top three group preference details.

The results and a number of observations for both of these accuracy measurements are presented

below. Thereafter, the results are evaluated.
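To make these two accuracy measurements concrete, the following is a minimal sketch, with illustrative data, of how the one hit and two hit match percentages could be computed across groups.

def hit_match_percentage(calculated, actual_top3, min_hits):
    """Percentage of groups for which the calculated group recommendations contain
    at least `min_hits` of that group's actual top three preferences."""
    hits = sum(1 for g in calculated
               if len(set(calculated[g]) & set(actual_top3[g])) >= min_hits)
    return 100.0 * hits / len(calculated)

if __name__ == "__main__":
    calculated = {"g1": ["i1", "i4", "i7"], "g2": ["i2", "i3", "i9"]}
    actual_top3 = {"g1": ["i1", "i2", "i3"], "g2": ["i2", "i3", "i5"]}
    print(hit_match_percentage(calculated, actual_top3, min_hits=1))   # one hit match: 100.0
    print(hit_match_percentage(calculated, actual_top3, min_hits=2))   # two hit matches: 50.0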

a) One hit match performance

In this measurement of accuracy, the one hit match performance of each group recommender

model is evaluated in two different ways:

o Analysing the average percentage of times there is at least one match between the

calculated group recommendations and the actual top three group preference details.

This is detailed across all groups and broken down per aggregation model. The result

of this evaluation is presented in Figure 13.1 below.


o Analysing the number of times a specific one hit match percentage is reached by each

group recommender model. This is detailed across all groups. The result of this

evaluation is presented in Figure 13.2 below.

Figure 13.1: Average one hit match percentage

Figure 13.2: Number of hits per one hit match percentage

With reference to Figure 13.1, there are a number of observations made with regards to the

average one hit match percentage performance.

o The trust-based models mostly outperform the collaborative filtering-based

models. The only exception where this is not the case is for the approval aggregation

model. Much of this increased performance can be attributed to the impact of trust

over a sparse dataset. In this particular evaluation on the dataset, the best performing

group recommender model is the Trust-PerTrust model. This model either matches or outperforms the other models. The only exceptions to this are the average with threshold, approval, and Borda count aggregation models.

o Overall, the one hit percentage performance seems quite low. There are a

number of reasons for this.

The sparsity of the dataset makes it more difficult to generate an accurate

group recommendation.

25% of the groups selected an item not rated by any user on the system. This

excludes the potential of the item being returned in a group recommendation.

This experiment only reflects an average performance. In the best performing

instance for example, the one hit performance for the PerTrust model

increases to 83.33%. This can be seen in Figure 13.2.

o The CF-PersonalityTrust model performs worse than the CF-Base model. An

exception to this is found in the multiplicative aggregation model. The causes of this,

especially for the collaborative filtering-based models, are detailed in the model

component experiments focusing on the impact of personality and trust.

With reference to Figure 13.2, there are a number of observations made with regards to the

number of hits per one hit match percentage.

o The trust-based models mostly outperform the collaborative filtering-based

models. The trust-based models achieve both more hits and hits at higher one hit match percentages than the collaborative filtering-based

models. As per the observations of Figure 13.1, the best performing model is the

Trust-PerTrust model achieving the best results for the 83.333%, 75.000%, and

66.667% categories.

o The CF-PersonalityTrust model performs worse than the CF-Base model. As per

the observations of Figure 13.1, a similar trend occurs.

b) Two hit match performance

In this measurement of accuracy, the two hit match performance of each group recommender

model is evaluated in two different ways:

o Analysing the average percentage of times there are at least two matches between the

calculated group recommendations and the actual top three group preference details.

This is detailed across all groups and broken down per aggregation model. The result

of this evaluation is presented in Figure 13.3 below.

o Analysing the number of times a specific two hit match percentage is reached by each

group recommender model. This is detailed across all groups. The result of this

evaluation is presented in Figure 13.4 below.

Figure 13.3: Average two hit match percentage

Figure 13.4: Number of hits per two hit match percentage

With reference to Figure 13.3, there are a number of observations made with regards to the

average two hit match percentage performance.

o The performance is considerably lower than that of the one hit match

performance. The reason for this is that it is more difficult to achieve two out of three

matches than it is to achieve one out of three matches.

o The performance of the collaborative filtering-based models is more competitive

across each evaluation point. For the plurality, approval, Borda count, and Copeland

rule aggregation models, the collaborative filtering-based models outperform the trust-

based models.

With reference to Figure 13.4, there are a number of observations made with regards to the

number of hits per two hit match percentage.

o The collaborative filtering-based models achieve the best performance. This is in

contrast to the results of Figure 13.2. In this particular performance evaluation, the CF-

Base model surprisingly performs best. This could be attributed to the fact that

personality and trust hinder the performance of two hit matches as they filter out

potential results.

c) Evaluation of results

Based on the accuracy results of the one hit and two hit match performance evaluations on

each group recommender model, a number of conclusions are made.

o For one hit percentage matches, the trust-based models generally reflect a

better performance than the collaborative filtering-based models. Specifically, the

best performing trust-based model was the Trust-PerTrust model across both one hit

match evaluations. The implementation of both trust and personality in this model is

the cause of this increased result.

o For two hit percentage matches, the collaborative filtering-based models

generally reflect a better performance than the trust-based models. This is most

likely due to the fact that the implementation of trust and personality filters out potential

results, making it less likely that two matches are hit.

o As far as accuracy is concerned in this dataset, the most suitable group recommender models for the PerTrust approach are the trust-based models and, specifically, the Trust-PerTrust model. The main reason for this is that the trust-based models perform better for one hit percentage matches. This is important to note because a two hit match can only occur once at least one hit has been achieved; the collaborative filtering-based models therefore only achieve a better two hit match within the bounds of their one hit match results. The greater the number of one hit percentage matches, the more times at least one relevant item is returned. For example, a model that achieves a one hit match only 50% of the time is not improved much by a resultant 20% two hit performance.

13.4.2 Experiment 2: Personality and trust

The purpose of this experiment is to determine the influence of trust and personality in a group

recommender model. In the previous chapter, the implementation of trust and personality was

formally defined in the PerTrust model.

The model components affected by trust are the similar and trusted user identification

component, the recommendation retrieval component, and the personality and trust influence

component. These components assist the model in deriving a top-N set of recommendations

for each group member and influencing each group member’s personal ratings with trust.

The model component affected by personality is the personality and trust influence

component. Therefore, by evaluating the effect of trust and personality in a group

recommender model, the viability of using trust and personality in these components can be

determined.

In order to determine the impact of trust and personality, individually and collectively, on the group

recommender model, both the collaborative filtering-based models and the trust-based models are

evaluated. This evaluation occurs by comparing each of the four defined group models:

*-Base. This group model provides a base case without the implementation of personality or

trust.

*-Trust. This group model illustrates the average one hit match percentage when trust alone is

used within a group recommender model. This illustrates its impact on the similar and trusted

user identification, recommendation retrieval, and personality and trust influence components.

*-Personality. This group model illustrates the average one hit match percentage when

personality alone is used within a group recommender model. This illustrates its impact on the

personality and trust influence component.

*-PersonalityTrust. This group model illustrates the average one hit match percentage when

personality and trust are used within a group recommender model. This illustrates its impact

on the similar and trusted user identification, recommendation retrieval, and personality and

trust influence components.

The results of this evaluation are listed for collaborative filtering-based methods as well as for trust-

based methods below.

a) Collaborative filtering-based methods

The results of evaluating the impact of personality and trust on the average one hit match

percentage are presented in Figure 13.5 below.

Figure 13.5: Collaborative filtering-based trust and personality performance

With reference to Figure 13.5, there are a number of observations made with regards to the

average one hit match percentage.

o Generally, the base collaborative filtering model is the best performing model.

There are a number of reasons why the personality and trust enhanced collaborative filtering-based models fail to improve on the base model:

o The inherent differences between similarity and trust in the other

models. In the collaborative filtering-based group recommender models, the

preference elicitation process uses similarity as a basis. However, this

similarity value does not directly correlate with a relevant trust value.

Therefore, a less accurate measure is being used which results in poorer

recommendations being elicited. This hinders the results.

o The application of personality within the dataset. As was initially noted in

the section highlighting the limitations of the dataset, the personality

implementation was not always naturally intuitive.

b) Trust-based methods

The results of evaluating the impact of personality and trust on the average one hit match

percentage are presented in Figure 13.6 below.

Figure 13.6: Trust-based trust and personality performance

With reference to Figure 13.6, there are a number of observations made with regards to the

average one hit match percentage.

o There is an overall performance increase from the base model to the trust and

personality enhanced models. There are a number of reasons for this increase in

performance in the trust-based models:

o Generally, the application of trust either reinforces the trust already applied in the preference elicitation process or increases the overall performance of the Trust-Base model. This is evidenced by the final results, which show that the only two cases in which trust performs worse than the base are the average with threshold and Borda count aggregation models.

o In the trust-based models, the combination of personality and trust

results in a general increase in performance overall. This would seem to

contradict the results of the collaborative filtering-based models, but this is not necessarily the case. Recall that personality is not always intuitively applied; however, because personality acts as a rating factor, the increase or decrease in rating that it introduces can reinforce the trust already applied within the Trust-Base model.

c) Evaluation of results

o Trust is a positive contributor to the performance of the trust-based models. The greatest impact of trust is on the

preference elicitation component. This implementation results in a Trust-Base case

which is higher than any of the comparative evaluations in the collaborative filtering-

based models. The additional impact of trust alone on the trust-based models also

resulted in the same or an improved performance for the majority of the evaluations.

Exceptions to this are the average with threshold and Borda count models.

o Further experiments would have to be run to determine the contribution of personality.

The results for personality were inconclusive and non-intuitive. As a result, an

additional evaluation on another dataset would have to be conducted.

o The application of personality and trust together did, however, result in increased performance of the PerTrust model when compared to the base.

13.4.3 Experiment 3: Satisfaction

One of the main objectives of a group recommender system, as per chapter 2, is ensuring the

satisfaction of a group with a group recommendation. Additionally, the measuring of satisfaction was

discussed in chapter 10 with an individual and group satisfaction function motivated. Both of these

functions were then formally defined in the satisfaction component of the PerTrust model in the

previous chapter. Therefore, the purpose of this experiment is to determine the effect each defined group recommender model has on satisfaction, and thereby to identify the most satisfactory group recommender model.

The result of evaluating the impact on satisfaction is presented below. Figure 13.7 presents the

satisfaction impact from an individual perspective, whereas Figure 13.8 presents the satisfaction

impact from a group perspective.

Figure 13.7: Average individual user satisfaction

Figure 13.8: Average group satisfaction

With reference to Figure 13.7 and Figure 13.8, there are a number of observations made with regards

to the impact of the defined group recommender models on satisfaction.

Trust-based methods outperform collaborative filtering-based methods. In this

experiment, the trust-based methods provide more satisfactory group recommendations from

both an individual and group perspective. This experiment illustrates the relevance of trust as

a measure of similarity. This is because trust-based methods satisfy a group member’s needs

better than collaborative filtering-based methods for this evaluation.

The Trust-PerTrust group recommender model provides the most satisfactory result. In

both evaluations, the Trust-PerTrust model performs best in terms of satisfaction. This

illustrates the benefit on satisfaction of considering both trust and personality in the group

recommender process.

13.4.4 Summary of results

In this section, a number of model component experiments were evaluated. The purpose of these

evaluations was to determine the viability of the PerTrust model as a model suitable for group

recommendation. The main observations resulting from this evaluation are presented below.

In the overall model performance evaluation, the Trust-PerTrust model was motivated as the

most suitable for group recommendation. This observation was based upon the accuracy and

the one hit match performance of the model in generating group recommendations.

In the impact of trust and personality evaluation, it was identified that the implementation of

trust results in a better group recommender model performance. However, the results for

personality were inconclusive and non-intuitive.

In the satisfaction evaluation, the trust-based group recommender models provided more

satisfactory group recommendation results than the collaborative filtering-based ones. The

best performing group recommender model was the Trust-PerTrust model.

Therefore, based upon these three observations, it is concluded that the Trust-PerTrust model

implementation is a viable group recommender model. Additionally, it’s base and enhanced

implementation of trust results in a better overall group recommender performance. However, it is

noted that the implementation of personality in the model will require further evaluation.

In the next section, an optimum configuration is determined for the PerTrust model. This configuration is obtained by defining and evaluating values for each of the base experiment variables defined in section 13.3.1.

13.5 Proposing a configuration for the PerTrust model

In this section, a final proposed PerTrust model configuration is given for this dataset. In the previous section, it was determined that the PerTrust group recommender model is a suitable candidate group recommender model for this context, given the current dataset. The purpose of this section is to determine an optimum configuration for that model. This configuration defines three variables: the N variable, as defined in the top-N function; the similarity threshold variable; and the relevant aggregation model.

Each of these variables is defined in the subsections below.

13.5.1 Top-N and similarity configuration

In section 13.3.1, which gave background considerations, it was noted that the N variable in each top-

N function as well as the similarity threshold would be evaluated. In the evaluation of the PerTrust

model, the N variable was set to values of 10, 20, and 30 and the similarity threshold was set to values between 0.0 and 0.6 in increments of 0.1.

With regards to the actual evaluation of this configuration, each configuration type was run across

each aggregation method for the PerTrust model. However, because of the large amount of resultant

data, only the average one hit match performance for the Trust-PerTrust model is presented in Figure

13.9.

Figure 13.9: Trust-PerTrust top-N and similarity configuration

Based upon the results of this evaluation, it is evident that the best performing configuration overall is

where the N variable is set to a value of 10 and the similarity threshold is set to a value of 0.6.
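
The configuration sweep described above can be pictured as a simple nested loop over the candidate N values and similarity thresholds. The following hedged C# sketch assumes a hypothetical evaluateOneHitMatch delegate that runs the Trust-PerTrust model for a given configuration and returns its average one hit match percentage; it is not the evaluation harness actually used.

```csharp
using System;

public static class ConfigurationSweep
{
    // Runs the evaluation for every combination of N and similarity threshold
    // and reports the best performing configuration.
    public static (int BestN, double BestThreshold, double BestScore) FindBest(
        Func<int, double, double> evaluateOneHitMatch)
    {
        int[] topNValues = { 10, 20, 30 };
        var best = (BestN: 0, BestThreshold: 0.0, BestScore: double.MinValue);

        foreach (int n in topNValues)
        {
            for (double threshold = 0.0; threshold <= 0.6001; threshold += 0.1)
            {
                double score = evaluateOneHitMatch(n, threshold);
                if (score > best.BestScore)
                    best = (n, Math.Round(threshold, 1), score);
            }
        }
        return best;
    }
}
```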

13.5.2 Aggregation configuration

In this subsection, the best performing aggregation methodology is determined for the PerTrust model. The average one hit match performance result of an evaluation

done across all base configurations for each aggregation methodology is listed below in Figure 13.10.

Figure 13.10: Trust-PerTrust aggregation configuration

Figure 13.10 reveals that the best performing aggregation model overall is that of the plurality

aggregation algorithm. This aggregation algorithm results in a one hit rate percentage of over 70% on

average. Consequently, the plurality aggregation algorithm is nominated as the aggregation

configuration for the PerTrust model.

13.6 Conclusion

In this chapter, the evaluation of the PerTrust model was detailed with the consequent results

presented. The purpose of doing this was to determine the performance of the PerTrust model

against other group recommendation models and to evaluate whether it is a viable model for group

recommendation.

The conclusion of this chapter after the execution of multiple experiments was that the PerTrust

model is a viable candidate model for group recommendation, given the applicable dataset and

context. This conclusion is based on the improved performance of the proposed Trust-PerTrust

model when compared against both its collaborative filtering and trust-based counterparts on the

same dataset.

In the next chapter, the PerTrust prototype is presented. This prototype presents a practical

implementation of the PerTrust model and how each of the model components can be implemented

for an online group recommender system.

Chapter 14 PerTrust model prototype

14.1 Introduction

The purpose of this chapter is to demonstrate the PerTrust model by presenting the PerTrust

prototype. This PerTrust prototype is detailed by describing and explaining all the relevant

components implemented as per the PerTrust model as well as the relevant technologies used to

implement them.

The context of application for the prototype is tourism within the greater Johannesburg area.

Therefore, system users are able to sign up with the application, be assigned to a group, and request

the prototype to derive a set of possible tourist destination recommendations to visit.

In order to detail the implementation of this prototype, this chapter is structured as follows. Firstly, a

background is given as to the development of the prototype. Thereafter, class definitions are listed for

each of the defined PerTrust model components. Lastly, the chapter is concluded.

14.2 Background

The PerTrust prototype is an online single page web application. This web application follows a standard client-server architecture. The client part of the application was developed in Microsoft Visual Studio 2012 Professional Edition (Microsoft, 2014a) and was built upon the Twitter Bootstrap 3 Framework (Bootstrap, n.d.). The Twitter Bootstrap 3 Framework is an HTML 5 and CSS 3.0 based framework for web frontend developers (Bootstrap, n.d.). There are a number of reasons why this framework was chosen.

1. The Twitter Bootstrap 3 Framework is built to accommodate the frontend developer as much

as possible. Therefore, many of the CSS configurations for web controls such as buttons and

tabs, for example, have already been defined (Bootstrap, n.d.).

2. The Twitter Bootstrap 3 Framework has built-in support for responsive design (Bootstrap,

n.d.). Responsive design refers to the ability of a web page to adapt to the device viewing the

application (De Graeve, 2011). This aspect of design is becoming increasingly important as

more people make use of smartphones and tablets. Normally, a web designer would have to

manually cater for each type of device. However, with the Twitter Bootstrap 3 Framework, this

is automatically catered for. An example of this design implementation in the PerTrust

prototype is shown in Figure 14.1 below.

Figure 14.1: PerTrust responsive web design layout (Laptop – top; Tablet – bottom left; Smartphone –

bottom right)

Outside of the Twitter Bootstrap 3 component, there are two other considerations with regards to the

client.

1. The implementation of logic on the page itself. In the PerTrust prototype, all of the web page

logic is implemented by making use of client-side scripting. The technologies used for this

scripting are JavaScript as well as the jQuery version 1.10.2 library (Dmethvin, 2013).

2. The handling of content between the client and the web server. This content management is

done through a generic C# handler class which calls the relevant service and returns the

result back to the page.

With regards to the server component of the PerTrust prototype, all server interactions are

developed as an ASP.NET WCF web service written in C# (Microsoft, 2014b). Therefore, a main web

service is exposed on a web server. The client then consumes the service and makes use of

whichever methods are relevant for the particular functionality. In terms of the structure of the server

component, the main service class implements a defined interface. This main service class then calls

and manages the necessary functionality by calling other classes and performing the required logic

before returning a result back to the caller.

With regards to the database configuration, the web server makes use of the Entity Framework

(Microsoft, 2013) as a means to handle the various interactions between the web server and the

database. The database, as represented by the Entity Framework model (Microsoft, 2013), is presented

in Figure 14.2.

Figure 14.2: PerTrust database – Entity framework

A brief table definition of the PerTrust database is given below.

User. Stores all basic user and login information.

TouristAttraction. Stores a list of all tourist attractions.

UserTKIScore. Stores a system user’s TKI score per personality type.

UserUserTrustScore. Stores the explicit trust scores attributed between users.

UserTouristAttractionScore. Stores the explicit rating scores attributed by users to tourist

attractions.

Group. Stores a particular group of users. Each individual group is identified by the

GroupRepresentativeId column.

GroupPreference. Stores a list of the group preferences.

TKIScoreResult. Stores data to interpret a TKI score as high or low.

ConflictModeWeight. Stores data to determine a relative assertiveness and cooperativeness

score based on a high or low TKI personality type.

Figure 14.2 shows all the different tables within the database as well as how each one links to the

other. The relevance of each of these tables within the PerTrust prototype is discussed in further

detail in later sections. However, it is presented here as an overview of the database for the PerTrust

prototype web server component.

Next, the specific implementation of the PerTrust model and the various processes involved are

discussed in further detail.

14.3 Class implementation

This section presents a detailed discussion with regards to the implementation of the PerTrust

prototype. In particular, this discussion focuses on the various class implementations for each

relevant component of the PerTrust prototype. As a result, this section is explained with reference to

the interface and how the logic on the interface has been implemented.

Therefore, this section is split up into two main sections.

1. The first section details the registration service model component.

2. The second section details the group recommendation model components. This includes both

the preference elicitation component as well as the aggregation component.

Both of these model components are detailed in the sections following.

14.3.1 Registration components

In the PerTrust prototype, there are four main registration components.

1. Basic information component

2. Personality information component

3. Social relations information component

4. Rating history information component

Each of the sections following discusses one of these components, detailing how it is catered for in the PerTrust prototype.

a) Basic information component

The basic information component in the PerTrust prototype allows a user to capture their

basic details. The purpose of this information is to uniquely identify a system user.

The basic information component, as it is implemented in the PerTrust prototype, is shown in

Figure 14.3 below.

Figure 14.3: PerTrust prototype – Basic details component

The methodology followed by the PerTrust prototype in capturing the details of this

component is shown in the class diagram in Figure 14.4 below.

Figure 14.4: Class diagram – Basic details component functionality

The relevant method used for the implementation of capturing user details is as follows.

AddUser(). The AddUser() method takes as input all of the basic information

captured by the user. This data is then passed on to the database where it is

captured in the User table. Thereafter, the id of the newly created user is returned to

the application where it is stored for the other registration service processes.

b) Personality information component

In the PerTrust prototype, the purpose of the personality information component is to acquire

a user’s dominant personality types. As per the PerTrust model definition, this is done through

the Thomas-Kilmann Instrument (TKI) test and the answering of 30 multiple choice, A or B,

questions. The first tab of the TKI test in the PerTrust prototype is shown in Figure 14.5

below.

Figure 14.5: PerTrust prototype – Personality information component - TKI test

Once a user has completed the TKI test, the PerTrust prototype outputs a user’s dominant

personality type for their information. This dominant personality is determined by evaluating

how each question has been answered. For example, in Figure 14.5, question one, option A

indicates an avoiding personality type, while option B indicates an accommodating personality

type. Similarly, for question two, option A indicates a compromising personality type and

option B indicates a collaborating personality type.

After every question has been answered, a dominant personality is determined by totalling the

number of selections for each personality type. Those that exceed the specified

threshold for each personality type, as specified in the PerTrust model definition, are then

considered as the dominant personality types for the user. Thereafter, the dominant

personality or personalities are output as per Figure 14.6 below.

Figure 14.6: PerTrust prototype – Personality information component - Personality type

From an implementation perspective, the methodology followed within the PerTrust prototype

to capture these personality results is presented in the class diagram in Figure 14.7 below.

Figure 14.7: Class diagram – Personality information component functionality

The relevant method used for the implementation of capturing a user's personality results is as follows:

AddUserTkiResults(). The purpose of this method in the PerTrust prototype is to

capture the user’s results after they have taken the TKI test. The reason for storing

this data is because it is used as a factor to determine a group-based rating

prediction for group recommendation.

As input, this method takes the total score for each defined personality type.

Therefore, the competing, collaborating, compromising, avoiding, and

accommodating personality types each have their own scores. These scores are then

captured in the database in the UserTkiScore table. What is returned from the

database is an integer with 1 indicating that the data was saved successfully or a 0

indicating that the data was not saved successfully.
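
To make the tallying step described in this component concrete, a minimal C# sketch is given below. The per-type thresholds are passed in as a parameter because the actual threshold values are defined in the PerTrust model and are not repeated here; the type and method names are illustrative assumptions and do not reflect the prototype's actual classes.

```csharp
using System.Collections.Generic;
using System.Linq;

public enum PersonalityType { Competing, Collaborating, Compromising, Avoiding, Accommodating }

public static class TkiScoring
{
    // Totals the number of selections per personality type and returns those
    // types whose totals meet or exceed the configured threshold for that type.
    public static List<PersonalityType> GetDominantTypes(
        IEnumerable<PersonalityType> answeredSelections,
        IReadOnlyDictionary<PersonalityType, int> thresholds)
    {
        var totals = answeredSelections
            .GroupBy(t => t)
            .ToDictionary(g => g.Key, g => g.Count());

        return thresholds
            .Where(kv => totals.TryGetValue(kv.Key, out int total) && total >= kv.Value)
            .Select(kv => kv.Key)
            .ToList();
    }
}
```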

c) Social relations information component

In this component of the PerTrust prototype, a user is required to explicitly assign trust ratings

to those system users they know. Each user is consequently presented with a potential trust

rating between one and ten, with ten being the highest and one being the lowest. The tab for

capturing this data in the PerTrust prototype is shown in Figure 14.8 below.

Figure 14.8: PerTrust prototype – Social relations information component

In the PerTrust prototype, the methodology used to capture social relations data is as per the

class diagram in Figure 14.9 below.

Figure 14.9: Class diagram – Social relations component functionality

The relevant method used for the implementation of capturing a user's trust ratings is as follows:

AddUserUserTrustScores(). This method captures the trust valuations attributed by

the current user to every other known registered user. This data is persisted in the

database in the UserUserTrustScores table and represents a user’s explicit trust

network. If this data is successfully saved, a value of 1 is returned. If not, then a value

of 0 is returned.

With regards to the input of this method, it accepts a JSON (JSON, n.d.) string.

JSON, JavaScript Object Notation, is a lightweight, JavaScript-based object structure

understood by both the client and the server (JSON, n.d.). An example of a JSON

string, as implemented in the PerTrust model, is shown in Figure 14.10 below.

Figure 14.10: PerTrust prototype – Social relations component functionality – JSON Notation

As can be seen from Figure 14.10, JSON has a hierarchical structure and is almost

like an object with numerous properties which can be set (JSON, n.d.). There are a

number of implementation reasons for making use of JSON.

JSON is lightweight. In essence, it is just a string, so it can be easily used

and processed as a parameter.

It is dynamic. Any object can be represented by a JSON string.

JSON can be processed on both the server and client side. On the client

side, JSON is supported by the jQuery library. Therefore, it is easy for a

JSON object to be serialised and deserialised. From a server perspective,

JSON can also be parsed. In the PerTrust prototype, this has been done by

making use of the JSON.NET (Newton-King, n.d.) library, an open source

.NET library for processing JSON strings. This library is commonly used to

process JSON strings into .NET objects of similar structure.

GetAllSystemUsers(). This method retrieves a list of all registered system users and

returns them to the client so that trust scores can be attributed to them. This list of

users is retrieved from the User table in the database. As with the

AddUserUserTrustScores() method, this method receives a JSON string back from

the server. This JSON string is then parsed, processed, and presented on the

interface.
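
As an illustration of the JSON handling described above, the following hedged sketch deserialises a trust-score payload with the JSON.NET library. The TrustScoreEntry class and its property names are assumptions for illustration only and do not necessarily match the prototype's actual object structure.

```csharp
using System.Collections.Generic;
using Newtonsoft.Json;   // JSON.NET

// Illustrative shape of a single trust rating sent from the client.
public class TrustScoreEntry
{
    public int SourceUserId { get; set; }
    public int TargetUserId { get; set; }
    public int TrustScore { get; set; }   // 1 (lowest) to 10 (highest)
}

public static class TrustScoreParser
{
    // Parses the JSON string received from the client into .NET objects
    // that can then be persisted to the UserUserTrustScores table.
    public static List<TrustScoreEntry> Parse(string json)
    {
        return JsonConvert.DeserializeObject<List<TrustScoreEntry>>(json)
               ?? new List<TrustScoreEntry>();
    }
}
```

For example, the string [{"SourceUserId": 1, "TargetUserId": 2, "TrustScore": 8}] would be parsed into a list containing a single TrustScoreEntry object.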

d) Rating history information component

The last component required for registration within the PerTrust prototype is the rating history information component. This component requires that a user goes through a list of 62 locations and attributes a rating between one and five stars to the ones they have been to.

Upon completion, a user submits these ratings. The tab for a user’s rating history is presented

in Figure 14.11 below.

Figure 14.11: PerTrust prototype – Rating history information component

The implementation followed for the capturing of a user’s rating history is given in the class

diagram in Figure 14.12 below.

Figure 14.12: Class diagram – Rating history information component functionality

The relevant method used for the implementation of capturing a user's rating history is as follows:

AddUserTouristAttractionScores(). This method is similar in implementation to the

social relations component whereby JSON strings are used to persist rating

information in the database. The database table in which this information is persisted

is the UserTouristAttractionScores table. Therefore, this method takes as input a list

of all those tourist attractions rated by the source user. If this is saved successfully, a

value of 1 is returned. If not, then a value of 0 is returned.

Once a user has completed each component in the PerTrust prototype, they are deemed to

have been registered on the PerTrust system.

14.3.2 Group Recommendation

In the PerTrust prototype, the group recommendation process begins when a group is formed. As a

result, the PerTrust prototype caters for a group recommendation through the user selecting the group

recommendation functionality. It is at this point that a single group member is nominated as the group

administrator. It is the responsibility of the group administrator to physically add all members within

the group. The interface through which this is done is presented in Figure 14.13 below.

Figure 14.13: PerTrust prototype – Group recommendation – Adding group members

Once this is done, a user goes to the “Step 2 – Group Recommendation Results” tab and presses the

button to generate a group recommendation. This triggers the group recommendation process. The

methodology to begin this process is illustrated in Figure 14.14 below.

Figure 14.14: Class diagram – Generate group recommendation

The relevant method used for the implementation of generating a group recommendation is as follows:

GenerateGroupRecommendation(). This method manages the group recommendation

process. As input, it takes a list of all the members of the group. Once completed, the final

group recommendation is returned. Therefore, the core responsibility of this method is to

manage the group recommendation processes of preference elicitation and aggregation.

Both of the preference elicitation and aggregation components, as implemented within the PerTrust

prototype, are given below.

a) Preference elicitation component

As detailed in the PerTrust model, the preference elicitation component comprises four main

subcomponents:

1. Registered user retrieval component

2. Similar and trusted user identification component

3. Recommendation retrieval component

4. Top-N recommendation component

The preference elicitation component in the PerTrust prototype is managed in the preference

elicitation service class. This class definition is presented in Figure 14.15 below.

Figure 14.15: Class diagram – Preference elicitation component

In line with the preference elicitation model definition, each subcomponent is discussed with reference

to the PerTrust prototype implementation.

i. Registered user retrieval component

In the registered user retrieval component, a list of all registered users is retrieved so that a

group can be formed. This component was implemented above before the

GenerateGroupRecommendation() method was called.

ii. Similar and trusted user identification component

This component retrieves a list of all system users calculated to be trusted and similar to a

particular group member. The relevant methods for this component implementation are listed

as follows:

o GetTrustedUserList(). This method retrieves a list of trusted users for a group

member, so that a trust-based list of preferences can be determined for them. This

list is retrieved by calling the GetUserTrustData() method in the MoleTrustNetwork

class. This method is defined below.

GetUserTrustData(). This method forms a trust network for a specific user

by combining both explicit trust relationships as well as inferred trust

relationships. The result is a list of all trusted users for a given source user, in

this case the group member, each with an associated trust score.

This method is implemented by querying the UserUserTrustScores table in

the PerTrust database for a list of all users explicitly trusted by the source

user and above the predefined trust score threshold. These users are

considered to be the first level of users. Thereafter, the database table is

queried again for all those users explicitly trusted by the first level of users.

Again, these users must have trust scores above the threshold value. These

users are the second level of users. These lists of first level and second level

users are returned with their associated trust scores. Once these lists have

been retrieved, the method infers a trust rating for the second level of users

by using the MoleTrust algorithm.

o GetSimilarityUserList(). This method works similarly to the GetTrustedUserList()

method. However, instead of retrieving a top-N list of trusted users, a top-N list of

similar users is retrieved. This list is retrieved by calling the GetUserCorrelationData()

method in the CollaborativeFiltering class. This method is detailed below.

GetUserCorrelationData(). The purpose of this method is to determine the

similarity value between the source user and multiple other system users.

This is achieved by following a two-step process.

1. The UserTouristAttractionScores database table is referenced. This

table is queried for all tourist attractions not already rated by the

source user and having a rating score above a specified threshold.

2. The resultant list is used as a basis to determine similarity between

the source user and each user contained in the list. This calculation

is done in the CalculatePearsonCorrelation() method where the

Pearson correlation coefficient is determined between two users

based on their respective rating histories. Once this has been done

for each user, all users whose similarity values are below a

predefined threshold are filtered out from the list. This final similarity

user list is returned.
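
The Pearson correlation step referenced in point 2 above can be sketched as follows. This is a generic implementation over the items co-rated by two users and is not the prototype's CalculatePearsonCorrelation() method itself; the parameter names are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SimilarityCalculator
{
    // Pearson correlation over the items rated by both users.
    // Ratings are keyed by tourist attraction id; returns 0 when there is
    // insufficient overlap or no variance in either user's co-ratings.
    public static double PearsonCorrelation(
        IReadOnlyDictionary<int, double> ratingsA,
        IReadOnlyDictionary<int, double> ratingsB)
    {
        var common = ratingsA.Keys.Intersect(ratingsB.Keys).ToList();
        if (common.Count < 2) return 0.0;

        double meanA = common.Average(i => ratingsA[i]);
        double meanB = common.Average(i => ratingsB[i]);

        double covariance = 0.0, varianceA = 0.0, varianceB = 0.0;
        foreach (int i in common)
        {
            double dA = ratingsA[i] - meanA;
            double dB = ratingsB[i] - meanB;
            covariance += dA * dB;
            varianceA += dA * dA;
            varianceB += dB * dB;
        }

        if (varianceA == 0.0 || varianceB == 0.0) return 0.0;
        return covariance / Math.Sqrt(varianceA * varianceB);
    }
}
```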

iii. Recommendation retrieval component

This component derives a top-N list of recommendations for each group member based on

their similar and trusted user list. The relevant methods for this component implementation

are as follows:

o GetUserRatingHistoryList(). This method retrieves a list of all recommendation

items already rated by the source user. The relevance of this method is to ensure that

none of the top-N preference lists returned contains those recommendation items

already rated by the user. This data is obtained from the database in the

UserTouristAttractionScores table.

o GetTrustedUserTopNPreferences(). This method retrieves a list of top-N trust-

based preferences for the relevant group member. This top-N preference list is

obtained by calling the GetTrustedUserTopNPreferences() method. This method is

detailed below.

GetTrustedUserTopNPreferences(). The purpose of this method is to return

a top-N list of trust-based rating preferences for the user. This is again

implemented in two steps.

1. The list of trusted users obtained previously is used as the basis to

reference the UserTouristAttractionScores database table for the

trusted user’s top-N preferences. This top-N list is determined by

multiplying the trust score by the rating score attributed by the trusted

user. This list is then ordered from highest to lowest with the top-N

results being returned from the database.

2. The consequent list of top-N preferences is then used as a basis to

determine a predicted rating for the source user. Consequently, this

method cycles through each returned record and calls the

GetEnsembleTrustCFRating() method in the EnsembleTrustCF class.

This method calculates a trust-based rating prediction by making use

of the EnsembleTrustCF algorithm. It is this final list which is

returned.

o GetSimilarUserTopNPreferences(). This method is similar to the

GetTrustedUserTopNPreferences() method defined above, with the exception that a top-N list of similarity-based preferences is returned instead. The method used to retrieve this

top-N similarity based list is the GetSimilarityUserTopNPreferences() method. This

method is detailed below.

GetSimilarityUserTopNPreferences(). This method works precisely the

same way as the GetTrustedUserTopNPreferences() method, with two main

differences. First, when the UserTouristAttractionScores table is referenced, the rating score is weighted with similarity instead of trust. Second, a predicted rating is determined for each top-N preference by making use of the GetCollaborativeFilteringRating() method. Outside of these two differences, the methodology followed is precisely the same. This list of similarity-based preferences is then returned.

o GetIndividualPreferencesEnsembleTrustCF(). This is the main method which

manages the preference elicitation process in the PerTrust prototype. As input, this method takes a single group member, the number of top-N preferences to return,

as well as the similarity threshold. The purpose of this method is to elicit a top-N set

of preferences for each group member for the purposes of generating a group

recommendation.
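
The ordering step described for GetTrustedUserTopNPreferences() can be illustrated with the following sketch, which ranks candidate items by the product of the trusted user's trust score and rating, and then attaches a predicted rating for the source user. The EnsembleTrustCF prediction is represented only by a delegate, since that algorithm is defined elsewhere in this research; all names are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TrustedPreferenceRetrieval
{
    // Orders candidate attractions by the trusted user's trust score multiplied
    // by that user's rating, keeps the top N, and attaches a predicted rating
    // for the source user through the supplied prediction function.
    public static List<(int AttractionId, double PredictedRating)> GetTopN(
        IEnumerable<(int AttractionId, double TrustScore, double Rating)> candidates,
        int n,
        Func<int, double> predictRatingForSourceUser)
    {
        return candidates
            .OrderByDescending(c => c.TrustScore * c.Rating)
            .Take(n)
            .Select(c => (c.AttractionId, predictRatingForSourceUser(c.AttractionId)))
            .ToList();
    }
}
```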

iv. Top-N recommendation component

In this component, a list of top-N preferences is determined through the implementation of two

steps.

The trust-based preference list and the similarity-based preference list are merged together and sorted from the highest rating score to the lowest rating score.

The top-N preferences are then obtained from this list. This is done by taking the highest rated tourist attraction and removing any duplicates of it from the merged list. Thereafter, a similar process is followed until N preferences are reached. This final list is then returned as the top-N preferences for a group member.
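
A minimal sketch of the merge-and-deduplicate step described above is given below; the tuple-based item representation is a simplification for illustration only.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class TopNMerger
{
    // Merges the trust-based and similarity-based preference lists, keeps the
    // highest rated occurrence of each tourist attraction, and returns the
    // top N items ordered from highest to lowest rating.
    public static List<(int AttractionId, double Rating)> Merge(
        IEnumerable<(int AttractionId, double Rating)> trustBased,
        IEnumerable<(int AttractionId, double Rating)> similarityBased,
        int n)
    {
        return trustBased
            .Concat(similarityBased)
            .GroupBy(p => p.AttractionId)
            .Select(g => g.OrderByDescending(p => p.Rating).First())
            .OrderByDescending(p => p.Rating)
            .Take(n)
            .ToList();
    }
}
```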

b) Aggregation component

As detailed in the PerTrust model, the aggregation component comprises four main

subcomponents:

1. Rating matrix formation component

2. Personality and trust influence component

3. Aggregation model component

4. Satisfaction component

The aggregation process in the PerTrust prototype is managed in the Preference Elicitation Service

class. This class definition is presented in Figure 14.16 below.

Figure 14.16: Class diagram – Aggregation

In line with the aggregation component model definition, each subcomponent is discussed with reference to the PerTrust prototype implementation. It is noted that each of these components is initiated via

the GetTopNGroupPreferences() method. This is the method which initiates and manages the final

group recommendation process.

i. Rating matrix formation component

In this component, the top-N recommendation lists of each individual group member are

merged together to form a rating matrix. This rating matrix contains a list of all top-N tourist

attraction preferences for each group member as well as each group member’s

corresponding rating score for that particular tourist attraction. In order to obtain the rating

score for each tourist attraction, the prototype distinguishes between two possible scenarios.

The first scenario is that the user has already been to a particular location. The

prototype verifies this by obtaining the rating history of the group member and

determining whether the particular tourist attraction is included within their rating

history. This rating history is obtained by calling the GetUserRatingHistoryList()

method.

The second scenario is that the user has never been to that location and a rating

needs to be determined for them. Should this be the case, a rating score is calculated

by the PerTrust prototype by calling the GetEnsembleTrustCFRating() method. This

method determines a trust-based rating prediction for a given tourist attraction and

user.
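
The two scenarios above can be combined into a single sketch of the rating matrix formation. The delegate parameter stands in for the trust-based prediction performed by GetEnsembleTrustCFRating(); all other names and data structures are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class RatingMatrixBuilder
{
    // Builds a rating matrix: for every group member and every tourist
    // attraction appearing in any member's top-N list, the explicit rating is
    // used if the member has already rated the item; otherwise a trust-based
    // prediction is requested through the supplied delegate.
    public static Dictionary<int, Dictionary<int, double>> Build(
        IReadOnlyDictionary<int, List<int>> topNPerMember,                // memberId -> attraction ids
        IReadOnlyDictionary<int, Dictionary<int, double>> ratingHistory,  // memberId -> (attractionId -> rating)
        Func<int, int, double> predictRating)                             // (memberId, attractionId) -> rating
    {
        var allAttractions = topNPerMember.Values.SelectMany(x => x).Distinct().ToList();
        var matrix = new Dictionary<int, Dictionary<int, double>>();

        foreach (int memberId in topNPerMember.Keys)
        {
            var row = new Dictionary<int, double>();
            foreach (int attractionId in allAttractions)
            {
                row[attractionId] =
                    ratingHistory.TryGetValue(memberId, out var history) &&
                    history.TryGetValue(attractionId, out double rating)
                        ? rating
                        : predictRating(memberId, attractionId);
            }
            matrix[memberId] = row;
        }
        return matrix;
    }
}
```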

ii. Personality and trust influence component

In this component, the rating matrix formed for the group is affected with both personality and

trust. However, before each group member’s rating scores can be affected with personality

and trust, a number of prerequisites must be met. These prerequisites entail

obtaining a list of both trusted and similar users. This is done by calling the

GetTrustedUserList() and GetSimilarUserList() methods in the PreferenceElicitationService

class. Once both of these lists have been retrieved, they are passed as input to the

GetGroupPersonalityTrustAffectedRatings() method. This method is detailed below.

o GetGroupPersonalityTrustAffectedRatings(). The purpose of the method is to

determine a group-based rating prediction for each group member for each tourist

attraction. This is implemented by using the delegation based rating prediction

algorithm (Quijano-Sanchez et al., 2013). This algorithm makes use of the trust and

similarity values in the respective lists input into the method as well as the conflict

mode weight value to determine a group-based rating prediction. The conflict mode weight (CMW) data (Quijano-Sanchez et al., 2013; Recio-García et al., 2009) for the

group is determined by making use of the GetConflictModeWeightValue() method in

the ConflictModeWeight class. The final group-based rating prediction is the output of

this method.

iii. Aggregation model component

In this component, the aggregation method is executed to determine a group

recommendation. As identified in the evaluation chapter, the plurality voting aggregation

model is the nominated aggregation model to be implemented in the PerTrust prototype.

Therefore, the method to perform this aggregation is the GetPluralityVotingAggregation()

method. This method is detailed below.

o GetPluralityVotingAggregation(). The purpose of this method is to apply the

plurality voting aggregation model to a rating matrix. The result of the application of

this aggregation model is a final list of preferences for a group. This list of

preferences is the output of this method.
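
The following is a hedged sketch of one common plurality voting formulation, in which every group member repeatedly votes for their highest rated remaining attraction and the attraction with the most votes is appended to the group list. The prototype's GetPluralityVotingAggregation() method may differ in detail from this sketch, and the data structures shown are illustrative assumptions.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class PluralityAggregation
{
    // In each round every group member votes for their highest rated attraction
    // that has not yet been selected; the attraction receiving the most votes
    // is appended to the group recommendation list.
    public static List<int> Aggregate(
        IReadOnlyDictionary<int, Dictionary<int, double>> ratingMatrix, // memberId -> (attractionId -> rating)
        int listSize)
    {
        var selected = new List<int>();
        var candidates = new HashSet<int>(ratingMatrix.Values.SelectMany(r => r.Keys));

        while (selected.Count < listSize && candidates.Count > 0)
        {
            var voteCounts = new Dictionary<int, int>();
            foreach (var memberRatings in ratingMatrix.Values)
            {
                var available = memberRatings.Where(kv => candidates.Contains(kv.Key)).ToList();
                if (available.Count == 0) continue;          // member has no remaining candidates
                int vote = available.OrderByDescending(kv => kv.Value).First().Key;
                voteCounts[vote] = voteCounts.TryGetValue(vote, out int count) ? count + 1 : 1;
            }

            if (voteCounts.Count == 0) break;
            int winner = voteCounts.OrderByDescending(kv => kv.Value).First().Key;
            selected.Add(winner);
            candidates.Remove(winner);
        }
        return selected;
    }
}
```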

iv. Satisfaction component

In this component, the satisfaction scores of both the group and each individual group member for the given group recommendation are calculated. This is determined by calling the

GetGroupSatisfactionScores() method. This method returns a satisfaction score for the group

as well as for each group member individually. As per the model definition, individual scores

are calculated based on Masthoff’s (2004) individual satisfaction function, whereas group

satisfaction is determined by making use of Quijano-Sanchez et al.'s (2013) group

satisfaction function.
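
The following sketch is a heavily simplified reading of the satisfaction measures as they are described in this dissertation and is not the published Masthoff (2004) or Quijano-Sanchez et al. (2013) functions; it is included only to make the calculation steps concrete, and its method names are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SatisfactionMeasures
{
    // Simplified individual satisfaction, following the description used in
    // this research: the member's ratings for the recommended items relative
    // to the best ratings that member could have received.
    public static double IndividualSatisfaction(
        IReadOnlyList<double> ratingsForRecommendedItems,
        IReadOnlyList<double> ratingsForMembersTopItems)
    {
        double ideal = ratingsForMembersTopItems.Sum();
        return ideal == 0 ? 0 : ratingsForRecommendedItems.Sum() / ideal;
    }

    // Simplified group satisfaction: the spread (standard deviation) of the
    // individual satisfaction scores around their mean; lower values indicate
    // a more evenly satisfied group.
    public static double GroupSatisfactionSpread(IReadOnlyList<double> individualSatisfactions)
    {
        double mean = individualSatisfactions.Average();
        double variance = individualSatisfactions
            .Select(s => (s - mean) * (s - mean))
            .Average();
        return Math.Sqrt(variance);
    }
}
```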

The final presentation of the group recommendations in the PerTrust prototype is as per Figure 14.17

below.

Figure 14.17: PerTrust prototype – Group recommendation – Final group recommendation

14.4 Conclusion

In this chapter, the PerTrust prototype was presented and detailed from both a visual perspective as

well as an implementation perspective. A background was initially given with regards to the

technologies used as well as the client and server components. Thereafter, the prototype was

described with reference to the PerTrust model, and the various classes used in its implementation were explained.

In the next chapter, this research is concluded.

Chapter 15 Conclusion

15.1 Introduction

This research presents a personality and trust-based group recommender model, PerTrust. The focus

of this dissertation is how a group recommender system can make use of trust and personality to

determine group recommendations. Therefore, a framework for implementing trust in recommender

systems was motivated. The defined trust framework uses the MoleTrust algorithm to infer trust between strangers and the EnsembleTrustCF algorithm to determine a trust-based

predicted rating for a recommendation item.

Additionally, a number of group processes were presented. Specifically, the preference elicitation,

rating prediction, aggregation, and satisfaction processes were detailed and explained.

The preference elicitation process presented the implementation of trust in determining a top-

N list of recommendations for each group member.

The rating prediction process focused on the implementation of personality and trust in

affecting an individual’s personal rating scores for recommendations with the social influences

of the group. As a result, a personality implementation framework was defined through the

Thomas-Kilmann Instrument (TKI) personality test and the conflict mode weight (CMW) value (Quijano-Sanchez et al., 2013; Recio-García et al., 2009). The delegation-based rating

prediction algorithm, by Quijano-Sanchez et al. (2013), implements trust and personality for a

predicted rating.

The aggregation process focused on the various aggregation models used to aggregate the

top-N personality and trust affected recommendation lists of each group member. After an

evaluation, the plurality voting model was nominated as the candidate aggregation algorithm

for the PerTrust model.

The satisfaction process focused on the measurement of both individual and group

satisfaction. Masthoff’s (2004) satisfaction function as well as Quijano-Sanchez’s (2013)

group satisfaction function were motivated as the candidate individual and group satisfaction

functions for this research.

Thereafter, the proposed architecture, model, evaluation, and prototype demonstrated the viability of

implementing trust and personality within a group recommender model.

In this chapter, the conclusion begins with a review of the research objectives defined in chapter 1.

Next, the contributions of this research are presented. Thereafter, the limitations of this research are

considered. Lastly, areas of further research are examined.

15.2 Reviewing the research objectives

The purpose of this dissertation is to present a generic group recommender model which considers

and implements both trust and personality. Therefore, in order to assess whether the proposed model

achieves this, the model is evaluated against the research objectives in chapter 1. Each of these

questions is evaluated below.

a) What are the specific requirements of group recommender systems which have to be

considered?

The requirements for a trust-based group recommender system were identified in chapter 2.

These identified requirements are presented below.

Satisfaction. A group recommender system must calculate satisfactory

recommendations and be able to measure satisfaction for group members individually and for the group collectively.

Implementation of social factors. A group recommender system must consider the

social influences of the group when calculating a group recommendation. By doing

this, the needs of the group are considered.

Inference of trust. A group recommender system must consider how trust can be

determined between two users who are strangers.

Implementation of trust. A group recommender system must consider the trust

relationships within a group context as well as outside of the group context. This aids

the system in leveraging trust and determining more accurate group

recommendations.

Generic implementation. The group recommender model must be generic to any

group recommendation context.

b) How do current group recommendation models aim to meet these requirements?

Given the list of requirements for group recommender systems, there are many different

methodologies followed in catering for these requirements.

Satisfaction. Generally, group recommender systems attempt to cater for

satisfaction through the application of an appropriate aggregation model. However, it

is noted that this approach does not often consider the specific social influence of each

individual system user (Chen et al., 2008; Quijano-Sanchez et al., 2013). Other

recommender systems additionally consider social factors, such as those presented

in chapter 8.

In terms of measuring satisfaction, chapter 10 detailed how different satisfaction functions are used to measure both individual and group satisfaction in group recommender systems. Four individual satisfaction functions were presented.

o Expected search length (ESL) measure (Quijano-Sanchez et al., 2013). Satisfaction is based on how high up the group recommendations appear in a user's personal list of recommendation items; the higher up they appear, the more satisfied the user is calculated to be.


o Satisfaction measure by Carvalho and Macedo (2013). Satisfaction is

measured based upon an average of a user’s personal rating scores for each

group recommendation.

o Mean absolute error (MAE) measure by Garcia et al. (2012). Satisfaction is based on the mean absolute difference between the group member's personal rating scores and the rating scores returned by the group recommendation.

o Masthoff's (2004) individual satisfaction function. Satisfaction is based on how closely a group member's personal explicit or inferred ratings for a given list of group recommendations match the maximum user rating scores per group recommendation. This definition is extended to cater for linearity.

Additionally, two group satisfaction functions were presented; a brief sketch of both types of measure follows this list.

o Averaging individual satisfaction (Bourke et al., 2011; Carvalho &

Macedo, 2013; Garcia et al., 2012; Kim et al., 2010). Group satisfaction is

measured as the average of each group member’s individual satisfaction

score.

o Group satisfaction measure (Quijano-Sanchez et al., 2013). Group satisfaction is measured using the standard deviation of the individual satisfaction scores of the group members around the average satisfaction measure for the group as a whole.
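
As a rough illustration of these measures, the sketch below (Python, with hypothetical data; the published functions may normalise and weight differently) computes an averaging-style individual satisfaction score per member and then both an average-based and a dispersion-based group measure.

    import statistics

    def individual_satisfaction(member_ratings, group_items):
        # Average of the member's personal rating scores for the items in the
        # group recommendation (in the spirit of Carvalho and Macedo's measure;
        # the published functions normalise and scale differently).
        scores = [member_ratings[item] for item in group_items if item in member_ratings]
        return statistics.mean(scores) if scores else 0.0

    def group_satisfaction_average(individual_scores):
        # Group satisfaction as the plain average of the individual scores.
        return statistics.mean(individual_scores)

    def group_satisfaction_dispersion(individual_scores):
        # Dispersion-style measure: the standard deviation of the individual
        # scores around their mean, so a lower value indicates a more evenly
        # satisfied group (a simplified reading of the Quijano-Sanchez et al.
        # measure).
        return statistics.pstdev(individual_scores)

    # Example: three members and a two-item group recommendation
    ratings = {
        "alice": {"i1": 4, "i2": 5},
        "bob": {"i1": 3, "i2": 2},
        "carol": {"i1": 5, "i2": 4},
    }
    group = ["i1", "i2"]
    scores = [individual_satisfaction(r, group) for r in ratings.values()]
    print(scores, group_satisfaction_average(scores), group_satisfaction_dispersion(scores))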

Implementation of social factors. In chapter 8, a personality implementation framework was defined. That chapter reviewed a number of group recommender methods for implementing social factors.

o Social descriptors (Gartrell et al., 2010). Social factors are considered by

measuring how close the relationship is between two system users. Based on

this result, a suitable aggregation model is applied thereafter.

o Social influences by Bourke et al. (2011). Social factors are considered as a weighting. A number of social influences, such as similarity, are used to calculate a suitable weighting representative of the social considerations in the group.

o Social weighting approach by Berkovsky and Freyne (2010). Social

factors are considered as weightings based on social roles, how engaged a

group member is with other group members in the same social role, and how

engaged a group member is within their family unit.

o The TKI test (Quijano-Sanchez et al., 2013). Social factors are considered

by completing a personality test, the TKI test, and then using the resultant

personality type to determine a conflict mode weight (CMW). This CMW

value is used as a factor in calculating a group recommendation.


Inference of trust. In chapter 5, five trust-based algorithms were presented. Four of these algorithms infer a trust value between strangers; a simplified propagation sketch follows this list.

o TidalTrust (Golbeck, 2005). Trust is inferred by propagating explicit trust values along trust network paths between users. The shortest paths with the highest trust ratings are used, and the trust values along these paths are aggregated to produce the inferred trust value.

o MoleTrust (Massa & Avesani, 2007a). Trust is inferred in a similar way to the TidalTrust algorithm. However, improvements were made in the form of a trust propagation horizon and a trust threshold value. The horizon ensures that trust is never propagated further than a set number of steps from the source user, while the threshold ensures that each trust value along the trust network is only considered if it is at or above the threshold.

o Profile and item level trust (O'Donovan & Smyth, 2005). Profile-level trust is inferred from the number of correct rating scores a user has contributed versus the number of incorrect ones. Item-level trust applies the same idea, but restricted to a specific recommendation item.

o Structural trust inference (O’Doherty, 2012). Trust is inferred by making

use of a bipartite graph and analysing the commonly rated items which link

users. The inferred level of trust is based on the Jaccard index as well as the

popularity measure.
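
As a simplified illustration of the horizon and threshold ideas behind these metrics, the following Python sketch propagates trust level by level from a source user (hypothetical data; it is a loose reading of the MoleTrust approach, not the published algorithm):

    def infer_trust(trust_edges, source, target, horizon=3, threshold=0.6):
        # trust_edges: dict mapping (truster, trustee) -> trust value in [0, 1].
        # Trust is propagated level by level from the source; a user's inferred
        # trust is the trust-weighted average of the statements issued about
        # them by already-reached users. Only edges at or above the threshold
        # are followed, and propagation stops after `horizon` steps. This is a
        # simplified reading of the MoleTrust idea, not the published algorithm.
        trust = {source: 1.0}
        frontier = {source}
        for _ in range(horizon):
            weighted, weights = {}, {}
            for (truster, trustee), value in trust_edges.items():
                if truster in frontier and value >= threshold and trustee not in trust:
                    weighted[trustee] = weighted.get(trustee, 0.0) + trust[truster] * value
                    weights[trustee] = weights.get(trustee, 0.0) + trust[truster]
            frontier = set()
            for user, total in weighted.items():
                trust[user] = total / weights[user]
                frontier.add(user)
            if target in trust or not frontier:
                break
        return trust.get(target)  # None if the target was never reached

    edges = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "d"): 0.4, ("d", "c"): 0.9}
    print(infer_trust(edges, "a", "c"))  # 0.8: only the path a -> b -> c passes the threshold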

Implementation of trust. In chapter 5, five trust-based algorithms were presented. Each of these algorithms implements trust in a different way and is adopted within trust-based recommender systems. These algorithms were evaluated both against a set of requirements and empirically. The four algorithms that implement trust to calculate recommendations are listed below, followed by a sketch of the weighting idea they share.

o Trust-based weighted mean (Golbeck, 2005; Golbeck & Hendler, 2006; Victor, 2010). In this algorithm, a rating prediction is determined by making use of inferred or explicit trust values, where inferred trust values are calculated through the TidalTrust algorithm. The trust values are used both to weight and to normalise the ratings being combined.

o Trust-based collaborative filtering (Massa & Avesani, 2007a; Victor,

2010). This algorithm is similar to the standard collaborative filtering

algorithm, except that the similarity measure is replaced with a trust value.

Inferred trust values are calculated through the MoleTrust algorithm.

o Trust-based filtering (O’Donovan & Smyth, 2005; Victor, 2010). In this

algorithm, trust is used as a means to exclude all untrusted users from a

rating prediction calculation.


o EnsembleTrustCF (Victor, 2010). In this algorithm, trust is used together with similarity as the primary weighting factor when determining a rating prediction.
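
The common idea across these algorithms can be sketched as a trust-weighted prediction, where an explicit or inferred trust value takes the place of (or supplements) the usual similarity weight. The snippet below is a minimal Python illustration with hypothetical data, not any of the published algorithms verbatim:

    def trust_weighted_prediction(trust, ratings, item):
        # Predict a rating for `item` as the trust-weighted mean of the ratings
        # given by trusted users: the core idea behind the trust-based weighted
        # mean. Trust-based collaborative filtering additionally mean-centres
        # each rater's score, and EnsembleTrustCF combines trust with a
        # similarity weight; those refinements are omitted here.
        # trust:   dict mapping rater -> explicit or inferred trust value
        # ratings: dict mapping rater -> {item: rating}
        numerator = denominator = 0.0
        for rater, t in trust.items():
            if t > 0 and item in ratings.get(rater, {}):
                numerator += t * ratings[rater][item]
                denominator += t
        return numerator / denominator if denominator else None

    trust = {"bob": 0.9, "carol": 0.3}
    ratings = {"bob": {"i1": 4.0}, "carol": {"i1": 2.0}}
    print(trust_weighted_prediction(trust, ratings, "i1"))  # 3.5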

Generic implementation. Most group recommender models are generic in implementation, as recommendation items can be anything from books and movies to tourist attractions. A group recommendation can therefore typically be determined independently of the specifics of the recommendation item.

c) How does the PerTrust prototype meet the specific requirements of group

recommendation?

The PerTrust model has catered for the specific requirements in the following ways.

Satisfaction. Chapters 7, 8, 9, and 10 detailed a group recommendation process to ensure satisfactory trust- and personality-based recommendations. The ways in which satisfaction is addressed in these chapters are detailed below.

o Implementation of trust. Because trust is a personal measure of similarity (Golbeck, 2005; Quijano-Sanchez et al., 2013), it is implemented in both the preference elicitation process and the rating prediction process for group recommendation. These implementations are detailed in chapters 7 and 8, and the evaluation in chapter 13 confirmed the positive impact of this approach.

o Implementation of personality. To ensure that each user's social preferences and personality are considered in a group recommendation, personality was implemented as part of the group recommendation process, as detailed in chapter 8. Although the evaluation of the group recommender model in chapter 13 produced inconclusive results for personality, the model does attempt to cater for this concern.

o Implementation of a relevant aggregation model. In order to identify an aggregation model which is fair and considerate of all group members, every defined aggregation model was evaluated within the PerTrust model in chapter 13. The result of this evaluation was that the plurality voting aggregation model was nominated as the candidate aggregation model for the PerTrust model.

Additionally, chapter 9 motivated a candidate individual and group satisfaction measurement function. For individual satisfaction, Masthoff's (2004) individual satisfaction function was selected; for group satisfaction, Quijano-Sanchez et al.'s (2013) group satisfaction measure was selected.

Implementation of social factors. In chapter 8, the TKI test approach, as proposed by Quijano-Sanchez et al. (2013) and Recio-Garcia et al. (2009), was motivated as the implementation to determine the personality of each group member. Based on the


personality type, a conflict mode weight (CMW) value is used as a numerical weighting for personality. This is then implemented in the motivated delegation-based rating prediction algorithm (Quijano-Sanchez et al., 2013) so that ratings are influenced by both personality and trust.

Inference of trust. In chapters 5 and 6, the MoleTrust algorithm by Massa and

Avesani (2007a) was evaluated and chosen as the method of inferring trust.

Implementation of trust. In chapters 5 and 6, the EnsembleTrustCF algorithm by

Victor (2010) was evaluated and chosen as the method for implementing trust in the

group recommendation process.

Generic implementation. The PerTrust model is not linked to or dependent upon a specific context. This makes it generic, as the components presented in the PerTrust model can be used in any implementation.

d) How can trust and personality be implemented within the process of group

recommendation?

The implementation of trust and personality within the group recommendation process was detailed in chapter 8 on rating prediction. In that chapter, the delegation-based rating prediction algorithm, as defined by Quijano-Sanchez et al. (2013), was selected as the means of using both trust and personality to influence a group member's rating for a specific recommendation item. The general idea is sketched below.
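
The sketch below illustrates the general idea only: a member's own predicted rating is nudged towards the ratings of the other group members, with each influence weighted by how much the member trusts them and by the difference in conflict mode weights. The weighting scheme and data structures are assumptions made for illustration; the exact delegation-based formula is defined by Quijano-Sanchez et al. (2013).

    def delegation_based_rating(user, item, predicted, trust, cmw):
        # Adjust `user`'s own predicted rating for `item` using the ratings of
        # the other group members, each weighted by how much `user` trusts them
        # and by the difference in conflict mode weights (CMW). The weighting
        # below is illustrative only; the exact delegation-based formula is
        # defined by Quijano-Sanchez et al. (2013).
        # predicted: dict member -> {item: predicted rating}
        # trust:     dict member -> {member: trust value in [0, 1]}
        # cmw:       dict member -> conflict mode weight in [0, 1]
        own = predicted[user][item]
        adjustment = weight_sum = 0.0
        for other, rating_map in predicted.items():
            if other == user or item not in rating_map:
                continue
            # More assertive members (higher CMW) give less ground to others.
            influence = trust[user].get(other, 0.0) * (1.0 + cmw[other] - cmw[user])
            adjustment += influence * (rating_map[item] - own)
            weight_sum += influence
        if weight_sum == 0.0:
            return own
        return own + adjustment / (1.0 + weight_sum)

    predicted = {"u": {"i1": 4.0}, "v": {"i1": 2.0}}
    trust = {"u": {"v": 0.8}, "v": {"u": 0.8}}
    cmw = {"u": 0.7, "v": 0.3}  # u is more assertive than v
    print(delegation_based_rating("u", "i1", predicted, trust, cmw))  # roughly 3.35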

e) How can trust be determined between two system users who do not know one

another?

As per the recommender system objective to infer trust, trust is calculated in the PerTrust

model by making use of the MoleTrust algorithm (Massa & Avesani, 2007a). This algorithm

was evaluated and motivated in chapters 5 and 6.

f) What is the effect of trust and personality in the group recommendation process?

The effect of trust and personality within the PerTrust model was evaluated in chapter 13. The

conclusions are presented below.

The implementation of trust results in increased group recommender model performance. Specifically, the implementation of trust as part of the preference elicitation component resulted in better group recommender performance in comparison to the collaborative filtering-based group recommender models. Additionally, performance generally remained level or improved when trust was implemented in the trust-based group recommender models. Therefore, the personality and trust influence component benefits from the trust implementation.

The implementation of personality was inconclusive. For this particular evaluation, personality


worsened the results in the collaborative filtering-based models, but improved them for the trust-based models. In Quijano-Sanchez et al.'s (2013) evaluation, it was concluded that personality did assist their group recommender model. Therefore, further evaluations would have to be conducted outside of the scope of this research.

g) How can the satisfaction with a group recommendation be determined both

individually and collectively as a group?

As per the satisfaction requirement in the list of group recommender objectives, individual and group satisfaction was discussed in chapter 9. In that chapter, Masthoff's (2004) function was selected for measuring individual satisfaction, and Quijano-Sanchez et al.'s (2013) group satisfaction measure was selected for measuring group satisfaction.

15.3 Research contributions

In this section, the contributions of the PerTrust model are presented. These contributions relate to aspects either not previously identified in the research or only addressed in a limited way. The contributions of the model are as follows:

a) Implementation of a trust-based preference elicitation process for the purposes of

group recommendation.

This is one of the most important contributions of the PerTrust model. Most other group

recommender models make use of collaborative filtering to elicit preferences for group

members (Baltrunas et al., 2010; Cantador & Castells, 2012; Kim et al., 2010; Masthoff, 2011;

Pera & Ng, 2012). However, as detailed in chapter 2, collaborative filtering has a number of inherent weaknesses, especially for sparse datasets (Massa & Avesani, 2009; Quan & Hinze, 2008). By making use of trust, many of these weaknesses can be overcome.

As a result, through the application of trust-based methods in the preference elicitation process, the PerTrust model is able to derive more personalised and reliable recommendations for each group member. This, in turn, leads to a better group recommendation, because the inputs provided by the preference elicitation process are of higher quality (Quijano-Sanchez et al., 2013).

b) Combination of traditional trust-based methods with personality.

Another key contribution of the PerTrust model is the application of both trust and personality in the group recommendation process. Few group recommender models directly cater for the social influences and relationships within a group. Moreover, to the knowledge of the author, the only group recommender system which implements both personality and trust is that of Quijano-Sanchez et al. (2013).


Therefore, the contribution of the PerTrust model is that it caters for personality, through the application of the CMW value (Quijano-Sanchez et al., 2013; Recio-Garcia et al., 2009), and for trust, through the application of the EnsembleTrustCF algorithm (Victor, 2010).

The unique contribution of the PerTrust model is the application of trust through traditional trust-based methods, as opposed to the approach taken by Quijano-Sanchez et al. (2013), who infer trust from a number of Facebook-based factors. Their approach was not used for two reasons. Firstly, it is limited in context, since it requires a Facebook profile and is restricted to one's immediate social contacts. Secondly, it is less reliable, to an extent, as trust values are inferred from Facebook factors rather than explicit trust values. As a result, a traditional trust-based method was adopted for the PerTrust model.

c) Evaluation with all aggregation models.

In this research, the PerTrust model was evaluated with reference to all of the defined aggregation models. Typically, research on group recommendation is conducted with only one or a few of these aggregation models. It is noted that such an evaluation was identified as a point of future research in the work by Quijano-Sanchez et al. (2013).

d) Comparative analysis of trust-based algorithms with reference to a single scenario

application.

Many researchers introduce and discuss trust-based algorithms without consideration of the lay person. A contribution of this research is therefore chapter 5, in which each trust-based algorithm is detailed and evaluated with reference to a single scenario application.

15.4 Limitations of the research

In this section, a number of limitations of this research are presented; these can be overcome through further research. The limitations identified are the cold start issue, the accuracy of trust-based methods, the implementation of personality, explanation and consensus, and the implementation of memory.

a) Cold start issue

While the implementation of trust in the PerTrust model does cater for the cold start issue to some extent, the model suffers in situations where there are new users. In these situations, it is impossible to determine a personalised list of preferences for such a user until they have either issued a trust statement or rated a number of items. There are means of overcoming this issue; these are discussed in the next section as they are areas for further research.


b) Accuracy of trust-based methods

In Victor’s (2010) evaluation of trust-based methods, presented in chapter 6 of this research,

it was noted that in densely populated datasets, there is little difference in accuracy between

a collaborative filtering-based recommendation and a trust-based recommendation. As a

result, further research would have to be done to evaluate other methods for calculating and

leveraging trust in recommender systems.

c) Implementation of personality

As noted earlier in this chapter, the evaluation of the PerTrust model showed that the effect of personality was inconclusive. While a similar implementation of personality in Quijano-Sanchez et al.'s (2013) evaluation showed more conclusive results, the implementation of personality in the PerTrust model would have to be evaluated further.

d) Explanation and consensus

In this research, a number of methods were used to work towards consensus about a group recommendation: the implementation of trust and personality, trust-based preference elicitation, the evaluation of a relevant aggregation model, and the implementation of a method to calculate individual and group satisfaction. All of these considerations contribute, but none is fully focused on consensus. Additionally, explanations were not considered in the PerTrust model. Dedicated explanation and consensus processes would give group members greater trust in the system and make it easier for the group to come to a final decision with regard to a group recommendation.

These two processes are listed as areas of further research and are discussed further in the next section.

e) Memory

In the PerTrust model, each group recommendation is generated in isolation. Any group recommendations generated thereafter for the same group therefore do not consider previous group recommendations (Quijano-Sanchez et al., 2013). As a result, a means of considering historical group recommendations is needed; the mechanism which resolves this is called memory (Quijano-Sanchez et al., 2013). This is important because group members may become gradually dissatisfied over time, so the satisfaction levels of each group member would have to be evaluated consistently (Masthoff, 2011; Quijano-Sanchez et al., 2013).

Methods of overcoming this are discussed in the next section.


15.5 Further research

In this section, areas of further research are identified as a means to improve the PerTrust model and overcome a number of the limitations highlighted in the previous section. These areas of further research are the cold start issue, explanation and consensus, memory, and personality.

a) Cold start issue

A consideration for further research is how to cater for new users other than by having them attribute rating scores or issue trust statements. An interesting solution is proposed by Victor (2010), who recommends that new users link to one of three types of system users. The two user types relevant to the PerTrust model are:

System users who have rated many recommendation items.

System users who are linked to many other system users via trust.

Another solution is proposed by Masthoff (2011), in which group recommendations for new system users are calculated based on what is satisfactory to the rest of the group. This method is based on the assumption that there is a measure of similarity between the new system user and another member of the group (Masthoff, 2011). In this way, the recommender system learns the preferences of the new user over time.

Such implementations are a consideration for further research; a simple sketch of how candidate users to connect to might be ranked is given below.
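
A minimal Python sketch of the first idea, with hypothetical data and an equal weighting of the two counts chosen purely for illustration, might rank existing users by how many items they have rated and by how many other users trust them:

    from collections import Counter

    def suggest_connections(ratings, trust_edges, k=3):
        # Rank existing users as connection candidates for a new user: score
        # each user by how many items they have rated plus how many other users
        # trust them (the two user types suggested by Victor, 2010). The equal
        # weighting of the two counts is an assumption of this sketch.
        # ratings:     dict user -> set of rated item ids
        # trust_edges: iterable of (truster, trustee) pairs
        in_degree = Counter(trustee for _, trustee in trust_edges)
        scores = {u: len(items) + in_degree[u] for u, items in ratings.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    ratings = {"a": {"i1", "i2", "i3"}, "b": {"i1"}, "c": {"i2", "i3"}}
    edges = [("b", "a"), ("c", "a"), ("a", "c")]
    print(suggest_connections(ratings, edges, k=2))  # ['a', 'c']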

b) Explanation and consensus

Two related areas are considered for further research, namely explanations and group consensus.

Jameson and Smyth (2007) define explanations as the process followed by a group recommender system in relaying to the group members how the final group recommendation was derived (Tintarev, 2007). A means of implementing this as part of a group recommendation is advantageous in three ways:

o It would provide an extra means of ensuring group member satisfaction

(Jameson & Smyth, 2007; Tintarev, 2007).

o It would give the group member greater trust in the recommender system

(Jameson & Smyth, 2007; Tintarev, 2007).

o It would provide transparency (Jameson & Smyth, 2007; Masthoff, 2011;

Tintarev, 2007).

Jameson and Smyth (2007) define group consensus as the process followed by the group recommender system to assist group members in coming to a final decision regarding which group recommendation to accept (Salamó et al.,


2012). Such an implementation would reduce the need for debate over which group recommendation to select.

c) Memory

According to Quijano-Sanchez et al. (2013), memory is the ability of a group recommender system to maintain a list of all item recommendations it has previously generated. The purpose is to maintain high satisfaction levels within the group by analysing the satisfaction levels achieved by previous recommendations and compensating, in the current set of recommendations, those group members who were previously left dissatisfied.

Such an implementation would ensure that satisfaction levels could be monitored over time, with system users compensated appropriately if they were disappointed on a previous occasion. This implementation would be an area of further research; one simple compensation scheme is sketched below.
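
For illustration only, the sketch below (Python, hypothetical data; the inverse-satisfaction weighting is an assumption, not a published scheme) turns each member's past satisfaction scores into a weight for the next group recommendation, so that previously dissatisfied members carry more influence:

    def compensation_weights(satisfaction_history):
        # Turn each member's past satisfaction scores into a weight for the
        # next group recommendation: members who were less satisfied before
        # receive a proportionally larger weight. The inverse-satisfaction
        # scheme is an assumption made for illustration only.
        # satisfaction_history: dict member -> list of past scores in [0, 1]
        averages = {m: sum(s) / len(s) for m, s in satisfaction_history.items() if s}
        raw = {m: 1.0 - avg for m, avg in averages.items()}
        total = sum(raw.values()) or 1.0
        return {m: w / total for m, w in raw.items()}

    history = {"alice": [0.9, 0.8], "bob": [0.4, 0.5], "carol": [0.7, 0.7]}
    print(compensation_weights(history))  # bob receives the largest weight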

d) Personality

In the PerTrust model, the evaluation of personality revealed inconclusive results. As further research, additional evaluations of the current implementation would therefore be conducted, and alternative methods of implementing personality would be investigated.


References


Al Falahi, K., Mavridis, N. and Atif, Y. (2012). Social Networks and Recommender Systems: A World

of Current and Future Synergies. In Computational Social Networks (pp. 445–465). Edited by

Abraham, A. & Hassenien, A. doi: 10.1007/978-1-4471-4048-1_18

Amer-Yahia, S., Roy, S. B., Chawlat, A., Das, G., & Yu, C. (2009). Group recommendation:

Semantics and efficiency. In Proceedings of the VLDB Endowment, 2009, 754-765. Lyon: ACM

Andersen, N.P. (2011). Reducing Cold Start problem in the Wikipedia Recommender System. Available from: http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/6487/pdf/imm6487.pdf [Accessed: 2013]

Aris, A., Mustaffa, N. and Zabarudin, N.S.N.M. (2011). Concepts and constructs in online trust.

Proceedings of the 2011 International Conference on Research and Innovation in Information

Systems (ICRIIS), pp. 1–6. doi: 10.1109/ICRIIS.2011.6125729

Avesani, P., Massa, P. and Tiella, R. (2005). A trust-enhanced recommender system application:

Moleskiing. In Proceedings of the 2005 ACM symposium on Applied computing, 2005, 1589–1593.

New York: ACM. doi 10.1145/1066677.1067036

Balaji, K. (2002). Social Network Analysis. Available from: http://www.bebr.ufl.edu/system/files/SNA_Encyclopedia_Entry_0.pdf [Accessed: 18 March 2012]

Baltrunas, L., Makcinskas, T. and Ricci, F. (2010). Group recommendations with rank aggregation

and collaborative filtering. In Proceedings of the fourth ACM conference on Recommender systems,

2010, 119–126. New York: ACM. doi: 10.1145/1864708.1864733

Berkovsky, S. and Freyne, J. (2010). Group-based recipe recommendations: analysis of data

aggregation strategies. In Proceedings of the fourth ACM conference on Recommender systems,

2010, 111–118. New York: ACM. doi: 10.1145/1864708.1864732

Bhuiyan, T., (2011). Trust-based automated recommendation making (Doctoral thesis). Brisbane,

Australia: Queensland University of Technology. Available from: http://eprints.qut.edu.au/49168/

Bootstrap (n.d.). About – Bootstrap. Available from: http://getbootstrap.com/ [Accessed: 15 January

2014]

Boratto, L. and Carta, S. (2011). State-of-the-art in group recommendation and new approaches for

automatic identification of groups. In Information Retrieval and Mining in Distributed Environments,

volume 324 of Studies in Computational Intelligence (pp. 1–20). Edited by Soro, A., Vargiu, E.,

Armano, G., and Paddeu, G. doi: 10.1007/978-3-642-16089-9_1


Bourke, S., McCarthy, K. and Smyth, B. (2011). Using social ties in group recommendation. In: AICS

2011: Proceedings of the 22nd Irish Conference on Artificial Intelligence and Cognitive Science, 2011.

University of Ulster: Intelligent Systems Research Centre. Available from:

http://hdl.handle.net/10197/3448

Cantador, I. and Castells, P. (2012). Group Recommender Systems: New Perspectives in the Social

Web. In Recommender Systems for the Social Web (pp. 139–157). doi: 10.1007/978-3-642-25694-

3_7

Carvalho, L.A.M.C. and Macedo, H.T. (2013). Users’ satisfaction in recommendation systems for

groups: an approach based on noncooperative games. In Proceedings of the 22nd international

conference on World Wide Web companion, 2013, 951–958. Geneva: ACM.

Chen, Y.-L., Cheng, L.-C. and Chuang, C.-N. (2008). A group recommendation system with

consideration of interactions among group members. Expert Systems with Applications 34(3):2082–

2090. doi: 10.1016/j.eswa.2007.02.008

Chung, K.K., Hossain, L. and Davis, J. (2005). Exploring sociocentric and egocentric approaches for

social network analysis. Proceedings of the 2nd international conference on knowledge management

in Asia Pacific, pp. 1-8.

Churchill, E.F. and Halverson, C.A. (2005). Guest Editors’ Introduction: Social Networks and Social

Networking. Internet Computing, IEEE 9(5):14–19. doi: 10.1109/MIC.2005.103

CPP, Inc (2009). History and Validity of the Thomas-Kilmann Conflict Mode Instrument. Available

from: https://www.cpp.com/products/tki/tki_info.aspx [Accessed: 24 September 2013].

De Graeve, K. (2011). HTML5 – Responsive Web Design. MSDN Magazine. Available from:

http://msdn.microsoft.com/en-us/magazine/hh653584.aspx [Accessed: 28 January 2014]

DeJordy, R. and Halgin, D. (2008). Introduction to ego network analysis. Boston College and the

Winston Center for Leadership & Ethics. Available from: http://www.analytictech.com/e-

net/PDWHandout.pdf [Accessed: 18 March 2012].

Dmethvin (3 July 2013). jQuery 1.10.2 and 2.0.3 Released [Web log post]. Available from:

http://blog.jquery.com/2013/07/03/jquery-1-10-2-and-2-0-3-released/ [Accessed: 28 January 2014]

Elio, R., Hoover, J., Nikolaidis, I., Salavatipour, M., Stewart, L. and Wong, K. (n.d.). About Computing Science Research Methodology. Available from: http://webdocs.cs.ualberta.ca/~c603/readings/research-methods.pdf [Accessed: 11 January 2014].


Epinions.com - Company Information: About Epinions. (n.d.). Available from:

http://www.epinions.com/about/?sb=1 [Accessed: 22 April 2013].

Freitas, R. (2009). Scientific Research Methods and Computer Science. MAP-I Seminars Workshop

2009. Available from: http://www.map.edu.pt/i/2008/map-i-research-methods-workshop-

2009/RicardoFreitasFinal.pdf [Accessed: 22 April 2013]

Garcia, I., Pajares, S., Sebastia, L. and Onaindia, E. (2012). Preference elicitation techniques for

group recommender systems. Information Sciences 189:155–175. doi: 10.1016/j.ins.2011.11.037

Gartrell, M., Xing, X., Lv, Q., Beach, A., Han, R., Mishra, S. and Seada, K. (2010). Enhancing group

recommendation by incorporating social relationship interactions. In: Proceedings of the 16th ACM

international conference on Supporting group work, 2010, 97–106. New York: ACM. doi:

10.1145/1880071.1880087

Gens, F. (2012). IDC Predictions 2013: Competing on the 3rd Platform. International Data

Corporation. Available from: http://www.idc.com/research/Predictions13/downloadable/238044.pdf

[Accessed: 7 December 2013]

Golbeck, J.A. (2005). Computing and applying trust in web-based social network (Doctoral thesis).

College Park, United States of America: University of Maryland. Available from:

http://www.cs.umd.edu/~golbeck/pubs/Golbeck%20-%202005%20-

%20Computing%20and%20Applying%20Trust%20in%20Web-based%20Social%20Networks.pdf

Golbeck, J. and Hendler, J. (2006). Filmtrust: Movie recommendations using trust in web-based social

networks. In Proceedings of the IEEE Consumer communications and networking conference, pp.

282-286. doi: 10.1109/CCNC.2006.1593032

Guha, R., Kumar, R., Raghavan, P., and Tomkins, A. (2004). Propagation of trust and distrust. In

Proceedings of the 13th International Conference on World Wide Web, 2004, 403–412. New York:

ACM. doi: 10.1145/988672.988727

Hallberg, J., Norberg, M.B., Kristiansson, J., Synnes, K. and Nugent, C. (2007). Creating dynamic groups using context-awareness. In Proceedings of the 6th international Conference on Mobile and Ubiquitous Multimedia, 2007, 42–49. New York: ACM. doi: 10.1145/1329469.1329474

Hanneman, R.A. and Riddle, M. (2005). Introduction to Social Network Methods, University of California. Available from: faculty.ucr.edu/~hanneman/networks nettext.pdf [Accessed: 19 March 2012]


Herr, S., Rösch, A., Beckmann, C. and Gross, T. (2012). Informing the design of group recommender

systems. In: CHI’12 Extended Abstracts on Human Factors in Computing Systems, 2012, 2507–2512.

New York: ACM. doi: 10.1145/2212776.2223827

Hussain, F.K., Hussain, O.K. and Chang, E. (2007). An overview of the interpretations of trust and

reputation. Proceedings of the IEEE Conference on Emerging Technologies and Factory Automation,

2007, pp. 826–830. doi: 10.1109/EFTA.2007.4416865

International Telecommunication Union (2013). The World in 2013: ICT Facts and Figures. Available

from: http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2013-e.pdf [Accessed: 7

December 2013]

International Telecommunication Union (2011). The World in 2011: ICT Facts and Figures. Available

from: http://www.itu.int/ITU-D/ict/facts/2011/material/ICTFactsFigures2011.pdf [Accessed: 21

December 2013]

Jamali, M. and Ester, M. (2009). Using a trust network to improve top-N recommendation. In

Proceedings of the third ACM conference on Recommender systems, 2009, 181–188. New York:

ACM. doi: 10.1145/1639714.1639745

Jameson, A. and Smyth, B. (2007). Recommendation to groups. In The adaptive web (pp. 596–627).

Edited by Brusilovsky, P., Kobsa, A., and Nejdl, W. doi: 10.1007/978-3-540-72079-9_20

Johnson, H., Lavesson, N., Zhao, H. and Wu, S.F. (2011). On the concept of trust in online social

networks. In Trustworthy Internet (pp. 143–157). Edited by Salgarelli, L., Bianchi, G., and Blefari-

Melazzi, N. doi: 10.1007/978-88-470-1818-1_11

Jøsang, A. (2011). Trust Management in Online Communities. New Forms of Collaborative Innovation and Production on the Internet - interdisciplinary perspectives. Göttingen: University Press

Göttingen. Edited by Wittke, V. and Hanekop, H. Available from:

http://folk.uio.no/josang/papers/Jos2010-SOFI.pdf

Jøsang, A. and Presti, S.L. (2004). Analysing the Relationship between Risk and Trust. In Trust

Management (pp. 135-145). Edited by Jensen, C., Poslad, S., and Dimitrakos, T. doi: 10.1007/978-3-

540-24747-0_11

JSON (n.d.). Introducing JSON. Available from: http://www.json.org/ [Accessed 28 January 2014]


Kim, J.K., Kim, H.K., Oh, H.Y. and Ryu, Y.U. (2010). A group recommendation system for online

communities. International Journal of Information Management 30(3):212–219. doi:

10.1016/j.ijinfomgt.2009.09.006

Kothari, C.R. (2004). Research Methodology: An Introduction. Research methodology: methods and

techniques. New Age International.

Lops, P., de Gemmis, M. and Semeraro, G. (2011). Content-based Recommender Systems: State of

the Art and Trends. In Recommender Systems Handbook (pp. 73-105). Edited by Ricci, F., Rokach,

L., Shapira, B., and Kantor, P.B. doi: 10.1007/978-0-387-85820-3_3

Marin, A. and Wellman, B. (2011). Social network analysis: An introduction. Handbook of Social

Network Analysis (pp. 11–25). Edited by Carrington, P. and Scott, J. London: Sage.

Markides, B.M. (2011). Trust in a decentralised mobile social network (Master’s thesis).

Johannesburg: University of Johannesburg: Available from: http://hdl.handle.net/10210/3806

Massa, P. and Avesani, P. (2007a). Trust metrics on controversial users: Balancing between Tyranny

of the majority. International Journal on Semantic Web and Information Systems (IJSWIS) 3(1): 39–

64. doi: 10.4018/jswis.2007010103

Massa, P. and Avesani, P. (2007b). Trust-aware recommender systems. In Proceedings of the 2007

ACM conference on Recommender systems, 2007, 17–24. New York: ACM. doi:

10.1145/1297231.1297235

Massa, P. and Avesani, P. (2009). Trust metrics in recommender systems. In Computing with social

trust (pp. 259–285). Edited by Golbeck, J. doi: 10.1007/978-1-84800-356-9_10

Massa, P., and Bhattacharjee, B., (2004). Using trust in recommender systems: an experimental

analysis. In Trust Management (pp. 221–235). Edited by Jensen, C., Poslad, S., and Dimitrakos, T.

doi: 10.1007/978-3-540-24747-0_17

Masthoff, J. (2004). Group modeling: Selecting a sequence of television items to suit a group of

viewers. User Modeling and User-Adapted Interaction 14(1):37–85. doi:

10.1023/B:USER.0000010138.79319.fd

Masthoff, J. (2011). Group recommender systems: Combining individual models. In Recommender

Systems Handbook (pp. 677–702). Edited by Ricci, F., Rokach, L., Shapira, B., and Kantor, P.B. doi:

10.1007/978-0-387-85820-3_21


Meeker, M. and Wu, L. (2013). 2013 Internet Trends. Available from: http://www.kpcb.com/file/kpcb-

internet-trends-2013 [Accessed: 12 December 2013].

Microsoft (2013). Microsoft Visual Studio Professional 2012. Available from:

http://msdn.microsoft.com/en-us/data/ef.aspx [Accessed: 28 January 2014]

Microsoft (2014a). Entity Framework. Available from: http://www.microsoft.com/en-

za/download/details.aspx?id=30682 [Accessed: 15 January 2014]

Microsoft (2014b). What Is Windows Communication Foundation. Available from:

http://msdn.microsoft.com/en-us/library/ms731082(v=vs.110).aspx [Accessed: 29 January 2014]

Najjar, N.A. and Wilson, D.C. (2011). Evaluating Group Recommendation Strategies in Memory-

Based Collaborative Filtering. In: Proceedings of the ACM Recommender Systems Conference

Workshop on Human Decision Making in Recommender Systems, 2011, 43-51. Available from:

http://www.comp.hkbu.edu.hk/~lichen/download/DecisionsRecSys11_Proceedings.pdf [Accessed 27

January 2014]

Newton-King, J. (n.d.). JSON.NET. Available from: http://james.newtonking.com/json [Accessed: 28

January 2014]

O’Doherty, D. (2012). Structural trust inference for social recommendation (Master’s thesis). Louvain-

la-Neuve: Universite catholique de Louvain. Available from:

http://euranova.eu/upl_docs/publications/memoiredaireodoherty.pdf

O’Donovan, J. and Smyth, B. (2005). Trust in recommender systems. In Proceedings of the 10th

international conference on Intelligent user interfaces, 2005, 167–174. New York: ACM. doi:

10.1145/1040830.1040870

Pera, M.S. and Ng, Y.-K. (2012). A group recommender for movies based on content similarity and

popularity. Information Processing & Management 49(3):673-687. doi: 10.1016/j.ipm.2012.07.007

Popescu, G. and Pu, P. (2011). Group recommender systems as a voting problem. Semester report.

Lausanne: Ecole Polytechnique Fédérale de Lausanne. Available from:

http://hci.epfl.ch/members/george/Publications/Group%20recommender%20systems%20as%20a%20

voting%20problem.pdf [Accessed: 22 July 2013].

Quan, Q. and Hinze, A. (2008). Trust-based recommendations for mobile tourists in TIP. Working

paper: 13/2008. Hamilton: The University of Waikato. Available from:

http://www.cs.waikato.ac.nz/pubs/wp/2008/uow-cs-wp-2008-13.pdf [Accessed 17 March 2013].


Quijano-Sanchez, L., Recio-Garcia, J.A., Diaz-Agudo, B. and Jimenez-Diaz, G. (2013). Social factors

in group recommender systems. ACM Transactions on Intelligent Systems and Technology 4(1):1–30.

doi: 10.1145/2414425.2414433

Quijano-Sanchez, L., Recio-Garcia, J.A. and Diaz-Agudo, B. (2011). HappyMovie: A Facebook

Application for Recommending Movies to Groups. In: 2011 23rd IEEE International Conference on

Tools with Artificial Intelligence (ICTAI), pp. 239–244. doi: 10.1109/ICTAI.2011.44

Recio-Garcia, J.A., Jimenez-Diaz, G., Sanchez-Ruiz, A.A. and Diaz-Agudo, B. (2009). Personality

aware recommendations to groups. In Proceedings of the third ACM conference on Recommender

systems, 2009, 325–328. New York: ACM. doi: 10.1145/1639714.1639779

Ricci, F., Rokach, L. and Shapira, B. (2011). Introduction to Recommender Systems Handbook. In

Recommender Systems Handbook (pp. 1-35). Edited by Ricci, F., Rokach, L., Shapira, B., and

Kantor, P.B. doi: 10.1007/978-0-387-85820-3_1

Salamó, M., McCarthy, K. and Smyth, B. (2012). Generating recommendations for consensus

negotiation in group personalization services. In Personal and Ubiquitous Computing 16(5):597–610.

doi: 10.1007/s00779-011-0413-1

Schaubhut, N.A. (2007). Technical Brief for the Thomas-Kilmann Conflict Mode Instrument. Available

from: https://www.cpp.com/pdfs/TKI_Technical_Brief.pdf [Accessed: 18 August 2013].

Shani, G. and Gunawardana, A. (2011). Evaluating recommendation systems. In Recommender

Systems Handbook (pp. 257-297). Edited by Ricci, F., Rokach, L., Shapira, B., and Kantor, P.B. doi:

10.1007/978-0-387-85820-3_8

Tintarev, N. (2007). Explanations of recommendations. In: Proceedings of the 2007 ACM conference

on Recommender systems, 2007, 203–206. New York: ACM. doi: 10.1145/1297231.1297275

Victor, P. (2010). Trust Networks for Recommender Systems (Doctoral thesis). Ghent, Belgium:

Ghent University, Faculty of Sciences. Available from: https://biblio.ugent.be/publication/986279

Victor, P., De Cock, M. and Cornelis, C. (2011). Trust and recommendations. In Recommender

Systems Handbook (pp. 645-675). Edited by Ricci, F., Rokach, L., Shapira, B., and Kantor, P.B. doi:

10.1007/978-0-387-85820-3_20


Papers published


Leonard, J. and Coetzee, M. (2009). A model for a socially-aware mobile tourist recommendation

system. Proceedings of the 11th annual conference on WWW applications, ZAWWW09, Port

Elizabeth, 2-4 Sept 2009. Available from: http://hdl.handle.net/10210/5310