Journal of Applied Mathematics
Experimental Game Theory and Its Application in Sociology and Political Science
Guest Editors: Arthur Schram, Vincent Buskens, Klarita Gërxhani, and Jens Großer
Copyright © 2015 Hindawi Publishing Corporation. All rights reserved.
This is a special issue published in "Journal of Applied Mathematics." All articles are open access articles distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Contents

Experimental Game Theory and Its Application in Sociology and Political Science, Arthur Schram, Vincent Buskens, Klarita Gërxhani, and Jens Großer
Volume 2015, Article ID 280789, 2 pages

The Strategic Role of Nonbinding Communication, Luis A. Palacio, Alexandra Cortés-Aguilar, and Manuel Muñoz-Herrera
Volume 2015, Article ID 910614, 11 pages

Intermediaries in Trust: Indirect Reciprocity, Incentives, and Norms, Giangiacomo Bravo, Flaminio Squazzoni, and Károly Takács
Volume 2015, Article ID 234528, 12 pages

Altruism, Noise, and the Paradox of Voter Turnout: An Experimental Study, Sarah A. Tulman
Volume 2015, Article ID 972753, 22 pages

Preference for Efficiency or Confusion? A Note on a Boundedly Rational Equilibrium Approach to Individual Contributions in a Public Good Game, Luca Corazzini and Marcelo Tyszler
Volume 2015, Article ID 961930, 8 pages

Competition, Income Distribution, and the Middle Class: An Experimental Study, Bernhard Kittel, Fabian Paetzel, and Stefan Traub
Volume 2015, Article ID 303912, 15 pages
Editorial

Experimental Game Theory and Its Application in Sociology and Political Science
Arthur Schram,1,2 Vincent Buskens,3 Klarita Gërxhani,4 and Jens Großer5
1CREED, University of Amsterdam, Plantage Muidergracht 14, 1018 TV Amsterdam, Netherlands
2Robert Schuman Centre for Advanced Studies, European University Institute, Fiesole, Italy
3Department of Sociology/ICS, Utrecht University, Padualaan 14, 3584 CH Utrecht, Netherlands
4Department of Political and Social Sciences, European University Institute, Via Roccettini 9, San Domenico di Fiesole, 50014 Fiesole, Italy
5Department of Political Science, Florida State University, Bellamy Hall 531, Tallahassee, FL 32306-2230, USA
Correspondence should be addressed to Arthur Schram; [email protected]
Received 13 August 2015; Accepted 16 August 2015
Copyright © 2015 Arthur Schram et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Game theory, laboratory experiments, and field experiments are common and powerful tools in many social sciences [1]. However, applications in Sociology and Political Science remain scarce and scattered [2]. Yet the combination of game theory with controlled experiments provides a powerful tool to better understand social and political processes, for example, [3–5]. The mathematical structure offered by game theory and the control offered by an experimental environment allow the researcher to isolate sociological and/or political phenomena to study their development and their effects. The relationship between game theory and experiments is twofold. On the one hand, game theory provides solid ground on which to design an experiment and a formal benchmark that serves as a measuring rod for a structured analysis of observed behavior. On the other hand, experiments can be used to test equilibrium predictions, to pinpoint shortcomings of the theory, and to point to directions in which the theory can be adapted.
The aim of this special issue is to encourage original research that studies sociological or political phenomena using laboratory experiments based on game theoretical benchmarks, and that uses mathematical modeling of game theoretical arguments to inspire experiments in the fields of Sociology and Political Science, and vice versa.
In a research article of the special issue, G. Bravo et al. experimentally study whether intermediaries can positively influence cooperation between a trustor and a trustee in an investment or trust game. Another article, by L. A. Palacio et al., develops a game theoretical foundation for experimental investigations of the strategic role of nonbinding communication. In another article, L. Corazzini and M. Tyszler employ quantal response equilibrium (QRE) to find out the extent of confusion and efficiency motives in laboratory participants' decisions to contribute to a public good. The article by S. A. Tulman utilizes QRE (i.e., noisy decision-making) and altruism-motivated players to investigate the "paradox of voter turnout" in a participation game experiment. Finally, in another article, B. Kittel et al. present a laboratory study in which they examine the role of the middle class in income distribution within the framework of a contest game.
We hope that the selection of articles in this special issue will help to inspire scholars in Sociology and Political Science to add mathematics to their toolbox and to adopt game theory and experimentation in their research methodology.
Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2015, Article ID 280789, 2 pages
http://dx.doi.org/10.1155/2015/280789
Acknowledgment
We sincerely thank all the authors and reviewers who contributed greatly to the success of the special issue.
Arthur Schram
Vincent Buskens
Klarita Gërxhani
Jens Großer
References
[1] A. Falk and J. J. Heckman, "Lab experiments are a major source of knowledge in the social sciences," Science, vol. 326, no. 5952, pp. 535–538, 2009.
[2] M. Jackson and D. R. Cox, "The principles of experimental design and their application in sociology," Annual Review of Sociology, vol. 39, pp. 27–49, 2013.
[3] R. Corten and V. Buskens, "Co-evolution of conventions and networks: an experimental study," Social Networks, vol. 32, no. 1, pp. 4–15, 2010.
[4] K. Gërxhani, J. Brandts, and A. Schram, "The emergence of employer information networks in an experimental labor market," Social Networks, vol. 35, no. 4, pp. 541–560, 2013.
[5] J. Großer and A. Schram, "Public opinion polls, voter turnout, and welfare: an experimental study," American Journal of Political Science, vol. 54, no. 3, pp. 700–717, 2010.
Research Article

The Strategic Role of Nonbinding Communication
Luis A. Palacio,1 Alexandra Cortés-Aguilar,1 and Manuel Muñoz-Herrera2
1Escuela de Economía y Administración, Universidad Industrial de Santander, Calle 9 con 27, Bucaramanga, Colombia
2ICS, Faculty of Behavioural and Social Sciences, University of Groningen, Grote Rozenstraat 31, 9712 TG Groningen, Netherlands
Correspondence should be addressed to Manuel Muñoz-Herrera; [email protected]
Received 1 December 2014; Revised 27 March 2015; Accepted 28 March 2015
Academic Editor: Jens Großer
Copyright © 2015 Luis A. Palacio et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper studies the conditions that improve bargaining power using threats and promises. We develop a model of strategic communication, based on the conflict game with perfect information, in which a noisy commitment message is sent by a better-informed sender to a receiver who takes an action that determines the welfare of both. Our model captures different levels of aligned preferences, for which classical games such as stag hunt, hawk-dove, and prisoner's dilemma are particular cases. We characterise the Bayesian perfect equilibrium with nonbinding messages under truth-telling beliefs and sender's bargaining power assumptions. Through our equilibrium selection we show that the less conflict the game has, the more informative the equilibrium signal is and the less credibility is necessary to implement it.
1. Introduction
Bargaining power refers to a player's relative ability to exert influence upon others in order to improve her own wellbeing. It is also related to idiosyncratic characteristics such as patience: a player can turn the final outcome in her favour if she has better outside options or if she is more patient [1]. In addition, Schelling [2] described bargaining power as the chance to cheat and bluff, the ability to set the best price for oneself. For instance, when a union says to the management of a firm, "we will go on strike if you do not meet our demands," or when a nation announces that any military provocation will be met with nuclear weapons, it is clear that communication has been used with a strategic purpose: to gain bargaining power.
In bargaining theory, strategic moves are actions taken prior to playing a subsequent game, with the aim of changing the available strategies, the information structure, or the payoff functions. The aim is to change the opponent's beliefs, making it credible that one's position is unchangeable. Following Selten [3], the formal notion of credibility is subgame perfectness. (Schelling developed the notion of credibility as the outcome that survives iterated elimination of weakly dominated strategies. We know that, in the context of generic extensive-form games with complete and perfect information, this procedure does indeed work; see [4].) Nevertheless, we argue that if a message is subgame perfect, then it is neither a threat nor a promise. Consider the following example: a union says to management, "If you increase our salaries, we will be grateful." In such a case, credibility is not in doubt, but we could hardly call this a promise or a threat. Schelling [2] denotes fully credible messages as warnings, and we follow this differentiation to define threats and promises.
Commitment theory was proposed by Schelling [2] (for a general review of Schelling's contribution to economic theory, see Dixit [4] and Myerson [5]), who introduced a tactical approach to communication and credibility inside game theory. Hirshleifer [6, 7] and Klein and O'Flaherty [8] worked on the analysis and characterisation of strategic moves in the standard game theory framework. In the same way, Crawford and Sobel [9] formally showed that an informed agent can reveal his information in order to induce the uninformed agent to make a specific choice.
There are three principal reasons for modelling preplay communication: information disclosure (signalling), coordination goals (cheap talk), and strategic influence (in Schelling's sense). Following Farrell [10] and Farrell and Rabin [11], the main problem in modelling nonbinding messages is the "babbling equilibrium," where statements mean nothing. However, they showed that cheap talk can convey
Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2015, Article ID 910614, 11 pages
http://dx.doi.org/10.1155/2015/910614
information in a general signalling environment, displaying a particular equilibrium in which statements are meaningful. In this line, Rabin [12] developed credible message profiles, looking for a meaningful communication equilibrium in cheap-talk games.
Our paper contributes to the strategic communication literature in three ways. First, we propose a particular characterisation of warnings, threats, and promises in the conflict game with perfect information, as mutually exclusive categories. To this end, we first define a sequential protocol in the 2 × 2 conflict game originally proposed by Baliga and Sjöström [13]. This benchmark game is useful because it is a stylised model that captures different levels of aligned preferences, for which classical games such as stag hunt, hawk-dove, and prisoner's dilemma are particular cases.
Second, we model strategic moves with nonbinding messages, showing that choosing a particular message and its credibility are related to the level of conflict. In this way, the conflict game with nonbinding messages captures a bargaining situation where people talk about their intentions simply by using cheap talk. More precisely, we analyse a game where a second player (the sender) can communicate her action plan to the first mover (the receiver). (To avoid confusion and gender bias, the sender will be denoted as "she" and the receiver as "he.") In fact, the sender must decide after she observes the receiver's choice, but the commitment message is a preplay move.
Third, we introduce a simple parameterisation that can be used as a baseline for experimental research. By means of this model it is possible to study how, in a bargaining environment, information and communication influence the power one of the parties may have. In other words, this addresses the following: the logic supporting Nash equilibrium is that each player thinks about the best he can do, given what the other does. Players, in the first instance, cannot influence others' behaviour. On the contrary, Schelling [2] argues that players may consider what they can do, as a preplay move, to influence (i.e., manipulate) the behaviour of their counterpart and turn the payoffs in their favour. Therefore, our behavioural model provides a framework in which it is possible to (experimentally) study the strategic use of communication to influence others, under different levels of conflict.
We analyse conceptually the importance of three essential elements of commitment theory: (i) the choice of a response rule, (ii) the announcement about future actions, and (iii) the credibility of messages. We answer the following questions: what is the motivation behind threats and promises? And can binding messages improve the sender's bargaining power? In this paper, threats and promises are defined as a second mover's self-serving announcement, committing in advance to how she will play in all conceivable eventualities, as long as it specifies at least one action that is not her best response (see [4, 7]). With this definition, we argue that binding messages improve the sender's bargaining power in the perfect information conflict game, even when it is clear that by assuming binding messages we avoid the problem of credibility.
The next step is to show that credibility is related to the probability that the sender fulfills the action specified in
Table 1: The 2 × 2 conflict game.

                    Player 2
                 d            h
Player 1   d   1, 1         x, y
           h   y, x         0.25, 0.25
the nonbinding message. For this, we highlight that players share a common language, and the literal meaning must be used to evaluate whether a message is credible or not. Hence, the receiver has to believe in the literal meaning of announcements if and only if it is highly probable to face the truth. Technically, we capture this intuition in two axioms: truth-telling beliefs and the sender's bargaining power. We ask: are nonbinding messages a mechanism to improve the sender's bargaining power? And how much credibility is necessary for a strategic move to be successful? In equilibrium, we can prove that nonbinding messages convey private information when the conflict is low. On the other hand, if the conflict is high, there are too strong incentives to lie, and cheap talk becomes meaningless. However, even in the worst situation, nonbinding messages can transmit some meaning in equilibrium if the players focus on the possibility of fulfilling threats and promises.
The paper is organised as follows. In Section 2, the conflict game is described. In Section 3 the conditioned messages are analysed, and the definitions of threats and promises are presented. Section 4 presents the model with nonbinding messages, showing the importance of response rules, messages, and credibility in improving the sender's bargaining power. Finally, Section 5 concludes.
2. The 2 × 2 Conflict Game
The 2 × 2 conflict game is a noncooperative symmetric environment. There are two decision makers in the set of players, N = {1, 2}. (At this level of simplicity, players' identity is not relevant, but since the purpose is to model Schelling's strategic moves, in the following sections player 2 is going to be a sender of commitment messages.) Players must choose an action s_i ∈ S_i = {d, h}, where d represents being dove (peaceful negotiator) and h being hawk (aggressive negotiator). The utility function u_i(s_1, s_2) for player i is defined by the payoff matrix in Table 1, where rows correspond to player 1 and columns correspond to player 2.

Note that both mutual cooperation and mutual defection lead to equal payoffs, and the combination of strategies (d, d) is always Pareto optimal. In the same way, the combination of strategies (h, h) is not optimal and can only be understood as the disagreement point. Assuming that y ≥ x, payoffs are unequal when a player behaves aggressively and the other cooperates, given that the player who plays aggressively has an advantage over his/her opponent. In addition, we will assume that x ≠ 0.25 and y ≠ 1 to avoid the multiplicity of irrelevant equilibria. Therefore, it will always be preferred that the opponent chooses d. To have a parameterisation that serves as a baseline for experimental design, it is desirable to fix x ∈ [0, 0.5] and y ∈ [0.5, 1.5]; within these intervals,
Table 2: Nash equilibria in the conflict game.

      (s*_1, s*_2)    (u*_1, u*_2)     Pareto optimal
C1    (d, d)          (1, 1)           Yes
C2    (d, d)          (1, 1)           Yes
      (h, h)          (0.25, 0.25)     No
C3    (d, h)          (x, y)           Yes
      (h, d)          (y, x)           Yes
C4    (h, h)          (0.25, 0.25)     No
because if they are modelled as random variables with uniform distribution, we would have four games with the same probability of occurring.
Under these assumptions, the 2 × 2 conflict game has four particular cases that, according to Hirshleifer [6], can be ordered by their level of conflict or affinity in preferences:
(1) Level of conflict 1 (C1): if y < 1 and x > 0.25, there is no conflict in this game because cooperating is a dominant strategy.

(2) Level of conflict 2 (C2): if y < 1 and x < 0.25, this is the so-called stag hunt game, which formalises the idea that lack of trust may lead to disagreements.

(3) Level of conflict 3 (C3): if y > 1 and x > 0.25, depending on the story used to contextualise it, this game is known as either the hawk-dove or the chicken game. Both anticipation and dissuasion are modelled here, where fear of consequences makes one of the parties give up.

(4) Level of conflict 4 (C4): if y > 1 and x < 0.25, this is the classic prisoner's dilemma, where individual incentives lead to an inefficient allocation of resources.
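The four cases above depend only on where (x, y) falls relative to the thresholds x = 0.25 and y = 1. As an illustrative sketch (our own code, not part of the original article), the classification can be written down directly, and a quick simulation confirms that the proposed uniform parameterisation makes the four games equally likely:

```python
import random

def classify(x, y):
    """Classify a 2x2 conflict game instance by its level of conflict (C1-C4)."""
    if y < 1:
        return "C1" if x > 0.25 else "C2"   # dominant-strategy game / stag hunt
    return "C3" if x > 0.25 else "C4"       # hawk-dove (chicken) / prisoner's dilemma

# With x ~ U[0, 0.5] and y ~ U[0.5, 1.5], each threshold splits its interval
# in half, so each case occurs with probability 1/4 (the boundaries x = 0.25
# and y = 1 have measure zero and are excluded by assumption).
random.seed(1)
draws = [classify(random.uniform(0, 0.5), random.uniform(0.5, 1.5))
         for _ in range(100_000)]
freqs = {c: draws.count(c) / len(draws) for c in ("C1", "C2", "C3", "C4")}
print(freqs)  # each frequency close to 0.25
```
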
Based on the system of incentives, it is possible to explain why these games are ordered according to their level of conflict, from lowest to highest (see Table 2). In the C1 game the players' preferences are well aligned and there is no coordination problem because the Nash equilibrium is unique in dominant strategies. Therefore, a rational player will always choose to cooperate (d), which leads to the outcome that is Pareto optimal. In the C2 game mutual cooperation (d, d) is a Nash equilibrium, but it is not unique in pure strategies. The problem lies in coordinating on either the Pareto dominant equilibrium (d, d) or the risk dominant equilibrium (h, h). In other words, negotiating as a dove implies a higher risk and will only take place if a player believes that the adversary will do the same. This is the reason why lack of trust between the parties may lead to the disagreement point.

The C3 game portrays an environment with higher levels of conflict, since there are two equilibria with unequal payoffs. In other words, players face two problems: a distributive one and a coordination one. If only one of the players chooses to behave aggressively, this turns the result in his/her favour, but it is impossible to predict who will be aggressive and who will cooperate. In this 2 × 2 environment there is no clear criterion to predict the final outcome and therefore the behaviour.
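The equilibria listed in Table 2 can be reproduced mechanically: a profile is a pure-strategy Nash equilibrium when neither player gains from a unilateral deviation. A minimal sketch (hypothetical code, using the payoffs of Table 1):

```python
def payoffs(x, y):
    # Payoff matrix of Table 1: (s1, s2) -> (u1, u2)
    return {("d", "d"): (1.0, 1.0), ("d", "h"): (x, y),
            ("h", "d"): (y, x), ("h", "h"): (0.25, 0.25)}

def pure_nash(x, y):
    """Pure-strategy Nash equilibria of the 2x2 conflict game."""
    u = payoffs(x, y)
    flip = {"d": "h", "h": "d"}
    return [(s1, s2) for s1 in "dh" for s2 in "dh"
            if u[(s1, s2)][0] >= u[(flip[s1], s2)][0]     # player 1 cannot gain
            and u[(s1, s2)][1] >= u[(s1, flip[s2])][1]]   # player 2 cannot gain

# The four illustrative cases of Table 3:
print(pure_nash(0.5, 0.5))  # C1: [('d', 'd')]
print(pure_nash(0.0, 0.5))  # C2: [('d', 'd'), ('h', 'h')]
print(pure_nash(0.5, 1.5))  # C3: [('d', 'h'), ('h', 'd')]
print(pure_nash(0.0, 1.5))  # C4: [('h', 'h')]
```

The output matches Table 2: a unique cooperative equilibrium in C1, the coordination problem of C2, the two asymmetric equilibria of C3, and the inefficient disagreement point of C4.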
Table 3: The conflict game: illustrative cases.

(a) C1: y − x = 0
                  2
              d           h
1    d    1, 1        0.5, 0.5
     h    0.5, 0.5    0.25, 0.25

(b) C2: y − x = 0.5
                  2
              d           h
1    d    1, 1        0, 0.5
     h    0.5, 0      0.25, 0.25

(c) C3: y − x = 1
                  2
              d           h
1    d    1, 1        0.5, 1.5
     h    1.5, 0.5    0.25, 0.25

(d) C4: y − x = 1.5
                  2
              d           h
1    d    1, 1        0, 1.5
     h    1.5, 0      0.25, 0.25
The last game is the classical social dilemma about the limitations of rational behaviour in allocating resources efficiently. The C4 game is classified as the most conflictive one because the players are faced with a context where rational choice clearly predicts that the disagreement point will be reached. Additionally, we will argue throughout this paper that changing incentives to achieve mutual cooperation is not a simple task in this bargaining environment.
Until this moment we have used equilibrium uniqueness and optimality to argue that the games are ordered by their level of conflict. However, it is also possible to understand the difference in payoffs (y − x) as a proxy for the level of conflict. In other words, when the difference in payoffs between the player who takes the advantage by playing aggressively and the player who is exploited for cooperating is large, we can state that the incentives lead players to prefer to behave aggressively (see the illustrative cases in Table 3).
3. Response Rules and Commitment Messages
We now consider the conflict game with a sequential decision-making protocol. The idea is to capture a richer set of strategies that allows us to model threats and promises as self-serving messages. In addition, the set of conditioned strategies includes the possibility of implementing ordinary commitment, because a simple unconditional message is always available for the sender.
Schelling [2] distinguishes between two different types of strategic moves: ordinary commitments and threats. An ordinary commitment is the possibility of playing first, announcing that a decision has already been made and cannot be changed, which forces the opponent to make the final choice. On the other hand, threats are moves of the second player, who convincingly pledges to respond to the opponent's choice in a specified contingent way (see [7]).

Figure 1: The conflict game with perfect information.
3.1. The Conflict Game with Perfect Information. Suppose that player 1 moves first and player 2 observes the action made by player 1 and then makes his choice. In theoretical terms, this is a switch from the 2×2 strategic game to the extensive game with perfect information in Figure 1. A strategy for player 2 is a function that assigns an action s_2 ∈ {d, h} to each possible action of player 1, s_1 ∈ {d, h}. Thus, the set of strategies for player 2 is S_2 = {dd, dh, hd, hh}, where s_2 = s_2d s_2h represents a possible reaction rule, such that the first component s_2d denotes the action that will be carried out if player 1 plays d, and the second component s_2h is the action in case 1 plays h. The set of strategies for player 1 is S_1 = {d, h}.

In this sequential game with perfect information a strategy profile is (s_1, s_2). Therefore, the utility function u_i(s_1, s_2) is defined by u_i(d, s_2d s_2h) = u_i(d, s_2d) and u_i(h, s_2d s_2h) = u_i(h, s_2h), based on the 2×2 payoff matrix presented before.
As the set of strategy profiles becomes wider, the predictions based on the Nash equilibrium are less relevant. Thus, in the conflict game with perfect information the applicable equilibrium concept is the subgame perfect Nash equilibrium (SPNE).
Definition 1 (SPNE). The strategy profile (s*_1, s*_2) is a SPNE in the conflict game with perfect information if and only if u_2(s_1, s*_2) ≥ u_2(s_1, s_2) for every s_2 ∈ S_2 and for every s_1 ∈ S_1; and u_1(s*_1, s*_2) ≥ u_1(s_1, s*_2) for every s_1 ∈ S_1.
The strategy s*_2 = s*_2d s*_2h represents the best response for player 2 in every subgame. In the same way, the strategy s*_1 is the best response for player 1 when player 2 chooses s*_2. By definition and using the payoff assumptions, it is clear that the strategy s*_2 = s*_2d s*_2h is the unique weakly dominant strategy for player 2 and, in consequence, the reason for player 1 to forecast his counterpart's behaviour based on the common knowledge of rationality. The forecast possibility leads to a first-mover advantage, as we can see in Proposition 2.

Table 4: SPNE in the conflict game with perfect information.

|     | (s*_1, s*_2) | (u*_1, u*_2) | Pareto optimal |
| C1  | (d, dd)      | (1, 1)       | Yes |
| C2  | (d, dh)      | (1, 1)       | Yes |
| C3  | (h, hd)      | (y, x)       | Yes |
| C4  | (h, hh)      | (0.25, 0.25) | No  |
Proposition 2 (first-mover advantage). If (s*_1, s*_2) is a SPNE in the conflict game with perfect information, then u_1(s*_1, s*_2) ≠ x and u_2(s*_1, s*_2) ≠ y.
The intuition behind Proposition 2 is that there is an advantage related to the opportunity of playing first, which is the idea behind the ordinary commitment. In consequence, the equilibrium that is reached favours player 1, because he always obtains at least as much as his opponent. This is true except for the C4 game, where the level of conflict is so high that regardless of what player 1 chooses he cannot improve his position. The SPNE for each game is presented in Table 4.
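The SPNE of Table 4 can be recovered by backward induction. A minimal Python sketch (the helper names are ours; ties for player 2, which do not arise in these four cases, are broken towards d):

```python
# Backward induction for the sequential conflict game of Figure 1,
# reproducing the SPNE reported in Table 4.

def payoff(s1, s2, x, y):
    return {("d", "d"): (1.0, 1.0), ("d", "h"): (x, y),
            ("h", "d"): (y, x), ("h", "h"): (0.25, 0.25)}[(s1, s2)]

def spne(x, y):
    """Return (s1*, s2*), where s2* is the reaction rule (after d, after h)."""
    # Player 2 best-responds in each subgame (ties resolved towards d).
    react = {a1: max("dh", key=lambda a2: (payoff(a1, a2, x, y)[1], a2 == "d"))
             for a1 in "dh"}
    # Player 1 anticipates the reaction rule and best-responds to it.
    s1 = max("dh", key=lambda a1: (payoff(a1, react[a1], x, y)[0], a1 == "d"))
    return s1, react["d"] + react["h"]

for case, (x, y) in {"C1": (0.5, 0.5), "C2": (0.0, 0.5),
                     "C3": (0.5, 1.5), "C4": (0.0, 1.5)}.items():
    s1, s2 = spne(x, y)
    print(case, (s1, s2), payoff(s1, s2[0] if s1 == "d" else s2[1], x, y))
```

The printed profiles and payoffs match Table 4 row by row, including the (0.25, 0.25) disagreement outcome of C4.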
We can see that the possibility to play a response rule is not enough to increase player 2's bargaining power. For this reason, we now consider the case where player 2 can announce the reaction rule she is going to play before player 1 makes his decision.
3.2. Threats and Promises as Binding Messages. Following Schelling [14], the sender's bargaining power increases if she is able to send a message about the action she is going to play, since with premeditation other alternatives have been rejected. For the receiver it must be clear that this is the unique relevant option. This strategic move can be implemented if it is possible to send binding messages about the second mover's future actions. With this kind of communication we are going to show that there always exists a message that allows player 2 to reach an outcome at least as good as the outcome in the SPNE. By notation, m_2 ∈ S_2 is a conditioned message, where m_2 = m_2d m_2h. From now on, player 2 represents the sender and player 1 the receiver.
Definition 3 (commitment message). m*_2 ∈ S_2 is a commitment message if and only if u_2(s*_1m, m*_2) ≥ u_2(s*_1, s*_2), where u_1(s*_1m, m*_2) ≥ u_1(s_1, m*_2) for every s_1 ∈ S_1. It means that s*_1m is player 1's best response given m*_2.
The idea behind commitment messages is that player 2 wants to achieve an outcome at least as good as the one without communication, given the receiver's best response. This condition only looks for compatibility of incentives, since the receiver also makes his decisions in a rational way. Following closely the formulations discussed in Schelling [14], Klein and O'Flaherty [8], and Hirshleifer [7], we classify the commitment messages in three mutually exclusive categories: warnings, threats, and promises.
Table 5: Commitment messages.

|     | Warning (u*_1, u*_2)   | Threat (u_1, u_2) | Promise (u_1, u_2) |
| C1  | (d, dd) (1, 1)         | (d, dh) (1, 1)    |                    |
| C2  | (d, dh) (1, 1)         | (d, dd) (1, 1)    |                    |
| C3  | (h, hd) (y, x)         | (d, hh) (x, y)    | (d, dh) (1, 1)     |
| C4  | (h, hh) (0.25, 0.25)   |                   | (d, dh) (1, 1)     |
Definition 4 (warnings, threats, and promises). (1) The commitment message m*_2 ∈ S_2 is a warning if and only if m*_2 = s*_2.

(2) The commitment message m*_2 ∈ S_2 is a threat if and only if u_2(d, m*_2) = u_2(d, s*_2) and u_2(h, m*_2) < u_2(h, s*_2).

(3) The commitment message m*_2 ∈ S_2 is a promise if and only if u_2(d, m*_2) < u_2(d, s*_2).
The purpose of a warning commitment is to confirm that the sender will play her best response after every possible action of the receiver. Schelling does not consider warnings as strategic moves, but we prefer to treat them this way because the important characteristic of warnings is their full credibility condition. If agents want to avoid miscoordination related to the common knowledge of rationality, they could communicate it and believe it as well. On the contrary, credibility is an inherent problem in threats and promises. The second and third points in Definition 4 show that at least one action in the message is not the best response after observing the receiver's choice. In threats, the sender does not have any incentive to implement the punishment when the receiver plays hawk. In promises, the sender does not have any incentive to fulfill the agreement when the receiver plays dove.
The strategic goal in the conflict game is to deter the opponent from choosing hawk, because by assumption u_i(s_i, d) > u_i(s_i, h). This is exactly the purpose of these binding messages, as shown in Proposition 5.
Proposition 5 (second-mover advantage). If m̂_2 is a threat or a promise in the conflict game with perfect information, then s*_1m̂ = d.
The intuition behind Proposition 5 is that, in Schelling's terms, if a player has the possibility to announce her intentions, she will use threats or promises to gain an advantage over the first mover. That is, player 2 uses these messages because, if believed by player 1, they can make him cooperate.
Proposition 6 specifies in which cases player 2 influences player 1's choices by means of threats and promises. That is, in which cases, when player 1 has no incentives to cooperate, messages can prompt a change in his behaviour.
Proposition 6 (message effectivity). There exists a commitment message m*_2 such that u_2(s*_1m, m*_2) > u_2(s*_1, s*_2) if and only if y > 1.
Therefore, threats and promises provide a material advantage over the adversary only in cases with high conflict (e.g., C3 and C4). Thus, the condition y > 1 is not satisfied in the C1 and C2 cases, where the level of conflict is low.
The implication is that mutual cooperation is achieved in equilibrium and this outcome is the highest for both players. The use of messages under these incentives only needs to confirm the sender's rational choice. If player 2 plays m* = s*_2, the receiver can anticipate this rational behaviour, which is completely credible. This is exactly the essence of the subgame perfect Nash equilibrium proposed by Selten [3].
An essential element of commitments is to determine under what conditions the receiver must take into account the content of a message, given that the communication purpose is to change the rival's expectations. The characteristic of a warning is to choose the weakly dominant strategy, but for threats or promises at least one action is not a best response. Proposition 6 shows that in the C3 and C4 cases the sender's outcome is strictly higher if she can announce that she does not follow the subgame perfect strategy. We summarise these findings in Table 5.
Up to this point we have considered the first two elements of commitment theory. We started by illustrating that the messages sent announce the sender's intention to execute a plan of action (i.e., the choice of a response rule). Subsequently, we described for which cases messages are effective (i.e., self-serving announcements). Now we inquire about the credibility of these strategic moves, because if the sender announces that she is going to play against the game's incentives, the message does not change the receiver's beliefs, and it is not enough to increase her bargaining power. It is necessary that the specified action is actually the one that will be played, or at least that the sender believes it. The objective in the next section is to stress the credibility condition. It is clear that binding messages imply a degree of commitment at a 100% level, but this condition is very restrictive, and it is not a useful way to analyse a real bargaining situation. We are going to prove that for a successful strategic move the degree of commitment must be high enough, although it is not necessary to tell the truth with a probability equal to 1.
4. The Conflict Game with Nonbinding Messages
The credibility problem is related to how likely it is that the message sent coincides with the actions chosen. The sender announces her way of playing, but it could be a bluff. In other words, the receiver can believe in the message if it is highly probable that the sender is telling the truth. In order to model this problem the game now proceeds as follows. In the first stage Nature assigns a type to player 2 following a probability distribution. The sender's type is her action plan: her way of playing in case of observing each of the possible
receiver's actions. In the second stage player 2 observes her type and sends a signal to player 1. The signal is the disclosure of her plan, and it can be seen as a noisy message, because it is nonbinding. In the last stage, player 1, after receiving the signal, chooses an action. This choice, together with the actual type of player 2, determines the players' payoffs.
Following the intuition behind the credible message profile in Rabin [12], a commitment announcement can be considered credible if it fulfills the following conditions. (i) When the receiver believes the literal meaning of the statements, the types sending the messages obtain their best possible payoff; hence those types will send these messages. (ii) The statements are truthful enough. The "enough" comes from the fact that some types might lie to player 1 by pooling with a commitment message, and the receiver knows it. However, the probability of facing a lie is small enough that it does not affect player 1's optimal response.
The objective of this section is to formalise these ideas using our benchmark conflict game. The strategic credibility problem is intrinsically dynamic, and it makes sense if we consider threats and promises as nonbinding messages. Bearing these considerations in mind, from now on the messages are used to announce the sender's intentions, but they are cheap talk. Clearly, negotiators talk, and in most cases it is free, but we show that this fact does not imply that cheap talk is meaningless or irrelevant.
4.1. The Signalling Conflict Game. Consider a setup in which player 2 moves first; player 1 observes a message from player 2 but not her type. Play proceeds as follows. In the first stage Nature assigns a type θ_2 to player 2; a type is a function that assigns an action s_2 ∈ {d, h} to each action s_1 ∈ {d, h}. Player 2's type set is Θ_2 = S_2 = {dd, dh, hd, hh}, where θ_2 = s_2d s_2h. Nature chooses the sender's type following a probability distribution, where p(θ_2) > 0 is the probability of choosing the type θ_2, and Σ_{θ_2 ∈ Θ_2} p(θ_2) = 1. In the second stage, player 2 observes her own type and chooses a message m_2 ∈ Θ_2. At the final stage, player 1 observes this message and chooses an action from his set of strategies S_1 = {d, h}. The most important characteristic of this conflict game with nonbinding messages is that communication cannot change the final outcome. Though strategies are more complex in this case, the 2×2 payoff matrix in the conflict game always determines the final payoffs.
In order to characterise the utility function we need some notation. A message profile m_2 = (m_dd, m_dh, m_hd, m_hh) is a function that assigns a message m_2 ∈ Θ_2 to each type θ_2 ∈ Θ_2. The first component m_dd ∈ m_2 is the message chosen in case of observing the type θ_2 = dd; the second component m_dh ∈ m_2 is the message chosen in case of observing the type θ_2 = dh, and so on. By notation, m_θ2 = s_2d s_2h is a specific message sent by a player with type θ_2, and m_2 = (m_θ2, m_−θ2) is a generic message profile with emphasis on the message sent by the player with type θ_2.
There is imperfect information because the receiver can observe the message, but the sender's type is not observable. Thus, the receiver has four different information sets, depending on the message he faces. A receiver's strategy s_1m = (s_1dd, s_1dh, s_1hd, s_1hh) is a function that assigns an action s_1 ∈ S_1 to each message m_2 ∈ Θ_2, where s_1dd is the action chosen after observing the message m_2 = dd, and so on. In addition, s_1m = (s_1m2, s_1(−m2)) is a receiver's generic strategy with emphasis on the message he faced. In this case, the subindex m highlights that the receiver's strategies are a profile of single actions. Therefore, in the conflict game with nonbinding messages the utility function is u_i(s_1mθ2, s_1(−mθ2), m_θ2, m_−θ2) = u_i(s_1, s_2) for s_1mθ2 = s_1 and θ_2 = s_2.
In this specification, messages are payoff irrelevant and what matters is the sender's type. For this reason, it is necessary to define the receiver's beliefs about who is the sender when he observes a specific message. The receiver's belief α_{θ2|m2} ≥ 0 is the conditional probability that the message comes from a sender of type θ_2, given that he observed the message m_2. Naturally, Σ_{θ_2 ∈ Θ_2} α_{θ2|m2} = 1.

All the elements of the conflict game with nonbinding messages are summarised in Figure 2. The most salient characteristics are the four information sets in which the receiver must choose and the fact that messages are independent of payoffs. For instance, the upper left path (blue) describes each possible decision for the sender of type dd. In the first place, Nature chooses the sender's type; in this case θ_2 = dd. In the next node, dd must choose a message from the 4 possible reaction rules. We say that dd is telling the truth if she chooses m_dd = dd, leading to the information set at the top. We intentionally plot the game in a star shape in order to highlight the receiver's information sets. At the end, the receiver chooses between d and h, and cheap talk implies that there are 4 feasible payoffs.
The signalling conflict game has a great multiplicity of Nash equilibria. For this particular setting, a characterisation of this set is not our aim. Our interest lies in the characterisation of the communication equilibrium. For this reason the appropriate concept in this case is the perfect Bayesian equilibrium.
Definition 7 (PBE). A perfect Bayesian equilibrium is a sender's message profile m*_2 = (m*_dd, m*_dh, m*_hd, m*_hh), a receiver's strategy profile s*_1m = (s*_1dd, s*_1dh, s*_1hd, s*_1hh), and a beliefs profile α*_{θ2|m2} after observing each message m_2, such that the following conditions are satisfied:

(1) m*_2 is the argmax over m_θ2 ∈ Θ_2 of u_θ2(s*_1m, m_θ2, m_−θ2);

(2) s*_1m is the argmax over s_1m2 ∈ S_1 of Σ_{θ_2 ∈ Θ_2} α_{θ2|m2} · u_1(s_1mθ2, s_1(−mθ2), m*_2);

(3) α*_{θ2|m2} must be calculated following Bayes' rule based on the message profile m*_2. For all θ_2 who play the message m*_2, the beliefs must be calculated as α_{θ2|m*2} = p(θ_2) / Σ_{θ′_2 : m*_θ′2 = m*_2} p(θ′_2).
The conditions in this definition are incentive compatibility for each player and Bayesian updating. The first condition requires the message m*_θ2 to be optimal for type θ_2. The second requires the strategy s*_1m to be optimal given the beliefs profile α*_{θ2|m2}. For the last condition, Bayesian updating, the receiver's beliefs must be derived via Bayes' rule for each observed message, given the equilibrium message profile m*_2.

Figure 2: Conflict game with nonbinding messages.
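Condition (3) of Definition 7 is ordinary Bayesian updating. A short Python sketch (our own illustration; the data representation is an assumption) computes the receiver's posterior over types from a prior p and an equilibrium message profile:

```python
# Bayesian updating of the receiver's beliefs, as required by
# condition (3) of Definition 7.

def posterior(prior, profile, message):
    """alpha_{theta|message}: probability of each type given the message.

    prior:   dict type -> p(type), summing to 1
    profile: dict type -> message sent by that type
    """
    senders = {t: p for t, p in prior.items() if profile[t] == message}
    total = sum(senders.values())
    if total == 0:
        # off-path message: Bayes' rule does not pin down the beliefs
        return None
    return {t: p / total for t, p in senders.items()}

# C4-style pooling profile: every type sends dh (Table 6), uniform prior.
prior = {"dd": 0.25, "dh": 0.25, "hd": 0.25, "hh": 0.25}
profile = {t: "dh" for t in prior}
print(posterior(prior, profile, "dh"))
# since every type pools on dh, alpha_{theta|dh} = p(theta)
```

When only a subset of types sends the observed message, the posterior renormalises the prior over that subset, which is exactly the fraction p(θ_2) / Σ p(θ′_2) in Definition 7.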
4.2. The Commitment Equilibrium Properties. There are, in general, several different equilibria in the conflict game with nonbinding messages. The objective of this section is to single out a particular equilibrium that satisfies the following properties and leads to a coordination outcome, given that it is both salient and in favour of the sender. In what follows we present Axioms 1 and 2, which will be used to explain which particular equilibrium can be used as a theoretical prediction in experimental games with different levels of conflict.
Axiom 1 (truth-telling beliefs). If the receiver faces a message m̂_2 = θ_2, then α_{θ2|m̂2} > 0. If the message m_2 = θ_2 is not part of the message profile m*_2, then α_{θ2|m2} = 1.
Following Farrell and Rabin [11] we assume that people in real life do not seem to lie as much, or to question each other's statements as much, as the game-theoretic predictions state. Axiom 1 captures the intuition that it is natural for people to take the literal meaning of a message seriously. This does not mean that they believe everything they hear. Rather, they use the meaning as a starting point and then assess credibility, which involves questioning in the form of "why would she want me to think that? Does she have incentives to actually carry out what she says?"
More precisely, truth-telling beliefs emphasise that in equilibrium, when the receiver faces a particular message, its literal meaning is that the sender has the intention of playing in this way. Thus, the probability of facing truth-telling messages must be greater than zero. In the same way, when the sender does not choose a particular message, she is signalling that there are no incentives to make the receiver believe this, given that the receiver's best response is h. Therefore, we can assume that the receiver must fully believe in the message, because both players understand that the purpose of the strategic move is to induce the receiver to play d. If the sender signals the opposite, she is revealing her true type by mistake; then the receiver believes her with probability 1 (see the "belief of truth-telling" entries in Table 6).
Axiom 2 (sender's bargaining power). If m*_θ2 is part of the message profile m*_2, then s*_1mθ2 = d.
Axiom 2 captures the use of communication as a means to influence the receiver to play dove. That is, there is an equilibrium where the only messages sent are those that induce the receiver to cooperate. In order to characterise a communication equilibrium such as the one described above, we first focus on the completely separating message profile, where the sender tells the truth. Naturally, m_θ2 is a truth-telling message if and only if m_θ2 = θ_2 (see the "message by type" entries in Table 6), and given the message, the receiver's best response will be to cooperate (see the "player 1's best response" entries in Table 6).

Table 6: Perfect Bayesian equilibria that satisfy Axioms 1 and 2.

C1. Messages by type (m*_dd, m*_dh, m*_hd, m*_hh) = (dd, dh, dd, hh); player 1's best responses (s*_1dd, s*_1dh, s*_1hd, s*_1hh) = (d, d, h, d); beliefs of truth-telling: α*_{dd|dd} = p_dd/(p_dd + p_hd), α*_{dh|dh} = α*_{hd|hd} = α*_{hh|hh} = 1.

C2. Messages (dd, dh, dd, dh); best responses (d, d, h, h); beliefs: α*_{dd|dd} = p_dd/(p_dd + p_hd), α*_{dh|dh} = p_dh/(p_dh + p_hh), α*_{hd|hd} = α*_{hh|hh} = 1.

C3. Messages (dh, dh, hh, hh); best responses (h, d, h, d); beliefs: α*_{dd|dd} = 1, α*_{dh|dh} = p_dh/(p_dd + p_dh), α*_{hd|hd} = 1, α*_{hh|hh} = p_hh/(p_hd + p_hh).

C4. Messages (dh, dh, dh, dh); best responses (h, d, h, h); beliefs: α*_{dd|dd} = 1, α*_{dh|dh} = p_dh/(p_dd + p_dh + p_hd + p_hh), α*_{hd|hd} = α*_{hh|hh} = 1.
With this in mind, it is possible to stress that a contribution of our behavioural model is to develop experimental designs that aim to unravel the strategic use of communication to influence (i.e., manipulate) others' behaviour. That is, the Nash equilibrium implies that players must take the other players' strategies as given and then look for their best response. However, commitment theory, in Schelling's sense, implies an additional step, where players recognise that opponents are fully rational. Based on this fact, they evaluate different techniques for turning the other's behaviour into their favour. In our case, the sender asks herself, "This is the outcome I would like from this game; is there anything I can do to bring it about?"
Proposition 8 (there is always a liar). The completely truth-telling message profile m_2 = (dd, dh, hd, hh) cannot be part of any PBE of the conflict game with nonbinding messages.
Proposition 8 shows that the completely truth-telling message profile is not an equilibrium in the conflict game. The problem lies in the sender of type hd, because revealing her actual type is not incentive compatible and there always exists at least one successful message to induce the counterpart to play dove. For this reason, we can ask whether there exists some message that induces the sender to reveal her actual type but at the same time leads to a successful strategic move. Definition 9 is the bridge between nonbinding messages and the commitment messages presented in the previous section.
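The deviation that drives Proposition 8 can be made concrete. In the C3 case (x = 0.5, y = 1.5), if all types told the truth and were believed, the receiver would answer the revealed type hd with h; by instead pooling with the promise dh, type hd is answered with d and earns y. A numeric sketch (our own illustration):

```python
# Numeric check of the "there is always a liar" logic for C3 (x=0.5, y=1.5):
# under believed truth-telling, type hd gains by imitating the promise dh.

x, y = 0.5, 1.5

def sender_payoff(t, receiver_action):
    """Type t = (action after d, action after h); payoffs from the 2x2 matrix."""
    a2 = t[0] if receiver_action == "d" else t[1]
    table = {("d", "d"): (1.0, 1.0), ("d", "h"): (x, y),
             ("h", "d"): (y, x), ("h", "h"): (0.25, 0.25)}
    return table[(receiver_action, a2)][1]

# Believed truth-telling: the receiver plays h against a revealed hd (y > x),
# but d against a believed promise dh.
truthful = sender_payoff("hd", "h")   # exploited after revealing hd
lying = sender_payoff("hd", "d")      # pooling with the promise dh
print(truthful, lying)                # 0.5 1.5 -> the deviation is profitable
```

Since 1.5 > 0.5, truth-telling is not incentive compatible for type hd, which is exactly why the completely separating profile breaks down.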
Definition 9 (self-committing message). Let m̂*_θ2 be a truth-telling message believed with probability 1, that is, α_{θ2|m̂*θ2} = 1. m̂*_θ2 is a self-committing message if and only if u_θ2(s*_1m, m̂*_θ2, m̂*_−θ2) ≥ u_θ2(s*_1m, m_θ2, m̂*_−θ2) for every m_θ2 ∈ Θ_2.
We introduce the self-committing message property because we want to stress that a strategic move is a two-stage process. Not only is communication useful in revealing information, but it can also be used to manipulate others' behaviour. The sender of a message must consider how the receiver would react if he believes it, and if that behaviour works in her favour she will not have incentives to lie. A message is self-committing if, when believed, it creates incentives for the sender to fulfill it [12]. The idea behind a threat or a promise is to impose some risk on the opponent in order to influence him, but this implies a risk for the sender too. This fact has led to associating strategic moves with slightly irrational behaviours, when actually, in order to be executed, a very detailed evaluation of the consequences is needed. Proposition 10 and its corollary explain the relation between the conditioned messages and the incentives to tell the truth.
Proposition 10 (incentives to commit). Let m̂_2 = m̂*_θ2 be a commitment message in the conflict game with perfect information. If s*_1m̂θ2 = d, then m̂*_θ2 is a self-committing message.
Corollary to Proposition 10. If m̂_2 is a threat or a promise in the conflict game with perfect information, then m̂*_θ2 = m̂_2 is a self-committing message.
The intuition behind Proposition 10 and its corollary is that if a message induces the other to cooperate, then the sender has incentives to tell the truth. Moreover, as illustrated in Proposition 5, threats and promises always induce the counterpart to cooperate; therefore, they endogenously give the sender incentives to comply with what is announced.

As we can see in the conflict game with perfect information (for an illustration see Table 5), in the C1 and C2 cases the warning is the way to reach the best outcome. If we consider the possibility to send nonbinding messages when the sender's type is equal to a warning strategy, then revealing her type is self-committing. The problem in the C3 and C4 cases is more complex, given that the warning message is not self-committing and the way to improve the bargaining power is to use a threat or a promise. This fact leads to a trade-off between choosing a weakly dominant strategy that fails to induce the opponent to play dove and a strategy that improves her bargaining power but implies a higher risk for both of them.
The required elements for a perfect Bayesian equilibrium in each game are shown in Tables 6 and 7. It is important to bear in mind that the beliefs that appear in Table 7 are necessary conditions for implementing the PBE presented in Table 6, given that they satisfy truth-telling beliefs and sender's bargaining power.
Table 7: Beliefs that support the perfect Bayesian equilibrium.

C1. Warning: α*_{dd|dd} ≥ α*_{hd|dd}(y − x)/(1 − y). Threat: truth.

C2. Warning: α*_{dh|dh} ≥ α*_{hh|dh}(0.25 − x)/0.75. Threat: α*_{dd|dd} ≥ α*_{hd|dd}(y − x)/(1 − y).

C3. Warning: lie. Threat: α*_{hh|hh} ≥ α*_{hd|hh}(y − x)/(x − 0.25). Promise: α*_{dh|dh} ≥ α*_{dd|dh}(y − 1)/0.75.

C4. Warning: lie. Promise: α*_{dh|dh} ≥ [α*_{dd|dh}(y − 1) + α*_{hd|dh}(y − x) + α*_{hh|dh}(0.25 − x)]/0.75.
The choice of which message to send follows a simple algorithm: first, the sender tells the truth. If the truth-telling message leads the receiver to play dove, then she has no incentive to lie. Otherwise, she looks for another message that induces the receiver to play dove. If no message leads the receiver to play dove, messages lack any purpose, and she is indifferent between them.
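This selection rule can be sketched in a few lines of Python (the helper names are ours; the dictionary of induced best responses is read off Table 6):

```python
# The sender's message-selection rule described above: truth first,
# then any message that induces dove, otherwise indifference.

def choose_message(t, induces_dove):
    """induces_dove: dict message -> True if the receiver best-responds with d."""
    if induces_dove[t]:
        return t                      # truth-telling is already self-serving
    for m in ("dd", "dh", "hd", "hh"):
        if induces_dove[m]:
            return m                  # a profitable lie
    return t                          # no message works: indifferent, tell the truth

# C1 case (Table 6): the messages dd, dh, and hh induce dove; hd does not.
induces = {"dd": True, "dh": True, "hd": False, "hh": True}
print({t: choose_message(t, induces) for t in ("dd", "dh", "hd", "hh")})
```

For the C1 case the rule reproduces the equilibrium message profile (dd, dh, dd, hh) of Table 6: type hd is the only liar, pooling with the warning dd. (When several messages induce dove, as for type hd in C3, the sender is indifferent among them and the equilibrium pins down the choice.)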
Table 6 shows the messages, the receivers' strategies, and their belief profiles in a particular equilibrium we argue is the most salient. As we showed above, in the conflict game (see Table 5) the sender always favours those messages for which the receiver's best response is dove. In the C1 case there are three different messages, in the C2 and C3 cases there are two messages, and the worst situation is the C4 case, where every type of player sends the same message. This fact leads to a first result: if the conflict is high, there are very strong incentives to lie and communication leads to a pooling equilibrium.
In addition, notice that Table 5 specifies which messages will be used as commitment messages in the conflict game with binding communication illustrated in Figure 1. That is, if credibility is exogenous, the theoretical prediction is that such messages are sent. This means that messages are not randomly sent; there is a clear intention behind them, namely to induce the receiver to choose the action most favourable for the sender. Table 7, in turn, presents the minimum probability threshold that can make the strategic move successful. That is, if credibility is sufficiently high, the message works and achieves its purpose in the conflict game with nonbinding communication illustrated in Figure 2.
In Section 3 we assumed that the sender could communicate a completely credible message in order to influence her counterpart. The question is, how robust is this equilibrium if we reduce the level of commitment? Proposition 11 summarises the condition for the receiver to choose dove as the optimal strategy. It is the way of calculating the beliefs shown in Table 7.
Proposition 11 (incentives to cooperate). s*_1m2 = d if and only if (1 − y)α_{dd|m2} + 0.75·α_{dh|m2} + (x − y)α_{hd|m2} + (x − 0.25)α_{hh|m2} ≥ 0.
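Proposition 11 gives a directly computable test. A Python sketch (our own illustration), evaluated for the C4 parameterisation (x = 0, y = 1.5), shows how the receiver's choice flips with the weight his beliefs put on the promise-keeping type dh:

```python
# The receiver's incentive-to-cooperate condition of Proposition 11:
# the belief-weighted gain from playing d rather than h must be nonnegative.

def dove_is_best(alpha, x, y):
    """alpha: beliefs (alpha_dd, alpha_dh, alpha_hd, alpha_hh) after a message."""
    a_dd, a_dh, a_hd, a_hh = alpha
    return (1 - y) * a_dd + 0.75 * a_dh + (x - y) * a_hd + (x - 0.25) * a_hh >= 0

# C4 (x = 0, y = 1.5): a pooling message dh succeeds only if the
# probability of facing the truthful type dh is high enough.
print(dove_is_best((0.0, 0.8, 0.1, 0.1), 0.0, 1.5))  # -> True
print(dove_is_best((0.0, 0.2, 0.4, 0.4), 0.0, 1.5))  # -> False
```

Each coefficient is the receiver's payoff difference between playing d and playing h against that sender type (e.g., against type dd it is u_1(d, d) − u_1(h, d) = 1 − y), so the condition is simply that cooperating is his expected best response.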
Based on Proposition 11, the second result is that cheap talk always has meaning in equilibrium. We consider this equilibrium selection relevant because the sender focuses on the literal meaning of the statements but understands that some level of credibility is necessary to improve her bargaining power. Table 7 summarises the "truthful enough" property of the statements. Here, the receiver updates his beliefs in a rational way, and he chooses to play dove if and only if it is his expected best response. We can interpret the beliefs in Table 7 as thresholds: if the corresponding condition is satisfied, the sender succeeds in her intention of manipulating the receiver's behaviour. Thus, some level of credibility is necessary, but not at a 100% level.
It is clear that if the conflict is high, the commitment threshold is also higher. In the C1 and C2 cases the sender must commit herself to implement the warning strategy, which is a weakly dominant strategy. In the C3 case the strategic move implies a threat or a promise, formulating an aggressive statement in order to deter the receiver from behaving aggressively. The worst situation is the C4 case, where there is only one way to avoid the disagreement point: to implement a promise. The promise in this game is a commitment that renounces the possibility of exploiting the opponent, because fear can destroy the agreement of mutual cooperation.
In the scope of this paper, threats are not only punishments and promises are not only rewards. There is a credibility problem because these strategic moves imply giving up the rational self-serving behaviour that a simple, one-step line of thinking would dictate. The paradox is that this decision is rational if the sender understands that her move can influence other players' choices, because communication is the way to increase her bargaining power. This implies a second level of thinking, such as forward induction reasoning.
5. Conclusions
In this paper we propose a behavioural model following Schelling's tactical approach to the analysis of bargaining. In his Essay on Bargaining (1956), Schelling analyses situations where subjects watch and interpret each other's behaviour, each one acting on the expectations that his own behaviour creates. This analysis shows that an opponent with rational beliefs expects the other to try to disorient him, and he will ignore the movements he perceives as stagings especially played to win the game.
The model presented here captures different levels of conflict by means of a simple parameterisation. In
a bilateral bargaining environment it analyses the strategic use of binding and nonbinding communication. Our findings show that when messages are binding, there is a first-mover advantage. This situation can be changed in favour of the second mover if the latter sends threats or promises in a preplay move. On the other hand, when players have the possibility to send nonbinding messages, their incentives to lie depend on the level of conflict. When conflict is low, the sender has strong incentives to tell the truth and cheap talk will almost fully transmit private information. When conflict is high, the sender has strong incentives to bluff and lie. Therefore, in order to persuade the receiver to cooperate with her nonbinding messages, the sender is required to provide a minimum level of credibility (not necessarily 100%).
In summary, the equilibrium that satisfies truth-telling beliefs and sender's bargaining power allows us to show that the less conflict the game has, the more informative the equilibrium signal is, and the weaker the commitment needed to implement it. Our equilibrium selection is based on the assumption that in reality people do not seem to lie as much, or question each other's statements as much, as rational choice theory predicts. For this reason, the conflict game with nonbinding messages is a good environment for testing different game-theoretical hypotheses, because it is simple enough to be implemented in the lab.
With this in mind, the strategic use of communication in a conflict game, as illustrated in our model, is the right way to build a bridge between two research programs: the theory on bargaining and that on social dilemmas. As Bolton [15] suggested, bargaining and dilemma games have been developed in experimental research as fairly separate literatures. For bargaining, the debate has been centred on the role of fairness and the nature of strategic reasoning. For dilemma games, the debate has involved the relative weights that should be given to strategic reputation building, altruism, and reciprocity. The benefit of the structure and payoff scheme we propose is to study all these elements at the same time. Our model provides a simple framework to gather and interpret empirical information. In this way, experiments could indicate which parts of the theory are most useful to predict subjects' behaviour, and at the same time we can identify behavioural parameters that the theory does not reliably determine.
Moreover, the game presented here can be a very useful tool to design economic experiments that can lead to new evidence about bilateral bargaining and, furthermore, about human behaviour in a wider sense. On the one hand, it can contribute to a better understanding of altruism, selfishness, and positive and negative reciprocity. A model that only captures one of these elements will necessarily portray an incomplete image. On the other hand, bargaining and communication are fundamental elements to understand the power that one of the parties can have.
In further research, we are interested in exploring the emotional effects of cheating or being cheated on, particularly by considering the dilemma that arises when these emotional effects are weighed against the possibility of obtaining material advantages. To do so, it is possible to consider an even simpler version of our model using a coarser type space
(e.g., only hawk and dove). This could illustrate the existing relationship between the level of conflict and the incentives to lie. As the model predicts, the higher the level of conflict, the more incentives players have to not cooperate, but they are better off if the counterpart does cooperate. Therefore, players with type hawk would be more inclined to lie and disguise themselves as cooperators. By measuring the emotional component of lying and being lied to, we will be able to show that people do not only value the material outcomes of bargaining but that the means used to achieve those ends are also important to them.
Appendix
Proof of Proposition 2. Suppose that u1(a1*, a2*) = x and u2(a1*, a2*) = y; then y ≥ 1. If a2* = hd, then u1(d, hd) ≥ u1(h, hd) and x ≥ y, but by assumption y > x. If a2* = hh, then u1(d, hh) ≥ u1(h, hh) and x ≥ 0.25, and at the same time u2(h, hh) ≥ u2(h, hd). The only compatible case is x = 0.25, but by assumption x ≠ 0.25. Therefore, u1(a1*, a2*) ≠ x and u2(a1*, a2*) ≠ y.
Proof of Proposition 5. Let m̂2 be a threat or a promise. Following Definitions 3 and 4, u2(a1m̂*, m̂2*) ≥ u2(a1*, a2*). Suppose that a1m̂* = h; then there are two possibilities: m̂2* = a2* or u2(h, m̂2*) ≥ u2(a1*, a2*). If m̂2* = a2*, then by definition m̂2* is neither a threat nor a promise. If u2(h, m̂2*) ≥ u2(a1*, a2*), then a1* = d or a1* = h. If a1* = d, by assumption u2(h, m̂2*) < u2(d, a2*). If a1* = h and m̂2 is a threat, then u2(h, m̂2*) < u2(a1*, a2*). If a1* = h and m̂2 is a promise, it must fulfil u2(h, m̂2*) ≥ u2(h, a2*) and u2(d, m̂2*) < u2(d, a2*). The C1 and C2 games are not under consideration because a1* = d, and for the C3 and C4 cases there are no messages for which these conditions hold at the same time. Therefore, a1m̂* = d.
Proof of Proposition 6. Let us consider the message m2 = dh. By Proposition 2 we know that u2(a1*, a2*) ≠ y, and by assumption u1(d, dh) > u1(h, dh); then m2 = dh is a commitment message, because u2(d, dh) = 1 ≥ u2(a1*, a2*). If u2(d, dh) > u2(a1*, a2*), then 1 > u2(a1*, a2*); to satisfy this condition, and using Proposition 2 again, we conclude that (a1*, a2*) = (h, ha2h*). As a2* = ha2h* and it is part of the SPNE, u2(d, ha2h*) > u2(d, da2h*), and therefore y > 1.
The proof in the other direction is as follows. Let y > 1; then a2* = ha2h*. Using Proposition 2, we know that u1(a1*, a2*) ≠ x; therefore a1* = h. Now u2(a1*, a2*) < 1. As we showed in the first part, m2 = dh is a commitment message such that u2(a1m*, dh) = 1. Therefore, there exists a commitment message such that u2(a1m*, m2*) > u2(a1*, a2*).
Proof of Proposition 8. Consider the sender types θdh = dh and θhd = hd. If m2* is a completely truth-telling message, then αdh|dh = 1 and αhd|hd = 1. By assumption, u1(d, s1(·|dh), dh, μdh) = 1 and u1(h, s1(·|dh), dh, μdh) = 0.25; then a1dh* = d. In the same way, u1(d, s1(·|hd), hd, μhd) = x and u1(h, s1(·|hd), hd, μhd) = y; then a1hd* = h. Therefore, the utility for the sender is udh(d, s1(·|dh), dh, μdh) = 1 and uhd(h, s1(·|hd), hd, μhd) = x. These conditions imply that the sender of type hd has incentives to deviate, and m2 = (dd, dh, hd, hh) cannot be part of any PBE.
Proof of Proposition 10. Let m̂2 = m̂θ2* be a commitment message in the conflict game with perfect information, with a1m̂(m2)* = d. If m̂θ2* = m̂2 is not a self-committing message, then another message mθ2 must exist such that uθ2(d, s1(·|m̂(m2)), m̂θ2*, μθ2*) < uθ2(a1m*, mθ2, μθ2*). Given the payoff assumptions, uθ2(d, s1(·|m̂(m2)), m̂θ2*, μθ2*) ≥ uθ2(a1m*, mθ2, μθ2*) for every mθ2 ∈ M2. Therefore, m̂θ2* = m̂2 is a self-committing message.
Proof of Corollary to Proposition 10. The proof of the corollary follows from Propositions 5 and 10, and thus it is omitted.
Proof of Proposition 11. The expected utility for each of the receiver's strategies is as follows:

u1(d, μm2) = 1·αdd|m2 + 1·αdh|m2 + x·αhd|m2 + x·αhh|m2,
u1(h, μm2) = y·αdd|m2 + 0.25·αdh|m2 + y·αhd|m2 + 0.25·αhh|m2.

Therefore, u1(d, μm2) ≥ u1(h, μm2) if and only if (1 − y)αdd|m2 + 0.75·αdh|m2 + (x − y)αhd|m2 + (x − 0.25)αhh|m2 ≥ 0.
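The threshold condition above is simply the difference of the two expected utilities, so it is easy to check numerically. The following sketch is an illustration only, not part of the paper's formal apparatus: the belief vector over the four sender types and the values of x and y are arbitrary assumptions chosen for the example.

```python
def receiver_prefers_d(alpha, x, y):
    """Return True if playing d is a best response for the receiver.

    alpha maps each sender type ('dd', 'dh', 'hd', 'hh') to the belief
    assigned to it after the message; x and y are the payoff parameters
    used throughout the proofs (the numeric values below are assumptions).
    """
    u_d = 1 * alpha['dd'] + 1 * alpha['dh'] + x * alpha['hd'] + x * alpha['hh']
    u_h = y * alpha['dd'] + 0.25 * alpha['dh'] + y * alpha['hd'] + 0.25 * alpha['hh']
    # Equivalent threshold form from the proof: (1-y)a_dd + 0.75a_dh
    # + (x-y)a_hd + (x-0.25)a_hh >= 0 is exactly u_d - u_h >= 0.
    threshold = ((1 - y) * alpha['dd'] + 0.75 * alpha['dh']
                 + (x - y) * alpha['hd'] + (x - 0.25) * alpha['hh'])
    assert (u_d >= u_h) == (threshold >= 0)
    return u_d >= u_h

# Full confidence in a truth-telling dove type makes d optimal:
print(receiver_prefers_d({'dd': 0, 'dh': 1, 'hd': 0, 'hh': 0}, x=0.5, y=1.5))  # True
# Certainty of a hawk type (hd) with y > x makes h optimal:
print(receiver_prefers_d({'dd': 0, 'dh': 0, 'hd': 1, 'hh': 0}, x=0.5, y=1.5))  # False
```

The internal assertion confirms that comparing the two expected utilities directly agrees with the threshold form stated in the proof.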
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This paper was elaborated during the authors' stay at the University of Granada, Spain. The authors are grateful for the help and valuable comments of Juan Lacomba, Francisco Lagos, Fernanda Rivas, Aurora García, Erik Kimbrough, Sharlane Scheepers, and the seminar participants at the VI International Meeting of Experimental and Behavioural Economics (IMEBE) and the Economic Science Association World Meeting 2010 (ESA). Financial support from the Spanish Ministry of Education and Science (Grant code SEJ2009-11117/ECON), the Proyecto de Excelencia (Junta de Andalucía, P07-SEJ-3261), and the Project VIE-1375 from the Universidad Industrial de Santander (UIS) in Bucaramanga, Colombia, is also gratefully acknowledged.
References
[1] A. Rubinstein, "Perfect equilibrium in a bargaining model," Econometrica, vol. 50, no. 1, pp. 97–109, 1982.
[2] T. Schelling, The Strategy of Conflict, Harvard University Press, Cambridge, Mass, USA, 1960.
[3] R. Selten, "Reexamination of the perfectness concept for equilibrium points in extensive games," International Journal of Game Theory, vol. 4, no. 1, pp. 25–55, 1975.
[4] A. Dixit, "Thomas Schelling's contributions to game theory," Scandinavian Journal of Economics, vol. 108, no. 2, pp. 213–229, 2006.
[5] R. B. Myerson, "Learning from Schelling's Strategy of Conflict," Journal of Economic Literature, vol. 47, no. 4, pp. 1109–1125, 2009.
[6] J. Hirshleifer, On the Emotions as Guarantors of Threats and Promises, MIT Press, Cambridge, Mass, USA, 1987.
[7] J. Hirshleifer, "Game-theoretic interpretations of commitment," in Evolution and the Capacity for Commitment, R. M. Nesse, Ed., Russell Sage Foundation, New York, NY, USA, 2001.
[8] D. B. Klein and B. O'Flaherty, "A game-theoretic rendering of promises and threats," Journal of Economic Behavior and Organization, vol. 21, no. 3, pp. 295–314, 1993.
[9] V. P. Crawford and J. Sobel, "Strategic information transmission," Econometrica, vol. 50, no. 6, pp. 1431–1451, 1982.
[10] J. Farrell, "Meaning and credibility in cheap-talk games," Games and Economic Behavior, vol. 5, no. 4, pp. 514–531, 1993.
[11] J. Farrell and M. Rabin, "Cheap talk," Journal of Economic Perspectives, vol. 10, no. 3, pp. 103–118, 1996.
[12] M. Rabin, "Communication between rational agents," Journal of Economic Theory, vol. 51, no. 1, pp. 144–170, 1990.
[13] S. Baliga and T. Sjöström, "Arms races and negotiations," Review of Economic Studies, vol. 71, no. 2, pp. 351–369, 2004.
[14] T. Schelling, "An essay on bargaining," The American Economic Review, vol. 46, pp. 281–306, 1956.
[15] G. E. Bolton, "Bargaining and dilemma games: from laboratory data towards theoretical synthesis," Experimental Economics, vol. 1, no. 3, pp. 257–281, 1998.
Research Article
Intermediaries in Trust: Indirect Reciprocity, Incentives, and Norms
Giangiacomo Bravo,1 Flaminio Squazzoni,2 and Károly Takács3
1Department of Social Studies, Linnaeus University, Universitetsplatsen 1, 35195 Växjö, Sweden
2Department of Economics and Management, University of Brescia, Via San Faustino 74B, 25122 Brescia, Italy
3MTA TK Lendület Research Center for Educational and Network Studies (RECENS), Hungarian Academy of Sciences, Országház Utca 30, Budapest 1014, Hungary
Correspondence should be addressed to Giangiacomo Bravo; [email protected]
Received 2 September 2014; Accepted 4 December 2014
Academic Editor: Vincent Buskens
Copyright © 2015 Giangiacomo Bravo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Any trust situation involves a certain amount of risk for trustors that trustees could abuse. In some cases, intermediaries exist who play a crucial role in the exchange by providing reputational information. To examine under what conditions intermediary opinion could have a positive impact on cooperation, we designed two experiments based on a modified version of the investment game where intermediaries rated the behaviour of trustees under various incentive schemes and different role structures. We found that intermediaries can increase trust if there is room for indirect reciprocity between the involved parties. We also found that the effects of monetary incentives and social norms cannot be clearly separated in these situations. If properly designed, monetary incentives for intermediaries can have a positive effect. On the one hand, when intermediary rewards are aligned with the trustor's interest, investments and returns tend to increase. On the other hand, fixed monetary incentives perform worse than any other incentive scheme, and worse than endogenous social norms, in ensuring trust and fairness. These findings should make us reconsider the mantra of incentivization in conventional social and public policy.
1. Introduction
A trust relationship is an exchange where at least two parties interact, that is, a trustor and a trustee, and in which there is a certain amount of risk for the former. If the trustor decides to place trust, the trustee can honour or abuse it. If honouring trust is costly, as happens in one-shot interactions and sometimes even in repeated exchanges, the trustee has no rational incentive to be trustworthy. Knowing this, the trustor is hardly likely to make the first move [1].
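The risk structure just described is usually formalized as an investment (trust) game in the style of Berg et al. The following sketch uses an endowment of 10 and a multiplier of 3 purely as illustrative assumptions; the experiments reported in this paper use a modified, intermediated version of this game.

```python
def investment_game_round(endowment, sent, returned, multiplier=3):
    """One round of a standard investment (trust) game.

    The trustor sends `sent` (0..endowment); the experimenter multiplies it
    by `multiplier`; the trustee then returns `returned` (0..multiplier*sent).
    """
    assert 0 <= sent <= endowment
    multiplied = multiplier * sent
    assert 0 <= returned <= multiplied
    trustor_payoff = endowment - sent + returned
    trustee_payoff = multiplied - returned
    return trustor_payoff, trustee_payoff

# If the trustee abuses trust (returns nothing), the trustor ends up worse
# off than by not investing at all, which is why placing trust is risky:
print(investment_game_round(10, 10, 0))   # (0, 30)
# An equal split of the multiplied amount leaves both better off:
print(investment_game_round(10, 10, 15))  # (15, 15)
```

The gap between these two outcomes is the trustor's risk; the intermediary treatments discussed below are ways of narrowing it by providing reputational information.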
Understanding how trust can be established in such hostile situations is of paramount importance. One of the most interesting sociological explanations suggests that social and economic exchanges are embedded in social contexts where certain norms and roles have evolved to mediate between individual interests. For instance, intermediaries might act as advisors and mediators between the parties involved, and
reputation or gossip can help to spread information about unknown partners that helps trustors take the risk of interaction [1–3].
Recent experimental results have shown that individuals can overcome distrust and cooperate more frequently if behaviour in the exchange is observed by a third party [4–8]. This happens even when the opinion of a third party has no consequence on the payoffs of the individuals and reputation-building strategies are ruled out [9]. This indicates that, in many real situations, third parties can reduce information asymmetry and temptations of free riding, induce mutual trust, and ensure collective benefit. This requires understanding why and under which conditions information from third parties should be trusted by trustors, and what type of incentives can make judgements or recommendations by third parties credible in the eyes of trustors. Indeed, first, intermediaries' opinion is often subjective. Secondly,
Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2015, Article ID 234528, 12 pages, http://dx.doi.org/10.1155/2015/234528
it is formulated on the trustee's past behaviour in situations where the trustees could have strategically mimicked trustworthy signals in view of benefits from future trustors. This transforms the problem of trust into a "secondary problem of trust" [10]. Here, the challenge is to understand under what conditions, in anonymous exchanges with unknown partners, intermediaries could be considered reliable and how they should be motivated to provide information that increases trust.
For instance, let us consider certain important social and economic interactions, such as the trust relationships between employees and managers in big companies or between two potential partners for a relationship. In these cases, intermediaries are called on to express their opinion on certain attributes or behaviour, which are crucial to create trust. Sometimes they do so voluntarily, without receiving any material payoff, such as someone recommending a friend as a partner to another friend. In other cases, intermediaries are financially motivated professionals, such as an HR manager recommending an employee for promotion to his/her company manager. Therefore, while in certain spheres of social life the social function of intermediaries has been institutionalized through material incentives and roles, in other situations informal or voluntary norms have developed.
The aim of our paper was to examine these trust problems in triadic relations (i.e., between trustors, intermediaries, and trustees) to better understand the conditions that could transform the intermediary opinion into a trust carrier. We conducted two experiments where subjects played a modified version of the repeated investment game with intermediaries added to the typical trustor/trustee dyadic relation. We manipulated incentives and roles of intermediaries to test their impact on cooperation in particularly hostile conditions. By this, we mean a situation where (i) intermediaries formulated their opinion on the trustee behaviour on a limited set of information, (ii) their opinion was not public, and (iii) it did not have long-term consequences on the material payoffs of the trustees. In this situation, intermediaries had only a limited set of information (i) and bad