Abstract

This paper studies the conditions that improve bargaining power using threats and promises. We develop a model of strategic communication, based on the conflict game with perfect information, in which a noisy commitment message is sent by a better-informed sender to a receiver who takes an action that determines the welfare of both. Our model captures different levels of aligned preferences, for which classical games such as stag hunt, hawk-dove, and prisoner's dilemma are particular cases. We characterise the perfect Bayesian equilibrium with nonbinding messages under truth-telling beliefs and sender's bargaining power assumptions. Through our equilibrium selection we show that the less conflict the game has, the more informative the equilibrium signal is and the less credibility is necessary to implement it.

1. Introduction

Bargaining power refers to the relative ability that a player has to exert influence upon others so as to improve her own wellbeing. It is also related to idiosyncratic characteristics such as patience, so that a player turns the final outcome in her favour if she has better outside options or if she is more patient [1]. In addition, Schelling [2] described bargaining power as the chance to cheat and bluff, the ability to set the best price for oneself. For instance, when the union says to the management of a firm, "we will go on strike if you do not meet our demands," or when a nation announces that any military provocation will be met with nuclear weapons, it is clear that communication has been used with a strategic purpose: to gain bargaining power.

In bargaining theory, strategic moves are actions taken prior to playing a subsequent game, with the aim of changing the available strategies, the information structure, or the payoff functions. The aim is to change the opponent's beliefs, making it credible that the position is unchangeable. Following Selten [3], the formal notion of credibility is subgame perfectness. (Schelling developed the notion of credibility as the outcome that survives iterated elimination of weakly dominated strategies. We know that, in the context of generic extensive-form games with complete and perfect information, this procedure does indeed work (see [4]).) Nevertheless, we argue that if a message is subgame perfect, then it is neither a threat nor a promise. Consider the following example: a union says to management, "If you increase our salaries, we will be grateful." In such a case, credibility is not in doubt, but we could hardly call this a promise or a threat. Schelling [2] denotes fully credible messages as warnings, and we follow this distinction in order to separate warnings from threats and promises.

Commitment theory was proposed by Schelling [2] (for a general review of Schelling's contribution to economic theory, see Dixit [4] and Myerson [5]), who introduced a tactical approach to communication and credibility inside game theory. Hirshleifer [6, 7] and Klein and O'Flaherty [8] worked on the analysis and characterisation of strategic moves in the standard game theory framework. In the same way, Crawford and Sobel [9] formally showed that an informed agent could reveal his information in order to induce the uninformed agent to make a specific choice.

There are three principal reasons for modelling preplay communication: information disclosure (signalling), coordination goals (cheap talk), and strategic influence (in Schelling's sense). Following Farrell [10] and Farrell and Rabin [11], the main problem in modelling nonbinding messages is the "babbling equilibrium," in which statements mean nothing. However, they showed that cheap talk can convey information in a general signalling environment, displaying a particular equilibrium in which statements are meaningful. Along these lines, Rabin [12] developed credible message profiles, looking for a meaningful communication equilibrium in cheap-talk games.

Our paper contributes to the strategic communication literature in three ways. First, we propose a particular characterisation of warnings, threats, and promises in the conflict game with perfect information, as mutually exclusive categories. For this aim, we first define a sequential protocol in the conflict game originally proposed by Baliga and Sjöström [13]. This benchmark game is useful because it is a stylised model that captures different levels of aligned-preferences, for which classical games such as stag hunt, hawk-dove, and prisoner’s dilemma are particular cases.

Second, we model strategic moves with nonbinding messages, showing that choosing a particular message and its credibility are related to the level of conflict. In this way, the conflict game with nonbinding messages captures a bargaining situation where people talk about their intentions, by simply using cheap talk. More precisely, we analyse a game where a second player (the sender) can communicate her action plan to the first mover (the receiver). (To avoid confusion and gender bias, the sender will be denoted as “she,” and the receiver as “he.”) In fact, the sender must decide after she observes the receiver’s choice, but the commitment message is a preplay move.

Third, we introduce a simple parameterisation that can be used as a baseline for experimental research. By means of this model it is possible to study how, in a bargaining environment, information and communication influence the power one of the parties may have. In other words, this addresses the following: the logic supporting Nash equilibrium is that each player thinks about the best he could do, given what the other does. In this view, players cannot influence others' behaviour before play. On the contrary, Schelling [2] argues that players may consider what they can do, as a preplay move, to influence (i.e., manipulate) the behaviour of their counterpart and turn their payoffs in their favour. Therefore, our behavioural model provides a framework where it is possible to (experimentally) study the strategic use of communication in order to influence others, under different levels of conflict.

We analyse conceptually the importance of three essential elements of commitment theory: (i) the choice of a response rule, (ii) the announcement about future actions, and (iii) the credibility of messages. We answer the following questions: what is the motivation behind threats and promises? And can binding messages improve the sender's bargaining power? In this paper, threats and promises are defined as a second mover's self-serving announcement, committing in advance how she will play in all conceivable eventualities, as long as it specifies at least one action that is not her best response (see [4, 7]). With this definition, we argue that binding messages improve the sender's bargaining power in the perfect information conflict game, even when it is clear that by assuming binding messages we avoid the problem of credibility.

The next step is to show that credibility is related to the probability that the sender fulfills the action specified in the nonbinding message. For this, we highlight that players share a common language, and the literal meaning must be used to evaluate whether a message is credible or not. Hence, the receiver has to believe in the literal meaning of announcements if and only if it is highly probable that he is facing the truth. Technically, we capture this intuition in two axioms: truth-telling beliefs and the sender's bargaining power. We ask: are nonbinding messages a mechanism to improve the sender's bargaining power? And how much credibility is necessary for a strategic move to be successful? In equilibrium, we can prove that nonbinding messages will convey private information when the conflict is low. On the other hand, if the conflict is high, there are too strong incentives to lie, and cheap talk becomes meaningless. However, even in the worst situation, nonbinding messages can transmit some meaning in equilibrium if the players focus on the possibility of fulfilling threats and promises.

The paper is organised as follows. In Section 2 the conflict game is described. In Section 3 conditioned messages are analysed, and the definitions of threats and promises are presented. Section 4 presents the model with nonbinding messages, showing the importance of response rules, messages, and credibility in improving the sender's bargaining power. Finally, Section 5 concludes.

2. The Conflict Game

The conflict game is a noncooperative symmetric environment. There are two decision makers in the set of players, $N = \{1, 2\}$. (At this level of simplicity, players' identity is not relevant, but since the purpose is to model Schelling's strategic moves, in the following sections player 2 is going to be a sender of commitment messages.) Players must choose an action $a_i \in \{d, h\}$, where $d$ represents being dove (peaceful negotiator) and $h$ being hawk (aggressive negotiator). The utility function $u_i(a_1, a_2)$ for player $i$ is defined by the payoffs matrix in Table 1, where rows correspond to player 1 and columns correspond to player 2.

Note that both mutual cooperation $(d, d)$ and mutual defection $(h, h)$ lead to equal payoffs for the two players, and the combination of strategies $(d, d)$ is always Pareto optimal. In the same way, the combination of strategies $(h, h)$ is not optimal and can only be understood as the disagreement point. Assuming that $y < x$, payoffs are unequal when a player behaves aggressively and the other cooperates, given that the player who plays aggressively has an advantage over his/her opponent. In addition, we will assume that $x \neq 0.5$ and $y \neq 0.25$ to avoid the multiplicity of irrelevant equilibria. Therefore, it will always be preferred that the opponent chooses $d$. To have a parameterisation that serves as a baseline for experimental design, it is desirable to fix $x \in (0, 1)$ and $y \in (0, 0.5)$, because if they are modelled as random variables with uniform distribution over these intervals we would have four games with the same probability of occurring.

Under these assumptions, the conflict game has four particular cases that, according to Hirshleifer [6], can be ordered by their level of conflict or affinity in preferences:
(1) Level of conflict 1 (C1): if $x < 0.5$ and $y > 0.25$, there is no conflict in this game because cooperating is a dominant strategy.
(2) Level of conflict 2 (C2): if $x < 0.5$ and $y < 0.25$, this is the so-called stag hunt game, which formalises the idea that lack of trust may lead to disagreements.
(3) Level of conflict 3 (C3): if $x > 0.5$ and $y > 0.25$, depending on the story used to contextualise it, this game is known as either hawk-dove or chicken game. Both anticipation and dissuasion are modelled here, where fear of consequences makes one of the parties give up.
(4) Level of conflict 4 (C4): if $x > 0.5$ and $y < 0.25$, this is the classic prisoner's dilemma, where individual incentives lead to an inefficient allocation of resources.
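To make the parameterisation concrete, the following minimal sketch (our own, not part of the original design) encodes the payoff convention used throughout — mutual cooperation pays 0.5, mutual defection 0.25, the aggressor earns $x$ and the exploited cooperator $y$ — and maps a pair $(x, y)$ to its level of conflict. Function names and sample values are illustrative assumptions.

```python
# A minimal sketch (our own) of the conflict game's parameterisation:
# u(d,d) = 0.5, u(h,h) = 0.25; a hawk earns x against a dove, and the
# exploited dove earns y, with x in (0,1), y in (0,0.5) and y < x.

def payoff(own: str, other: str, x: float, y: float) -> float:
    """A player's payoff given her action and the opponent's ('d'/'h')."""
    if own == 'd' and other == 'd':
        return 0.5
    if own == 'h' and other == 'h':
        return 0.25
    return x if own == 'h' else y   # hawk exploits dove

def conflict_level(x: float, y: float) -> str:
    """Map (x, y) to C1-C4, assuming x != 0.5 and y != 0.25."""
    if x < 0.5:
        return "C1 (no conflict)" if y > 0.25 else "C2 (stag hunt)"
    return "C3 (hawk-dove)" if y > 0.25 else "C4 (prisoner's dilemma)"

if __name__ == "__main__":
    # one illustrative (x, y) pair per level of conflict
    for x, y in [(0.45, 0.3), (0.3, 0.1), (0.7, 0.4), (0.7, 0.1)]:
        print((x, y), conflict_level(x, y))
```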

Based on the system of incentives, it is possible to explain why these games are ordered according to their level of conflict, from lowest to highest (see Table 2). In the C1 game the players' preferences are well aligned and there is no coordination problem because the Nash equilibrium is unique in dominant strategies. Therefore, a rational player will always choose to cooperate ($d$), which will lead to the Pareto optimal outcome $(d, d)$. In the C2 game mutual cooperation $(d, d)$ is a Nash equilibrium, but it is not unique in pure strategies. The problem lies in coordinating on either the Pareto dominant equilibrium $(d, d)$ or the risk dominant equilibrium $(h, h)$. In other words, negotiating as a dove implies a higher risk and will only take place if a player believes that the adversary will do the same. This is the reason why it is possible to state that lack of trust between the parties may lead to the disagreement point.

The C3 game portrays an environment with higher levels of conflict, since there are two equilibria with unequal payoffs. In other words, players face two problems at once, a distributive one and a coordination one. If only one of the players chooses to behave aggressively, this will turn the result in his/her favour, but it is impossible to predict who will be aggressive and who will cooperate. In this environment there is no clear criterion to predict the final outcome and therefore the behaviour. The last game is the classical social dilemma about the limitations of rational behaviour in allocating resources efficiently. The C4 game is classified as the most conflictive one because the players face a context where rational choice clearly predicts that the disagreement point will be reached. Additionally, we will argue throughout this document that changing incentives to achieve mutual cooperation is not a simple task in this bargaining environment.

Until this moment we have used equilibrium uniqueness and its optimality to argue that the games are ordered by their level of conflict. However, it is also possible to understand the difference in payoffs as a proxy for the level of conflict. In other words, when the difference in payoffs between the player who takes advantage by playing aggressively and the player who is exploited for cooperating is large, we can state that the incentives lead players to prefer aggressive behaviour (see the illustrative cases in Table 3).

3. Response Rules and Commitment Messages

We now consider the conflict game with a sequential decision-making protocol. The idea is to capture a richer set of strategies that allows us to model threats and promises as self-serving messages. In addition, the set of conditioned strategies includes the possibility of implementing ordinary commitment, because a simple unconditional message is always available for the sender.

Schelling [2] distinguishes between two different types of strategic moves: ordinary commitments and threats. An ordinary commitment is the possibility of playing first, announcing that a decision has already been made and cannot be changed, which forces the opponent to make the final choice. On the other hand, threats are second player moves, where she convincingly pledges to respond to the opponent's choice in a specified contingent way (see [7]).

3.1. The Conflict Game with Perfect Information

Suppose that player 1 moves first and player 2 observes the action chosen by player 1 before making her own choice. In theoretical terms, this is a switch from the strategic game to the extensive game with perfect information in Figure 1. A strategy for player 2 is a function that assigns an action $a_2 \in \{d, h\}$ to each possible action of player 1, $a_1 \in \{d, h\}$. Thus, the set of strategies for player 2 is $S_2 = \{(dd), (dh), (hd), (hh)\}$, where $(a'a'')$ represents a possible reaction rule, such that the first component $a'$ denotes the action that will be carried out if player 1 plays $d$, and the second component $a''$ is the action in case that player 1 plays $h$. The set of strategies for player 1 is $S_1 = \{d, h\}$.

In this sequential game with perfect information a strategy profile is a pair $(s_1, s_2) \in S_1 \times S_2$. Therefore, the utility function is defined by $u_1(s_1, s_2) = u_1(a_1, a_2)$ and $u_2(s_1, s_2) = u_2(a_1, a_2)$, where $a_1 = s_1$ and $a_2$ is the action that the reaction rule $s_2$ specifies for $a_1$, based on the payoff matrix presented before. As the set of strategy profiles becomes wider, the predictions based on the Nash equilibrium are less relevant. Thus, in the conflict game with perfect information the applicable equilibrium concept is the subgame perfect Nash equilibrium (SPNE).

Definition 1 (SPNE). The strategy profile $(s_1^*, s_2^*)$ is a SPNE in the conflict game with perfect information if and only if $u_2(s_1, s_2^*) \geq u_2(s_1, s_2)$ for every $s_2 \in S_2$ and for every $s_1 \in S_1$; and $u_1(s_1^*, s_2^*) \geq u_1(s_1, s_2^*)$ for every $s_1 \in S_1$.

The strategy $s_2^*$ represents the best response for player 2 in every subgame, that is, after each possible action of player 1. In the same way, the strategy $s_1^*$ is the best response for player 1 when player 2 chooses $s_2^*$. By definition and using the payoff assumptions, it is clear that this reaction rule $s_2^*$ is the unique weakly dominant strategy for player 2 and, in consequence, the reason for player 1 to forecast his counterpart's behaviour based on the common knowledge of rationality. This forecast possibility leads to a first mover advantage, as we can see in Proposition 2.

Proposition 2 (first mover advantage). If $(s_1^*, s_2^*)$ is a SPNE in the conflict game with perfect information, then $u_1(s_1^*, s_2^*) \geq u_2(s_1^*, s_2^*)$ and $u_1(s_1^*, s_2^*) \geq 0.25$.

The intuition behind Proposition 2 is that there is an advantage related to the opportunity of playing first, which is the idea behind the ordinary commitment. In consequence, the equilibrium that is reached favours player 1, because he always obtains at least as much as his opponent. This is true except for the C4 game, where the level of conflict is so high that, regardless of what player 1 chooses, he cannot improve his position. The SPNE for each game is presented in Table 4.
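The first mover advantage can be checked mechanically by backward induction. The sketch below reuses payoff() and conflict_level() from the previous snippet; the parameter values are again our own illustrative choices, and the assertion verifies the first-mover inequality of Proposition 2 on them.

```python
# Backward induction in the conflict game with perfect information,
# reusing payoff() and conflict_level() from the previous sketch.

ACTIONS = ('d', 'h')

def best_reply_rule(x, y):
    """Player 2's SPNE reaction rule: (response to d, response to h)."""
    return tuple(max(ACTIONS, key=lambda a2: payoff(a2, a1, x, y))
                 for a1 in ACTIONS)

def spne(x, y):
    """Return (s1*, s2*) and the equilibrium payoffs (u1, u2)."""
    rule = best_reply_rule(x, y)

    def outcome(a1):
        a2 = rule[ACTIONS.index(a1)]
        return payoff(a1, a2, x, y), payoff(a2, a1, x, y)

    s1 = max(ACTIONS, key=lambda a1: outcome(a1)[0])
    return s1, rule, outcome(s1)

if __name__ == "__main__":
    for x, y in [(0.45, 0.3), (0.3, 0.1), (0.7, 0.4), (0.7, 0.1)]:
        s1, rule, (u1, u2) = spne(x, y)
        assert u1 >= u2        # first mover advantage (Proposition 2)
        print(conflict_level(x, y), s1, rule, (u1, u2))
```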

We can see that the possibility to play a response rule is not enough to increase player 2's bargaining power. For this reason, we now consider the case where player 2 has the possibility to announce the reaction rule she is going to play, before player 1 makes his decision.

3.2. Threats and Promises as Binding Messages

Following Schelling [14], the sender's bargaining power increases if she is able to send a message about the action she is going to play, since with premeditation other alternatives have been rejected. For the receiver it must be clear that this is the unique relevant option. This strategic move can be implemented if it is possible to send binding messages about the second mover's future actions. With this kind of communication we are going to show that there always exists a message that allows player 2 to reach an outcome at least as good as the outcome in the SPNE. By notation, $m = (a'a'') \in S_2$ is a conditioned message, where $a'$ and $a''$ denote the actions the sender pledges to play if the receiver chooses $d$ and $h$, respectively. From now on, player 2 represents the sender and player 1 the receiver.

Definition 3 (commitment message). $m^*$ is a commitment message if and only if $u_2(a_1^{m^*}, m^*) \geq u_2(s_1^*, s_2^*)$, where $u_1(a_1^{m^*}, m^*) \geq u_1(a_1, m^*)$ for every $a_1 \in S_1$. It means $a_1^{m^*}$ is player 1's best response given $m^*$.

The idea behind commitment messages is that player 2 wants to achieve an outcome at least as good as the one without communication, given the receiver's best response. This condition only looks for compatibility of incentives, since the receiver also makes his decisions in a rational way. Following closely the formulations discussed in Schelling [14], Klein and O'Flaherty [8], and Hirshleifer [7], we classify the commitment messages into three mutually exclusive categories: warnings, threats, and promises.

Definition 4 (warnings, threats, and promises). (1) The commitment message $m^*$ is a warning if and only if $m^* = s_2^*$, that is, if it specifies the sender's best response to each action of the receiver.
(2) The commitment message $m^* = (a'a'')$ is a threat if and only if $a'$ is the sender's best response to $d$ and $a''$ is not her best response to $h$.
(3) The commitment message $m^* = (a'a'')$ is a promise if and only if $a'$ is not the sender's best response to $d$.

The purpose of a warning commitment is to confirm that the sender will play her best response after every possible action of the receiver. Schelling does not consider warnings as strategic moves, but we prefer to treat them in this way because the important characteristic of warnings is their full credibility condition. If agents want to avoid miscoordination related to the common knowledge of rationality, they can communicate a warning and believe it as well. On the contrary, credibility is an inherent problem in threats and promises. The second and third points in Definition 4 show that at least one action in the message is not the best response after observing the receiver's choice. In threats, the sender does not have any incentive to implement the punishment when the receiver plays hawk. In promises, the sender does not have any incentive to fulfill the agreement when the receiver plays dove.
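Definitions 3 and 4 can be made operational. The following sketch, building on the snippets above, labels each of the four conditioned messages as a warning, threat, or promise, computes the receiver's best response if the message were binding, and checks the commitment condition; it is an illustration under our assumed payoff convention, not the paper's own code.

```python
# Classify each conditioned message (Definition 4) and check the
# commitment condition (Definition 3), reusing the sketches above.

from itertools import product

def receiver_best_response(msg, x, y):
    """Player 1's best reply if the reaction rule `msg` were binding."""
    def u1(a1):
        return payoff(a1, msg[ACTIONS.index(a1)], x, y)
    return max(ACTIONS, key=u1)

def classify(msg, x, y):
    """Label msg as warning/threat/promise and test Definition 3."""
    _, rule, (_, u2_star) = spne(x, y)
    if msg == rule:
        kind = "warning"                 # best reply after d and after h
    elif msg[0] == rule[0]:
        kind = "threat"                  # deviates only after h
    else:
        kind = "promise"                 # no incentive to fulfil after d
    a1 = receiver_best_response(msg, x, y)
    a2 = msg[ACTIONS.index(a1)]
    is_commitment = payoff(a2, a1, x, y) >= u2_star
    return kind, a1, is_commitment

if __name__ == "__main__":
    for x, y in [(0.7, 0.4), (0.7, 0.1)]:          # C3 and C4
        for msg in product(ACTIONS, repeat=2):
            print(conflict_level(x, y), msg, classify(msg, x, y))
```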

The strategic goal in the conflict game is to deter the opponent from choosing hawk, because by assumption the sender is always better off when the receiver plays $d$. This is exactly the purpose of these binding messages, as shown in Proposition 5.

Proposition 5 (second mover advantage). If $m^*$ is a threat or a promise in the conflict game with perfect information, then $a_1^{m^*} = d$.

The intuition behind Proposition 5 is that, in Schelling’s terms, if a player has the possibility to announce her intentions, she will use threats or promises to gain an advantage over the first mover. That is, player 2 uses these messages because, if believed by player 1, she can make him cooperate.

Proposition 6 specifies for which cases player 2 influences player 1’s choices by means of threats and promises. That is, in which cases, when player 1 has no incentives to cooperate, messages can prompt a change in his behaviour.

Proposition 6 (message effectivity). There exists a commitment message $m^*$ such that $u_2(a_1^{m^*}, m^*) > u_2(s_1^*, s_2^*)$ if and only if $x > 0.5$.

Therefore, threats and promises provide a material advantage upon the adversary only in cases with high conflict (i.e., C3 and C4). The condition $x > 0.5$ is not satisfied in the C1 and C2 cases, where the level of conflict is low. The implication is that mutual cooperation is achieved in equilibrium and this outcome is the highest for both players. The use of messages under these incentives only needs to confirm the sender's rational choice. If player 2 plays her weakly dominant strategy $s_2^*$, the receiver can anticipate this rational behaviour, which is completely credible. This is exactly the essence of the subgame perfect Nash equilibrium proposed by Selten [3].

An essential element of commitments is to determine under what conditions the receiver must take into account the content of a message, given that the communication purpose is to change the rival’s expectations. The characteristic of a warning is to choose the weakly dominant strategy, but for threats or promises at least one action is not a best response. Proposition 6 shows that in the C3 and C4 cases the sender’s outcome is strictly higher if she can announce that she does not follow the subgame perfect strategy. We summarise these findings in Table 5.

Up to this point we have considered the first two elements of commitment theory. We started by illustrating that the messages sent announce the intention the sender has to execute a plan of action (i.e., the choice of a response rule). Subsequently, we described for which cases messages are effective (i.e., self-serving announcements). Now we inquire about the credibility of these strategic moves, because if the sender is announcing that she is going to play in an opposite way to the game incentives, this message does not change the receiver’s beliefs. The message is not enough to increase the bargaining power. It is necessary that the specified action is actually the one that will be played, or at least that the sender believes it. The objective in the next section is to stress the credibility condition. It is clear that binding messages imply a degree of commitment at a 100% level, but this condition is very restrictive, and it is not a useful way to analyse a real bargaining situation. We are going to prove that for a successful strategic move the degree of commitment must be high enough, although it is not necessary to tell the truth with a probability equal to 1.

4. The Conflict Game with Nonbinding Messages

The credibility problem is related to how likely it is that the message sent coincides with the actions chosen. The sender announces her way of playing, but it could be a bluff. In other words, the receiver can believe in the message if it is highly probable that the sender is telling the truth. In order to model this problem the game now proceeds as follows. In the first stage Nature assigns a type to player 2 following a probability distribution. The sender's type is her action plan, her way of playing in case of observing each of the possible receiver's actions. In the second stage player 2 observes her type and sends a signal to player 1. The signal is the disclosure of her plan, and it can be seen as a noisy message, because it is nonbinding. In the last stage player 1, after receiving the signal information, chooses an action. This choice determines the players' payoffs together with the actual type of player 2.

Following the intuition behind the credible message profile in Rabin [12], a commitment announcement can be considered credible if it fulfills the following conditions. (i) When the receiver believes the literal meanings of the statements, the types sending the messages obtain their best possible payoff; hence those types will send these messages. (ii) The statements are truthful enough. The "enough" comes from the fact that some types might lie to player 1 by pooling with a commitment message, and the receiver knows it. However, the probability of facing a lie is small enough that it does not affect player 1's optimal response.

The objective of this section is to formalise these ideas using our benchmark conflict game. The strategic credibility problem is intrinsically dynamic, and it makes sense if we consider threats and promises as nonbinding messages. Bearing these considerations in mind, from now on the messages are used to announce the sender’s intentions, but they are cheap talk. Clearly, negotiators talk, and in most of the cases it is free, but we show that this fact does not imply that cheap talk is meaningless or irrelevant.

4.1. The Signalling Conflict Game

Consider a setup in which player 2 moves first; player 1 observes a message from player 2 but not her type. The game proceeds as follows. In the first stage Nature assigns a type $\theta$ to player 2: a function that assigns an action $a_2 \in \{d, h\}$ to each action $a_1 \in \{d, h\}$. Player 2's type set is therefore $\Theta = \{(dd), (dh), (hd), (hh)\}$. Nature chooses the sender's type following a probability distribution, where $p(\theta)$ is the probability of the type $\theta$ and $\sum_{\theta \in \Theta} p(\theta) = 1$. In the second stage, player 2 observes her own type and chooses a message $m \in M = \Theta$. At the final stage, player 1 observes this message and chooses an action from his set of strategies $S_1 = \{d, h\}$. The most important characteristic of this conflict game with nonbinding messages is that communication cannot change the final outcome: though strategies are more complex in this case, the payoff matrix of the conflict game is always the way to determine the final payoffs.

In order to characterise the utility function we need some notation. A message profile $(m_{(dd)}, m_{(dh)}, m_{(hd)}, m_{(hh)})$ is a function that assigns a message $m \in M$ to each type $\theta \in \Theta$. The first component is the message chosen in case of observing the type $(dd)$; the second component is the message chosen in case of observing the type $(dh)$, and so on. By notation, $m_\theta$ is a specific message sent by a player with type $\theta$, and $(m_\theta, m_{-\theta})$ is a generic message profile with emphasis on the message sent by the player with type $\theta$.

There is imperfect information because the receiver can observe the message, but the sender's type is not observable. Thus, the receiver has four different information sets, depending on the message he faces. A receiver's strategy $(a_{(dd)}, a_{(dh)}, a_{(hd)}, a_{(hh)})$ is a function that assigns an action $a \in S_1$ to each message $m$, where $a_{(dd)}$ is the action chosen after observing the message $(dd)$, and so on. In addition, $(a_m, a_{-m})$ is a receiver's generic strategy with emphasis on the action chosen after the message he faced. In this case, the subindex $m$ is the way to highlight that the receiver's strategies are a profile of single actions. Therefore, in the conflict game with nonbinding messages the utility function is $u_i(a_m, \theta)$ for $i = 1, 2$: the receiver's action $a_m$ and the sender's type $\theta$ determine the payoffs.

In this specification, messages are payoff irrelevant and what matters is the sender's type. For this reason, it is necessary to define the receiver's beliefs about who is the sender when he observes a specific message. The receiver's belief $\mu(\theta \mid m)$ is the conditional probability that the sender is of type $\theta$, given that he observed the message $m$. Naturally, $\sum_{\theta \in \Theta} \mu(\theta \mid m) = 1$.

All the elements of the conflict game with nonbinding messages are summarised in Figure 2. The most salient characteristics are the four information sets in which the receiver must choose and the fact that messages are independent of payoffs. For instance, the upper left path (blue) describes each possible decision for the sender of type $(dd)$. In the first place, Nature chooses the sender's type; in this case $\theta = (dd)$. In the next node, $(dd)$ must choose a message from the 4 possible reaction rules. We say that $(dd)$ is telling the truth if she chooses $m = (dd)$, leading to the information set at the top. We intentionally plot the game in a star shape in order to highlight the receiver's information sets. At the end, the receiver chooses between $d$ and $h$, and cheap talk implies that there are 4 feasible payoffs.

The signalling conflict game has a great multiplicity of Nash equilibria. For this particular setting, a characterisation of this set is not our aim. Our interest lies in the characterisation of the communication equilibrium. For this reason the appropriate concept in this case is the perfect Bayesian equilibrium (PBE).

Definition 7 (PBE). A perfect Bayesian equilibrium is a sender's message profile $m^* = (m^*_\theta)_{\theta \in \Theta}$, a receiver's strategy profile $a^* = (a^*_m)_{m \in M}$, and a beliefs profile $\mu(\theta \mid m)$ after observing each message $m$, such that the following conditions are satisfied:
(1) $m^*_\theta$ is the $\arg\max_{m \in M} u_2(a^*_m, \theta)$ for every $\theta \in \Theta$;
(2) $a^*_m$ is the $\arg\max_{a \in S_1} \sum_{\theta \in \Theta} \mu(\theta \mid m)\, u_1(a, \theta)$ for every $m \in M$;
(3) $\mu(\theta \mid m)$ must be calculated following Bayes' rule based on the message profile $m^*$. For all $\theta$ who play the message $m^*_\theta = m$, the beliefs must be calculated as $\mu(\theta \mid m) = p(\theta) / \sum_{\theta' : m^*_{\theta'} = m} p(\theta')$.

The conditions in this definition are incentive compatibility for each player and Bayesian updating. The first condition requires the message $m^*_\theta$ to be optimal for type $\theta$. The second requires the strategy $a^*_m$ to be optimal given the beliefs profile $\mu(\theta \mid m)$. For the last condition, Bayesian updating, the receiver's beliefs must be derived via Bayes' rule for each observed message, given the equilibrium message profile $m^*$.
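Condition (3) is straightforward to compute. The sketch below, with a uniform prior assumed purely for illustration, derives the posterior $\mu(\theta \mid m)$ for an on-path message from a given message profile; for off-path messages Bayes' rule is silent, which is where Axiom 1 below takes over.

```python
# Bayes' rule for the receiver's beliefs (condition (3) of Definition 7),
# with a hypothetical uniform prior over the four sender types.

TYPES = [('d', 'd'), ('d', 'h'), ('h', 'd'), ('h', 'h')]
PRIOR = {t: 0.25 for t in TYPES}        # assumption: uniform prior

def beliefs(profile, m):
    """Posterior mu(theta | m) for an on-path message m under `profile`."""
    senders = [t for t in TYPES if profile[t] == m]
    total = sum(PRIOR[t] for t in senders)
    if total == 0:
        return None     # off-path message: Bayes' rule is silent
    return {t: PRIOR[t] / total for t in senders}

if __name__ == "__main__":
    # e.g., every type pools on the promise (d, h), as in the C4 game
    pooling = {t: ('d', 'h') for t in TYPES}
    print(beliefs(pooling, ('d', 'h')))   # posterior 0.25 for each type
```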

4.2. The Commitment Equilibrium Properties

There are, in general, several different equilibria in the conflict game with nonbinding messages. The objective of this section is to show that a particular equilibrium that satisfies the following properties leads to a coordination outcome, given that it is both salient and in favour of the sender. In what follows we present Axioms 1 and 2, which will be used to select the particular equilibrium that can serve as a theoretical prediction in experimental games with different levels of conflict.

Axiom 1 (truth-telling beliefs). If the receiver faces a message $m$, then $\mu(\theta = m \mid m) > 0$. If the message $m$ is not part of the messages profile $m^*$, then $\mu(\theta = m \mid m) = 1$.

Following Farrell and Rabin [11], we assume that people in real life do not seem to lie as much, or question each other's statements as much, as the game theoretic predictions state. Axiom 1 captures the intuition that for people it is natural to take seriously the literal meaning of a message. This does not mean that they believe everything they hear. Rather, they use the literal meaning as a starting point and then assess credibility, which involves questioning of the form: "Why would she want me to think that? Does she have incentives to actually carry out what she says?"

More precisely, truth-telling beliefs emphasise that in equilibrium, when the receiver faces a particular message, its literal meaning is that the sender has the intention of playing in this way. Thus, the probability of facing truth-telling messages must be greater than zero. In the same way, when the sender does not choose a particular message in equilibrium, she is signalling that there are no incentives to make the receiver believe it, given that the receiver's best response to it is $h$. Therefore, we can assume that the receiver must fully believe in such a message, because both players understand that the purpose of the strategic move is to induce the receiver to play $d$. If the sender is signalling the opposite, she is showing her true type by mistake; then the receiver believes her with probability 1 (see the column "belief of truth-telling" in Table 6).

Axiom 2 (sender's bargaining power). If $m^*_\theta$ is part of the messages profile $m^*$, then $a^*_{m^*_\theta} = d$.

Axiom 2 captures the use of communication as a means to influence the receiver to play dove. That is, there is an equilibrium where the only messages sent are those that induce the receiver to cooperate. In order to characterise a communication equilibrium such as the one described above, we first focus on the completely separating message profile, where the sender is telling the truth. Naturally, $m_\theta$ is a truth-telling message if and only if $m_\theta = \theta$ (see the column "message by type" in Table 6), and given the message the receiver's best response will be to cooperate (see the column "player 1's best response" in Table 6).

With this in mind, it is possible to stress that a contribution of our behavioural model is to develop experimental designs that aim to unravel the strategic use of communication to influence (i.e., manipulate) others’ behaviour. That is, the Nash equilibrium implies that players must take the other players’ strategies as given and then they look for their best response. However, commitment theory, in Schelling’s sense, implies an additional step, where players recognise that opponents are fully rational. Based on this fact, they evaluate different techniques for turning the other’s behaviour into their favour. In our case, the sender asks herself, “This is the outcome I would like from this game; is there anything I can do to bring it about?”

Proposition 8 (there is always a liar). The completely truth-telling message profile cannot be part of any PBE of the conflict game with nonbinding messages.

Proposition 8 shows that the completely truth-telling message profile is not an equilibrium in the conflict game. The problem lies in the sender of type $(hd)$, because revealing her actual type is not incentive compatible and there always exists at least one successful message to induce the counterpart to play dove. For this reason, we can ask whether there exists some message that induces the sender to reveal her actual type but at the same time leads to a successful strategic move. Definition 9 is the bridge between nonbinding messages and the commitment messages presented in the previous section.

Definition 9 (self-committing message). Let $m_\theta$ be a truth-telling message and $\mu(\theta \mid m_\theta) = 1$. $m_\theta$ is a self-committing message if and only if $u_2(a^*_{m_\theta}, \theta) \geq u_2(a^*_{m'}, \theta)$ for every $m' \in M$.

We introduce the self-committing message property because we want to stress that a strategic move is a two-stage process. Not only is communication useful in revealing information, but also it can be used to manipulate others' behaviour. The sender of a message must consider how the receiver would react if he believes it, and if that behaviour works in her favour she will not have incentives to lie. A message is self-committing if, believed, it creates incentives for the sender to fulfill it [12]. The idea behind a threat or a promise is to impose some risk on the opponent in order to influence him, but this implies a risk for the sender too. This fact has led to associating strategic moves with barely rational behaviour, when actually, in order to be executed, a very detailed evaluation of the consequences is needed. Proposition 10 and its corollary explain the relation between the conditioned messages and the incentives to tell the truth.

Proposition 10 (incentives to commit). Let $m^*$ be a commitment message in the conflict game with perfect information. If $a_1^{m^*} = d$, then $m_\theta = m^*$ is a self-committing message.

Corollary to Proposition 10. If $m^*$ is a threat or a promise in the conflict game with perfect information, then $m_\theta = m^*$ is a self-committing message.

The intuition behind Proposition 10 and its corollary is that if a message induces the other to cooperate, then the sender has incentives to tell the truth. Moreover, as illustrated in Proposition 5, threats and promises always induce the counterpart to cooperate; therefore, they endogenously give the sender incentives to comply with what is announced.

As we can see in the conflict game with perfect information (for an illustration see Table 5), in the C1 and C2 cases the warning is the way to reach the best outcome. If we consider the possibility of sending nonbinding messages when the sender's type coincides with a warning strategy, then revealing her type is self-committing. The problem in the C3 and C4 cases is more complex, given that the warning message is not self-committing and the way to improve the sender's bargaining power is to use a threat or a promise. This fact leads to a trade-off between choosing a weakly dominant strategy that fails to induce the opponent to play dove and a strategy that improves her bargaining power but implies a higher risk for both of them.
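This trade-off can be verified directly. Under the payoff convention of the earlier sketches, the function below asks whether truthfully revealing a type is at least as good as every literally believed lie (Definition 9); on a C3 parameterisation only the threat $(hh)$ and the promise $(dh)$ pass the test, while the warning $(hd)$ — and the degenerate promise $(dd)$, which fails to induce dove — do not.

```python
# Definition 9 as a computation: a truth-telling message is
# self-committing if no literally believed lie pays more, given the
# sender's actual type. Reuses the sketches above.

def payoff_if_believed(claimed, actual, x, y):
    """Sender's payoff when the receiver best-responds to `claimed`
    while the sender's real action plan is `actual`."""
    a1 = receiver_best_response(claimed, x, y)
    return payoff(actual[ACTIONS.index(a1)], a1, x, y)

def is_self_committing(theta, x, y):
    truthful = payoff_if_believed(theta, theta, x, y)
    return all(truthful >= payoff_if_believed(m, theta, x, y)
               for m in product(ACTIONS, repeat=2))

if __name__ == "__main__":
    x, y = 0.7, 0.4                       # a C3 parameterisation
    for theta in product(ACTIONS, repeat=2):
        print(theta, is_self_committing(theta, x, y))
```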

The required elements for a perfect Bayesian equilibrium in each game are shown in Tables 6 and 7. It is important to bear in mind that the beliefs that appear in Table 7 are necessary conditions for implementing the PBE presented in Table 6, given that they satisfy truth-telling beliefs and the sender's bargaining power.

The problem of which message to choose reduces to the following algorithm: first, the sender tells the truth. If the truth-telling message leads the receiver to play dove, then she does not have any incentive to lie. Otherwise, she must find another message that induces the receiver to play dove. If no message leads the receiver to play dove, messages lack any purpose, and she will be indifferent among them.
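A minimal sketch of this message-choice rule, again building on the earlier snippets (the function name, tie-breaking order, and parameter values are our own):

```python
# The sender's message-choice algorithm described above, reusing the
# earlier sketches: truth if it already induces dove, otherwise any
# message that does, otherwise indifference.

def choose_message(theta, x, y):
    """Message sent by a sender of type theta under the algorithm."""
    if receiver_best_response(theta, x, y) == 'd':
        return theta                      # truth-telling is enough
    for m in product(ACTIONS, repeat=2):
        if receiver_best_response(m, x, y) == 'd':
            return m                      # a successful bluff
    return theta                          # no message works: indifferent

if __name__ == "__main__":
    for x, y in [(0.45, 0.3), (0.3, 0.1), (0.7, 0.4), (0.7, 0.1)]:
        profile = {t: choose_message(t, x, y)
                   for t in product(ACTIONS, repeat=2)}
        print(conflict_level(x, y), profile)
```

On the four illustrative parameterisations this reproduces the pattern described next: three distinct messages in C1, two in C2 and C3, and full pooling on the promise $(dh)$ in C4.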

Table 6 shows the messages, the receiver's strategies, and the belief profiles in a particular equilibrium that we argue is the most salient. As we showed above, in the conflict game (see Table 5) the sender always favours those messages for which the receiver's best response is dove. In the C1 case there are three different messages, in the C2 and C3 cases there are two messages, and the worst situation is the C4 case, where every type of player sends the same message. This fact leads to a first result: if the conflict is high, there are very strong incentives to lie and communication leads to a pooling equilibrium.

In addition, notice that Table 5 specifies which messages will be used as commitment messages in the conflict game with binding communication illustrated in Figure 1. That is, if credibility is exogenous the theoretical prediction would be that such messages are sent. This means that messages are not randomly sent, but there is a clear intention behind them, to induce the receiver to choose the action most favourable for the sender. Now, Table 7 presents the minimum probability threshold that can make the strategic move successful. That is, if credibility is sufficiently high the message works and achieves its purpose, in the conflict game with nonbinding communication illustrated in Figure 2.

In Section 3 we assumed that the sender could communicate a completely credible message in order to influence her counterpart. The question is, how robust is this equilibrium if we reduce the level of commitment? Proposition 11 summarises the condition for the receiver to choose dove as the optimal strategy. It also provides the way to calculate the beliefs shown in Table 7.

Proposition 11 (incentives to cooperate). $a^*_{m^*} = d$ if and only if $\sum_{\theta \in \Theta} \mu(\theta \mid m^*)\, u_1(d, \theta) \geq \sum_{\theta \in \Theta} \mu(\theta \mid m^*)\, u_1(h, \theta)$.

Based on Proposition 11, the second result is that cheap talk always has meaning in equilibrium. We consider this equilibrium selection relevant because the sender focuses her communication on the literal meanings of the statements but understands that some level of credibility is necessary to improve her bargaining power. Table 7 summarises the truthful enough property of the statements. Here, the receiver updates his beliefs in a rational way, and he chooses to play dove if and only if it is his expected best response. We can interpret the beliefs in Table 7 as a threshold, because if this condition is satisfied, the sender is successful in her intention of manipulating the receiver's behaviour. Thus, some level of credibility is necessary, but not at a 100% level.
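As an illustration of such a threshold, the sketch below computes the receiver's expected payoffs after the pooled promise in a C4 parameterisation, splitting the non-truthful probability equally among the remaining types — an assumption of ours, made only to produce a number — and scans for the minimum credibility at which dove becomes optimal.

```python
# Proposition 11 in numbers: after the pooled C4 promise (d, h), find
# the minimum credibility mu(theta = m | m) at which the receiver's
# expected payoff from d is at least that from h. The equal split of
# the remaining probability across the other types is our assumption.

def expected_payoffs(mu, m, others, x, y):
    """Receiver's E[u1(a)] for a in {d, h}, given truth probability mu."""
    dist = [(m, mu)] + [(t, (1 - mu) / len(others)) for t in others]
    return {a1: sum(p * payoff(a1, t[ACTIONS.index(a1)], x, y)
                    for t, p in dist)
            for a1 in ACTIONS}

if __name__ == "__main__":
    x, y = 0.7, 0.1                       # a C4 parameterisation
    m = ('d', 'h')                        # the promise all types pool on
    others = [('d', 'd'), ('h', 'd'), ('h', 'h')]
    mu = 0.0
    while mu <= 1.0:                      # scan for the threshold
        eu = expected_payoffs(mu, m, others, x, y)
        if eu['d'] >= eu['h']:
            print(f"receiver cooperates once mu >= {mu:.2f}")
            break
        mu += 0.01
```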

It is clear that if the conflict is high, the credibility threshold is also higher. In the C1 and C2 cases the sender must commit herself to implementing the warning strategy, which is a weakly dominant strategy. In the C3 case the strategic move implies a threat or a promise, formulating an aggressive statement in order to deter the receiver from behaving aggressively. The worst situation is the C4 case, where there is only one way to avoid the disagreement point: to implement a promise. The promise in this game is a commitment to forgo exploiting the opponent, because fear can destroy the agreement of mutual cooperation.

In the scope of this paper, threats are not only punishments and promises are not only rewards. There is a credibility problem because these strategic moves imply giving up freedom of choice in order to avoid rational self-serving behaviour at a first level of thinking. The paradox is that this decision is rational if the sender understands that her move can influence other players' choices, because communication is the way to increase her bargaining power. This implies a second level of thinking, such as forward induction reasoning.

5. Conclusions

In this paper we propose a behavioural model following Schelling's tactical approach to the analysis of bargaining. In "An Essay on Bargaining" (1956), Schelling analyses situations where subjects watch and interpret each other's behaviour, each one acting according to the expectations that he creates. This analysis shows that an opponent with rational beliefs expects the other to try to disorient him, and he will ignore the moves he perceives as stagings especially played to win the game.

The model presented here captures different levels of conflict by means of a simple parameterisation. In a bilateral bargaining environment it analyses the strategic use of binding and nonbinding communication. Our findings show that when messages are binding, there is a first mover advantage. This situation can be changed in favour of the second mover if the latter sends threats or promises in a preplay move. On the other hand, when players have the possibility to send nonbinding messages, their incentives to lie depend on the level of conflict. When conflict is low, the sender has strong incentives to tell the truth and cheap talk will almost fully transmit private information. When conflict is high, the sender has strong incentives to bluff and lie. Therefore, in order to persuade the receiver to cooperate with her nonbinding messages, the sender is required to provide a minimum level of credibility (not necessarily 100%).

In summary, the equilibrium that satisfies truth-telling beliefs and sender's bargaining power allows us to show that the less conflict the game has, the more informative the equilibrium signal is, and the weaker the commitment needed to implement it. Our equilibrium selection is based on the assumption that in reality people do not seem to lie as much, or question each other's statements as much, as rational choice theory predicts. For this reason, the conflict game with nonbinding messages is a good environment to test different game theoretical hypotheses, because it is simple enough to be implemented in the lab.

With this in mind, the strategic use of communication in a conflict game, as illustrated in our model, is a natural way to build a bridge between two research programs: the theory of bargaining and that of social dilemmas. As Bolton [15] suggested, bargaining and dilemma games have been developed in experimental research as fairly separate literatures. For bargaining, the debate has been centred on the role of fairness and the nature of strategic reasoning. For dilemma games, the debate has involved the relative weights that should be given to strategic reputation building, altruism, and reciprocity. The benefit of the structure and payoff scheme we propose is that all these elements can be studied at the same time. Our model provides a simple framework in which to gather and interpret empirical information. In this way, experiments could indicate which parts of the theory are most useful to predict subjects' behaviour, and at the same time we can identify behavioural parameters that the theory does not reliably determine.

Moreover, the game presented here can be a very useful tool for designing economic experiments that can lead to new evidence about bilateral bargaining and, furthermore, about human behaviour in a wider sense. On the one hand, it can contribute to a better understanding of altruism, selfishness, and positive and negative reciprocity. A model that only captures one of these elements will necessarily portray an incomplete image. On the other hand, bargaining and communication are fundamental elements for understanding the power that one of the parties can have.

In further research, we are interested in exploring the emotional effects of cheating or being cheated on, particularly by considering the dilemma that takes place when these emotional effects are compared to the possibility of obtaining material advantages. To do so, it is possible to even consider a simpler version of our model using a coarser type space (e.g., only hawk and dove). This could illustrate the existing relationship between the level of conflict and the incentives to lie. As the model predicts, the higher the level of conflict the more incentives players have to not cooperate, but they are better off if the counterpart does cooperate. Therefore, players with type hawk would be more inclined to lie and disguise themselves as cooperators. By measuring the emotional component of lying and being lied to, we will be able to show that people do not only value the material outcomes of bargaining but that the means used to achieve those ends are also important to them.

Appendix

Proof of Proposition 2. Suppose that $(s_1^*, s_2^*)$ is a SPNE and $u_1(s_1^*, s_2^*) < u_2(s_1^*, s_2^*)$; then the equilibrium outcome must be $(d, h)$, with payoffs $(y, x)$. If $s_2^* = (hd)$, then $u_1(d, s_2^*) = y$ and $u_1(h, s_2^*) = x$, but by assumption $y < x$, so player 1 would deviate to $h$. If $s_2^* = (hh)$, then $u_1(d, s_2^*) = y$ and $u_1(h, s_2^*) = 0.25$, so player 1 chooses $d$ only if $y \geq 0.25$, and at the same time subgame perfection requires $0.25 \geq y$. The only compatible case is $y = 0.25$, but by assumption $0.25 \neq y$. Therefore, $u_1(s_1^*, s_2^*) \geq u_2(s_1^*, s_2^*)$. Finally, by playing $h$ player 1 obtains either $0.25$ or $x$, and whenever player 2's best response to $h$ is $d$ the payoff assumptions imply $x > y > 0.25$; hence $u_1(s_1^*, s_2^*) \geq 0.25$.

Proof of Proposition 5. Let $m^*$ be a threat or a promise. Following Definitions 3 and 4, $u_2(a_1^{m^*}, m^*) \geq u_2(s_1^*, s_2^*)$. Suppose that $a_1^{m^*} = h$; then there are two possibilities, $m^* = (a'd)$ or $m^* = (a'h)$, with sender's payoff $u_2(h, m^*) = y$ or $u_2(h, m^*) = 0.25$, respectively. If $m^* = s_2^*$, then by definition $m^*$ is neither a threat nor a promise. If $u_2(h, m^*) = 0.25$, the commitment condition requires $u_2(s_1^*, s_2^*) \leq 0.25$, which happens only in the C4 game, where the unique message with these characteristics is the warning $(hh)$ itself. If $u_2(h, m^*) = y$ and $m^*$ is a threat, then $m^* = (hd)$ in the C4 game, where $y < 0.25 = u_2(s_1^*, s_2^*)$ violates the commitment condition. If $u_2(h, m^*) = y$ and $m^*$ is a promise, it must fulfill $y \geq u_2(s_1^*, s_2^*)$ and $x > 0.5$. The C1 and C2 games are not under consideration because there $u_2(s_1^*, s_2^*) = 0.5 > y$, and for the C3 and C4 cases there is no promise for which these conditions are true and the sender gains bargaining power at the same time. Therefore, $a_1^{m^*} = d$.

Proof of Proposition 6. Let us consider the message $m^* = (dh)$. Its induced best response is $a_1^{(dh)} = d$, because $u_1(d, (dh)) = 0.5 > 0.25 = u_1(h, (dh))$. If $x > 0.5$, then $u_2(s_1^*, s_2^*) = y$ in the C3 game and $u_2(s_1^*, s_2^*) = 0.25$ in the C4 game; in both cases $u_2(d, (dh)) = 0.5 > u_2(s_1^*, s_2^*)$, so $(dh)$ is a commitment message satisfying $u_2(a_1^{m^*}, m^*) > u_2(s_1^*, s_2^*)$.
The proof in the other direction is as follows. Let $x < 0.5$; then, using Proposition 2 and the SPNE in Table 4, $u_2(s_1^*, s_2^*) = 0.5$. Since $y < 0.5$ and $0.25 < 0.5$, the sender cannot obtain more than $0.5$ for any message and any receiver's best response, so no commitment message improves upon the SPNE. Therefore, there exists a commitment message such that $u_2(a_1^{m^*}, m^*) > u_2(s_1^*, s_2^*)$ if and only if $x > 0.5$.

Proof of Proposition 8. Consider the senders' types $(hd)$ and $(dh)$. If $m_{(hd)} = (hd)$ is part of a completely truth-telling message profile, then $\mu((hd) \mid (hd)) = 1$ and $a^*_{(hd)} = h$, because by assumption $u_1(h, (hd)) = x > y = u_1(d, (hd))$. In the same way, $\mu((dh) \mid (dh)) = 1$ and $u_1(d, (dh)) = 0.5 > 0.25 = u_1(h, (dh))$; then $a^*_{(dh)} = d$. Therefore, the utility for the sender of type $(hd)$ is $u_2(a^*_{(hd)}, (hd)) = y$ if she tells the truth and $u_2(a^*_{(dh)}, (hd)) = x > y$ if she sends the message $(dh)$. These conditions imply that the sender of type $(hd)$ has incentives to deviate, so the completely truth-telling message profile cannot be part of any PBE.

Proof of Proposition 10. Let $m^*$ be a commitment message in the conflict game with perfect information with $a_1^{m^*} = d$, and let $\theta = m^*$. If $m_\theta = m^*$ is not a self-committing message, then another message $m'$ must exist such that $u_2(a^*_{m'}, \theta) > u_2(d, \theta)$. Given the payoff assumptions, the receiver playing $d$ gives the sender of type $\theta$ her highest feasible payoff, that is, $u_2(d, \theta) \geq u_2(a, \theta)$ for every $a \in S_1$; hence no such $m'$ exists. Therefore, $m_\theta$ is a self-committing message.

Proof of Corollary to Proposition 10. The proof to the corollary follows from Propositions 5 and 10, and thus it is omitted.

Proof of Proposition 11. The expected utility for each receiver's strategy is as follows:
$Eu_1(d \mid m^*) = \sum_{\theta \in \Theta} \mu(\theta \mid m^*)\, u_1(d, \theta)$,
$Eu_1(h \mid m^*) = \sum_{\theta \in \Theta} \mu(\theta \mid m^*)\, u_1(h, \theta)$;
therefore, $a^*_{m^*} = d$ if and only if $Eu_1(d \mid m^*) \geq Eu_1(h \mid m^*)$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This paper was elaborated during the authors’ stay at the University of Granada, Spain. The authors are grateful for the help and valuable comments of Juan Lacomba, Francisco Lagos, Fernanda Rivas, Aurora García, Erik Kimbrough, Sharlane Scheepers, and the seminar participants at the VI International Meeting of Experimental and Behavioural Economics (IMEBE) and the Economic Science Association World Meeting 2010 (ESA). Financial support from the Spanish Ministry of Education and Science (Grant code SEJ2009-11117/ECON), the Proyecto de Excelencia (Junta de Andalucía, P07-SEJ-3261), and the Project VIE-1375 from the Universidad Industrial de Santander (UIS) in Bucaramanga, Colombia, is also gratefully acknowledged.