cs271 Lesson14 - Game Theory



Hey, welcome back. Hope you enjoyed the last lesson. You guys have been doing great. You've been doing amazing work, getting a lot done, doing a really good job of answering the questions. I've been looking at this book here. This is a book from my father's collection. It's called "Introduction to the Theory of Games" by McKinsey. It was published in 1952. And so game theory and AI have kind of grown up together. They've taken different paths, and now they've begun to merge back together. We've talked about games already in a previous lesson. We talked about mostly turn-taking games where 1 player moves and then another moves, and the trick is how to work against an adversary who's trying to maximize his own utility and thus minimize your utility. Game theory handles those types of games, but it also really focuses on games where the 2 moves are simultaneous, or another way to think about them is 1 player moves and then the other moves, but the second player doesn't know what choice the first player made, so it's partially observable. And it's this back and forth of trying to figure out what I should play given what I think he's going to play, and what he thinks about what I'm going to play, that gives game theory its special status. We're going to talk about how that works for AI, and 2 problems are studied. The first is agent design. That is, given a game, find the optimal policy. And the second is mechanism design. That is, given utility functions, how can we design a mechanism so that when the agents act rationally the global utility will be maximized in some way? Let's take a look.

Dominant Strategy Question

We're going to talk about game theory, which is the study of finding an optimal policy when that policy can depend on the opponent's policy and vice versa. And let's look at 1 of the most famous games of all, a game called the "Prisoner's Dilemma." And the story is that there are 2 criminals, Alice and Bob, who have a working relationship, and they're both caught at the scene of a crime, but the police don't quite have enough evidence to put them away. They offer each of them, independently, a deal, saying "If you testify against your cohort, we'll give you a better deal and give you a reduced sentence time." And Alice and Bob both understand what's going on. They're both perfectly rational, and to understand what the situation is, we draw up a matrix in which we have possible outcomes and possible strategies for each side. For Alice, she has 2 strategies: one is to testify against Bob, and the other is to refuse to testify. And Bob has the same choices, to testify against Alice or to refuse. In general, different agents may have different actions available to them. And now we show the payoff to each agent. Sometimes those payoffs are opposite, as in a game like chess where if 1 player gets a +1, the other gets a -1. In this game, the payoffs are not opposite, so it's a non-zero-sum game. And if they both refuse to testify against each other, then neither can be convicted of the major crime, but the police will get them for a lesser crime. And let's say they each serve 1 year in jail, so that's a -1 for each of them. If Alice testifies and Bob refuses, then the police are grateful to Alice, and she gets off with nothing, and Bob gets the book thrown at him and gets a -10 score. Likewise if the roles are reversed. And if both testify against each other, then they're both guilty, and they split the penalty, -5 each. Now, the question that both Alice and Bob have to face is what their strategy is going to be. And the first concept we want to talk about is the concept of a dominant strategy. A dominant strategy is one that does better for a player than any other strategy, no matter what the other player does. And now the question is, does either Alice or Bob have a dominant strategy? If Alice has a dominant strategy, I want you to check that off, either testify or refuse, and similarly, if Bob has a dominant strategy, check that off.

Dominant Strategy Question Solution

The answer is for Alice, testify is a dominant strategy. Let's see. We have to compare it against all possible strategies for Bob. If Bob does testify, then Alice gets -5 here and -10 here, so testify is better. And if Bob does refuse, then Alice gets 0 here and -1 here, so testify is better. Testify is better for Alice no matter what, and by similar reasoning, testify is better for Bob no matter what, so testify is a dominant strategy for both players.
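To make that check concrete, here's a small sketch in Python (this code isn't part of the lesson) that writes down the payoff matrix we just described and tests each of Alice's and Bob's strategies for strict dominance.

```python
# Payoff tuples are (Alice's utility, Bob's utility); Alice picks the first
# strategy in each key, Bob picks the second.
payoffs = {
    ("testify", "testify"): (-5, -5),
    ("testify", "refuse"):  (0, -10),
    ("refuse",  "testify"): (-10, 0),
    ("refuse",  "refuse"):  (-1, -1),
}
strategies = ["testify", "refuse"]

def dominant_strategy(player):
    """Return a strategy that is strictly better than every other strategy
    no matter what the opponent does, or None if there isn't one."""
    def pay(mine, theirs):
        key = (mine, theirs) if player == 0 else (theirs, mine)
        return payoffs[key][player]
    for s in strategies:
        others = [t for t in strategies if t != s]
        if all(pay(s, opp) > pay(t, opp) for t in others for opp in strategies):
            return s
    return None

print(dominant_strategy(0), dominant_strategy(1))  # testify testify
```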

Pareto Optimal Question

The next concept I want to talk about is the concept of a Pareto optimal outcome. So, this is talking about outcomes rather than strategies. The strategies are in the margins. The outcomes are in the matrix, and a Pareto optimal outcome is one where there's no other outcome that all players would prefer. And this is named after the economist Pareto. What I want you to answer is: is there a Pareto optimal outcome in this game? Is there an outcome such that there's no other outcome that all players would prefer?

Equilibrium Question

Now, the third concept is the concept of equilibrium. An equilibrium is an outcome such that no player can benefit from switching to a different strategy, assuming that the other players stay the same. And there was a famous result from the economist John Nash, who was portrayed in the book and movie "A Beautiful Mind", proving that every game has at least 1 equilibrium point. The question here is which, if any, of these outcomes are equilibriums in this game?

Equilibrium Question Solution

And the answer is only this outcome, with A = -5, B = -5, is an equilibrium point because if A switches, A gets -10, and if B switches, B gets -10. Neither player wants to switch away from that strategy. Over here, the Pareto optimal solution is not an equilibrium point because if B switches, it will do better, and A will do worse. This is where the game turns out to be a dilemma, because there's an equilibrium point such that, if both players are rational, it seems like they're bound to end up in this outcome, whereas the Pareto optimal solution is over here in the other corner. And yet, being rational, neither Alice nor Bob can see a way to get to this preferred outcome.
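As a cross-check, here's a short Python sketch (not part of the lesson) that enumerates all four outcomes of the Prisoner's Dilemma and tests each one for the equilibrium property: neither player can do better by unilaterally switching.

```python
payoffs = {
    ("testify", "testify"): (-5, -5),
    ("testify", "refuse"):  (0, -10),
    ("refuse",  "testify"): (-10, 0),
    ("refuse",  "refuse"):  (-1, -1),
}
strategies = ["testify", "refuse"]

def is_equilibrium(a, b):
    """True if neither Alice nor Bob can do better by switching unilaterally."""
    alice_ok = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in strategies)
    bob_ok   = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in strategies)
    return alice_ok and bob_ok

for a in strategies:
    for b in strategies:
        if is_equilibrium(a, b):
            print(a, b, payoffs[(a, b)])   # only (testify, testify) -> (-5, -5)
```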

Game Console Question 1

Let's try another example. This one is called the Game Console Game, and the story is that there is a game console manufacturer called Acme, and it has to decide whether its next console is going to play Blu-ray discs or DVD discs. And then there's a game manufacturer called Best, and they similarly have to decide whether to put out their next game on Blu-ray discs or DVD discs. And the payoffs are if they're both on Blu-ray, A gets a +9, and B is also a +9. If they both choose to go with DVD, it's not quite as lucrative. A gets a +5. B gets a +5. And if they disagree, then they'll be in trouble, and they'll take losses. A gets a -4, and B gets a -1, while here A = -3 and B = -1. The first question is: is there a dominant strategy? And is there one for A? Click here if yes. And is there one for B? Click here if yes, and if there's none at all, click here. There may be both A and B. It's your choice. And then the next question is: is there an equilibrium? Click on any of these 4 outcomes to indicate whether there's an equilibrium.

Game Console Question 1 Solution

The answers are that there's no dominant strategy because for each player, what's best depends on what the other player does. They do best if they match, and so you can't figure out what your own best strategy is unless you know what the other player is going to play. In terms of equilibrium, there's 2 equilibrium points, the +9/+9 and the +5/+5. Both of them are equilibriums because neither of the players can benefit from switching to the other strategy.

Game Console Question 2

And now the next question is is there 1 or more Pareto optimal outcomes? Click on any of the outcomes that you think are Pareto optimal.

Game Console Question 2 Solution

And the answer is that there's just 1. The +9/+9 is Pareto optimal. Both players would rather be there than anyplace else. And so it seems that if both players are rational they'll both know that there are 2 equilibrium points, but only 1 of them is Pareto optimal. And even though there isn't a dominant strategy, they can both arrive at that happy conclusion.
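Here's a minimal Python sketch (not part of the lesson) that applies the Pareto-optimality definition to the console game's four outcomes: an outcome is Pareto optimal if no other outcome would be strictly preferred by both players.

```python
# Payoff tuples are (Acme's utility, Best's utility). The transcript doesn't say
# which mismatched outcome carries -4/-1 versus -3/-1, but that assignment
# doesn't affect the Pareto check below.
payoffs = {
    ("bluray", "bluray"): (9, 9),
    ("bluray", "dvd"):    (-4, -1),
    ("dvd",    "bluray"): (-3, -1),
    ("dvd",    "dvd"):    (5, 5),
}

def is_pareto_optimal(outcome):
    """True if no other outcome is strictly better for both players."""
    a, b = payoffs[outcome]
    return not any(x > a and y > b for x, y in payoffs.values())

for outcome in payoffs:
    if is_pareto_optimal(outcome):
        print(outcome, payoffs[outcome])   # only ('bluray', 'bluray') -> (9, 9)
```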

2 Finger Morra

So, we've seen that it's easy to figure out the solution to a game if there's a dominant strategy or if there's a Pareto-optimal equilibrium. Now let's look at a harder game for which such solutions don't exist. This game is called Two Finger Morra, and it's a betting game, and we're going to show a simplified version of it. Again, we have a simple 4-outcome matrix, and there are 2 players called even and odd. And they both simultaneously show either 1 or 2 fingers. And then if the total number of fingers is even, then the even player wins that many dollars from the odd player. And if the total number of fingers is odd, then the odd player wins that number of dollars from the even player. So, 1 and 1 is 2, so that's even, so even gets +2, and we won't bother writing odd getting -2 because it's a zero-sum game, and it will always be the opposite. Similarly, 2 and 2 is 4, so even gets +4 and odd gets -4. And if it's 1 and 2, that's 3, which is odd, so even loses 3 and pays it to odd, and similarly up here. Now, there's no dominant strategy, and it seems kind of tricky to figure out what the right strategy is. We're going to need more complicated techniques, and it turns out that there is no single move that's the best strategy for either player. But there is what's called a mixed strategy, so a single strategy of always playing one or the other is called a pure strategy, and a mixed strategy is when you have a probability distribution over the possible moves.

Tree Question

Now, since it seems complicated to solve this game in this form, one way we can address it is to change from this matrix form into the familiar tree form. We'll move this over here, and we'll draw it as a game tree. Max will be the even player, and min will be the odd player, and for the moment, let's look at the game of what would happen if max had to go first rather than having them move simultaneously. So, max would make a move, either 1 or 2. And then min--so max is even and min is odd--would also make a move, 1 or 2, 1 or 2. And then the outcome in terms of E would be 2 here, -3 here, -3 here, and 4 here. And now what does min do? Well, min tries to minimize. So, min would choose 2 here, so this node would be -3, and would choose 1 here, so this node would be -3, and then E tries to maximize. It doesn't matter what he chooses, and we get a -3 up here. So, that's giving E the disadvantage of having to reveal his or her strategy first. What if we did it the other way around? Let's take a look at that. What if O had to go first and reveal a strategy of 1 or 2, and then E as the maximizing player goes second and does a 1 or 2? And then we have these 4 terminal states here, and I want you to fill in the values of the 4 terminal states taken from the table, and the intermediate states, or the higher-up states in the tree, as well.

Tree Question Solution

And the answer is 1 + 1 is 2, and so that's even, so it'd be a positive payoff of 2 to E. Similarly, 2 + 1 is 3, which is odd, so -3. And 2 + 2 is 4, so that's a positive payoff of 4. Now E is maximizing, so E would prefer 2 here and would prefer 4 here. And now O is minimizing, so O would prefer 2 here. And notice what we've done here is that we're trying to figure out what the utility of the game is to E, and in the true game, both players move simultaneously. Over here, we've handicapped E. And over here, we handicapped O. The true value of the game must be somewhere in between, so we can say that the utility to E must be less than or equal to 2, which is the value here, and greater than or equal to -3, which is the value here. We've narrowed it down to some degree, but we still haven't nailed down exactly what the utility of the game is.
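Here's a small Python sketch (not part of the lesson) that computes the values of those two handicapped versions of two-finger Morra: the value when E must commit to a pure strategy first, and the value when O must, bracketing the true value of the simultaneous game.

```python
# Payoff to E (the even player) for each pair (E's fingers, O's fingers).
payoff = {(1, 1): 2, (1, 2): -3, (2, 1): -3, (2, 2): 4}

# If E must reveal a pure strategy first, O minimizes, and E takes the best of the minima.
e_first = max(min(payoff[(e, o)] for o in (1, 2)) for e in (1, 2))
# If O must reveal first, E maximizes, and O takes the worst of the maxima.
o_first = min(max(payoff[(e, o)] for e in (1, 2)) for o in (1, 2))

print(e_first, o_first)   # -3 and 2: the simultaneous game's value lies in between
```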

Mixed Strategy

Now, 1 reason there's such a wide discrepancy in the outcomes of these 2 versions of the game is that we handicapped E and O so severely that here E had to reveal his entire strategy, whether he's going to play 1 or 2 all the time, and the same thing for O over here. What if we could think of a way where we didn't handicap them quite as much, where they weren't giving away quite as much information? Let's look at a way to do that. Let's look at the situation where E goes first and has to reveal the strategy, but instead of having to reveal a strategy of always playing 1 or always playing 2, what if E says "Well, my strategy is with probability P, I'm going to play 1." "And with probability 1 - P, I'm going to play 2." And that's called a mixed strategy. So, E would announce that strategy for some number P. And there could be an infinite number of possibilities, so we should be drawing an infinite number of branches out of this decision point for all the possible values of P that E could come up with. But instead, I'm just going to sort of parameterize that and just draw 1 line coming out. And now O as the minimizing player has to make a choice between 1 and 2, and what are the outcomes? Well, if O chooses 1, then with probability P, E plays 1, and 1 + 1 is 2, so we get an outcome of 2. That's 2P. And with probability 1 - P, E plays 2, and 2 + 1 is 3, which is odd, so we get a -3. So, 2P - 3(1 - P) would be the expected outcome for this branch. And then the outcome over here, if O chooses 2, would be -3P + 4(1 - P). That's the parameterized outcome given the parameterized strategy. And we could do the same thing on the other side. What if O had to go first? With probability Q, O plays 1, and with probability 1 - Q plays 2. Then even is the maximizer, and we get 2Q - 3(1 - Q) and -3Q + 4(1 - Q).

Solving The Game

Now, what value should E choose for P? Remember, you've got an infinite number of choices for the value of P. Well, if E chose a value of P such that this value here was larger than this value here, then O would know to always play 1, and similarly, if this value was larger, then O would know to always play 2. So, it seems that what E wants to do is choose the value of P such that these 2 are equal. So, how much is that? Well, let's see. This is 2P - 3(1 - P), which is 2P - 3 + 3P, that's 5P - 3, and we want to set that equal to -3P + 4(1 - P), which is -3P + 4 - 4P, that's -7P + 4. And let's gather the terms together. So, that would be 12P = 7, or P = 7/12. So, if E chooses the value of P = 7/12, so 7/12 of the time play 1, then O doesn't know what to do. No matter whether he chooses 1 or 2, he gets the same result. And you can do the same calculation over here, and it turns out that Q also equals 7/12. Now, let's take this strategy of 7/12: 1 and feed it back into the matrix for the game, and if E plays this strategy of 7/12: 1, 5/12: 2, then no matter what strategy O plays, the value of the game to E, the utility to E, is -1/12. And then we can do the same computation over here. If O has the strategy 7/12: 1 and 5/12: 2, then we plug that back in here, and no matter what strategy E chooses, the value there is also -1/12. And so now we've shown that the utility to E is greater than or equal to -1/12 and less than or equal to -1/12. In other words, the utility to E is exactly -1/12, so we've solved the game.
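Here's a quick check in Python (not part of the lesson) that plugs P = 7/12 back into those expected payoffs using exact fractions; both of O's pure responses give -1/12, confirming the value of the game.

```python
from fractions import Fraction

# Payoff to E for (E's fingers, O's fingers), as in the Morra matrix above.
payoff = {(1, 1): 2, (1, 2): -3, (2, 1): -3, (2, 2): 4}

def expected_value_to_e(p, o_move):
    """E plays 1 with probability p and 2 with probability 1 - p; O plays o_move."""
    return p * payoff[(1, o_move)] + (1 - p) * payoff[(2, o_move)]

p = Fraction(7, 12)
print(expected_value_to_e(p, 1), expected_value_to_e(p, 2))   # both -1/12
```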

Mixed Strategy Issues

Now, the introduction of mixed strategies brings us some curious philosophical problems related to the ideas of randomness, secrecy, and rationality. We said that sometimes the rational strategy can be a mixed strategy. That is, one with probabilities in it: with probability P, I do action A, and with probability 1 - P, I do action B. And that suggests that we need some secrecy so that our opponent doesn't know which of these random choices we're making. The curious thing is that that's only true at the level of an individual play, not at the level of the strategy itself. So, if this is the optimal strategy, a mixed strategy, it's okay for us to reveal that strategy to our opponent, because our opponent can also compute that that's our rational strategy, and so we won't do any worse by revealing to the opponent exactly what our strategy is. However, there's a distinction between the grand strategy, that is, whenever we're faced with playing this game, this is what we'll do, which can be revealed, and the actual choice, that this particular time we're going to choose A or we're going to choose B, which of course has to be kept secret. If we reveal that, if our opponent can somehow discover which choice we're going to make based on this random choice, then our opponent can get an advantage over us. Now, with respect to rationality, we said that a rational agent is one that does the right thing, and that's still true. However, it turns out that there are games in which you can do better if your opponent believes you are not rational, and that has been said about various politicians throughout history, and I won't pick on one or another. But sometimes it has been said that they are intentionally cultivating an image of being crazy so that they can gain an advantage when faced with certain games with opponents. For example, suppose 1 action available to a leader is to go to war, but both sides realize that the strategy of going to war is dominated by other strategies and thus would be irrational. So, if a leader who is perceived to be rational makes a threat of "Give me this concession, or I'll go to war against you," that's not a credible threat. The leader's threat would be dismissed, and it would have no effect. However, if the leader can convince the opponent that he is irrational or crazy, then the threat suddenly becomes credible. And so note that being irrational doesn't help, but appearing irrational can gain you an advantage.

2x2 Game Question 1

Now, let's give you a chance to solve a game. This will be another 2x2 game, and let's just go ahead and call the players max and min, and they each have 2 moves, and we'll make this be a zero-sum game. We'll just show the value to max, and the value to min will be the negation of that. And what I want you to do to solve the game is tell me what the strategies should be by filling in these blanks. What are the percentages that min should play 1 or 2? And in these blanks, the percentages for max to play 1 or 2. And then tell me the final value for the game: the utility or expected value to max equals what?

2x2 Game Question 1 Solution

We see in this game each player has a dominant strategy. For max, if min plays 1, then playing strategy 1 is better than playing strategy 2, and if min plays 2, strategy 1 is also better than strategy 2. So strategy 1 is the dominant strategy, and that should have probability 1. Strategy 2 should have probability 0. And it's the same thing for min, that 2 minimizes better than 1 in both cases. So, strategy 2 should have probability 1, and strategy 1 should have probability 0, and that means we're always going to end up with this outcome, and the value of the game is 3.

2x2 Game Question 2

Now, that last one was easy. Let's do one more. Here we have a game where the payoffs to max are 3 and 6 if max plays 1, and 5 and 4 if max plays 2, against min playing 1 and 2 respectively. And I want you to tell me what the strategy is, whether it's pure or mixed. What are the probabilities that max should play 1 and 2, and what are the probabilities that min should play 1 and 2? And what is the value of the game to max?

2x2 Game Question 2 Solution

So, in this case, there's no dominant strategy, so we'll have to go to a mixed strategy. And we'll start by looking at max and saying he has a mixed strategy with a probability P of playing 1, so then if min chooses 1, then we'll have the outcome 3P + 5(1 - P). And we want to set that equal to the outcome if min plays 2, which is 6P + 4(1 - P). And we solve that, and that works out to P = 1/4. So, P, that was the probability of max playing 1. That should be 1/4, which leaves 3/4 over here. And now we go at it from min's direction, and if min has a probability of Q of playing 1, then we want to set 3Q + 6(1 - Q) equal to 5Q + 4(1 - Q). And you solve that, and you get Q = 1/2. So, 1/2 and 1/2. And then the utility of the game is the expected value, so we look at all the outcomes and the probability of each outcome. So, 3 times 1/8, because 1/4 times 1/2 would be the probability, plus 6 times 1/8, plus 5 times 3/8, plus 4 times 3/8, and that works out to 4.5.
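If you want to check the arithmetic, here's a short Python sketch (not part of the lesson) that assumes the payoff matrix is [[3, 6], [5, 4]], with rows being max's moves and columns min's moves as in the solution above, and derives P, Q, and the game value from the indifference equations.

```python
from fractions import Fraction

# Payoffs to max: M[i][j] is the outcome when max plays move i+1 and min plays move j+1.
M = [[3, 6],
     [5, 4]]

# p = probability max plays 1, chosen so min is indifferent:
#   M[0][0]p + M[1][0](1-p) = M[0][1]p + M[1][1](1-p)
p = Fraction(M[1][0] - M[1][1], (M[0][1] - M[0][0]) + (M[1][0] - M[1][1]))
# q = probability min plays 1, chosen so max is indifferent:
#   M[0][0]q + M[0][1](1-q) = M[1][0]q + M[1][1](1-q)
q = Fraction(M[0][1] - M[1][1], (M[1][0] - M[0][0]) + (M[0][1] - M[1][1]))

value = sum(prob_i * prob_j * M[i][j]
            for i, prob_i in enumerate((p, 1 - p))
            for j, prob_j in enumerate((q, 1 - q)))
print(p, q, value)   # 1/4, 1/2, 9/2
```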

Geometric Interpretation

Here's a geometric interpretation that may help you understand a little better what's going on. Here we've gone back to the two-finger Morra game. Now, remember we looked at the two possibilities of E going first and revealing a strategy of playing one with probability p and two with probability 1 - p. Now O has a choice of what to do, and O wants to minimize. If O chooses one, he'll be somewhere along this line corresponding to this strategy for different values of p. This graph here is showing the utility to E for different values of p that E chose in E's strategy. Now, since O gets to choose the strategy of doing one or two, O can achieve the minimum, so O can be anywhere along this lower frontier. It makes sense for E to push that up as high as possible, since E is the maximizer. E will choose this point here, which turns out to be at p = 7/12. The same argument goes on this side. Here O has gone first and chosen the strategy q: one and (1 - q): two. Now E has a choice of what to do. E can either choose two and be along this line, depending on what the value of q is, or can choose one, which will be along this line. It would make sense for O, who is trying to minimize, to get this frontier down as low as possible. O would choose the value of q that puts us right here at this point. It turns out that that also is q = 7/12. We see that each side is trying to maximize or minimize, and we end up at a distinguished point that's the intersection of their two strategies.
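If you'd like to reproduce that picture, here's a small sketch (not part of the lesson, assuming numpy and matplotlib are available) that plots E's expected utility against p for each of O's pure responses; the two lines cross at p = 7/12.

```python
import numpy as np
import matplotlib.pyplot as plt

p = np.linspace(0, 1, 101)
plt.plot(p, 2 * p - 3 * (1 - p), label="O plays one: 2p - 3(1 - p)")
plt.plot(p, -3 * p + 4 * (1 - p), label="O plays two: -3p + 4(1 - p)")
plt.axvline(7 / 12, linestyle="--", label="p = 7/12")
plt.xlabel("p = probability that E plays one")
plt.ylabel("expected utility to E")
plt.legend()
plt.show()
```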

Poker

Now so far we've dealt only with games that take a single turn--that is, there are two players, they both simultaneously reveal their move, and the game is over. But game theory can also deal with more complex games that have multiple rounds of turn taking. Here I'm describing a simple game of poker, the simplest type of poker you've probably ever seen. The deck only has four cards. One card is dealt to each player. There are two rounds. In the first, player one has a choice to either raise--to bet a dollar--or to check. Then in the second round, the second player has the chance to call--to say I want to see what's up--or to fold. Now this format begins to look very much like the game tree that we talked about in the previous lesson. It starts out and there's a chance node. This corresponds to dealing the cards, with probability 1/6 that the first player gets an Ace and the second player gets an Ace, one-third that the first player gets an Ace and the second player gets a King, and so on. Then there are maximizing nodes and minimizing nodes. What this format, which is known as the sequential game format, is especially good at is keeping track of the belief states, the possibilities of what each agent knows and doesn't know. The tree as a whole describes everything that's going on, but each agent doesn't know at which point in the tree they are. So if you're agent number one, you know that you have an Ace, so you know you're in one of these two states denoted by the dotted lines. You're either in the state where you have an Ace and the other player has an Ace, or in the state where you have an Ace and the other player has a King, but you don't know which one you're in. Similarly, over here there is confusion for the second player as to what state they're in. Now, we can solve this game using this game tree approach, and it's not quite the same as the max and min approach, because where you are in the states, what you know about the partial information, affects your strategy in a way that we haven't dealt with before. One possibility for how you can evaluate a game like this is just to convert it to the other form. The form we've seen before is called the normal form or matrix form. This is the sequential game in extensive form. If we convert from the extensive form, we get something like this. Here for each player, we've denoted by a two-letter strategy what you should do when you have an Ace and what you should do when you have a King. So we end up with an exponentially large search space, but here the game was so simple that it ends up being rather small, and the game is rather trivial, and you can solve it. It turns out that there are two equilibria, corresponding to the strategy for player two, which is he should call when he has an Ace and fold when he has a King, and the strategy for player one, which is it doesn't matter if he raises or checks when he has an Ace, but he should check when he has a King. That would give the game a value of zero. Now this works fine for the simple version of poker. For real poker, this table would have about 10^18 states, and it would be impossible to deal with. So we need some strategies for getting back down to a reasonable number of states.
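The chance node's probabilities are easy to verify. Here's a small Python sketch (not part of the lesson); it assumes the four-card deck is two Aces and two Kings, which is consistent with the 1/6 and 1/3 figures above, and enumerates every ordered deal of one card to each player.

```python
from itertools import permutations
from collections import Counter
from fractions import Fraction

deck = ["A", "A", "K", "K"]                      # assumed four-card deck
deals = list(permutations(deck, 2))              # (player 1's card, player 2's card)
counts = Counter(deals)
for deal, n in sorted(counts.items()):
    print(deal, Fraction(n, len(deals)))         # ('A','A') 1/6, ('A','K') 1/3, ...
```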

Game Theory Strategies

One of the best strategies is to try abstraction. Instead of dealing with every single possible state of the game, we can take similar states and deal with them as if they were the same. For example, in poker one abstraction that works pretty well is to eliminate the suits. If no player is trying to get a flush, then we can treat all four Aces as if they were identical rather than treating the four of them as being different, and similarly with all the other face values. Another thing we can do is lump similar cards together. Rather than saying that 2, 3, 4, and 5 are all different values, if I know that I'm holding a pair of 10s, then I can think of the other players' cards as being equal to 10, lower than 10, or higher than 10, and otherwise lump them together as the same. Similarly, I can lump bets together. Rather than thinking of every dollar amount of a bet from $1 to the upper limit, I can lump the bets into small, medium, and large. Then finally, another way to do abstraction is, rather than considering every possible deal of all the cards, I can just consider a small subset of the deals, doing Monte Carlo sampling over the possible deals rather than considering them all. This extensive-game approach can handle quite a lot in terms of dealing with uncertainty, partial observability, multiple agents, and stochastic, sequential, dynamic environments. But there are a few things it can't handle very well. It isn't very good at unknown actions. We need to know what all the actions are for either player before we can define the game. Game theory doesn't deal very well with continuous actions, because we have this matrix-like form. It doesn't deal very well with irrational opponents. We can know that we're going to do the best we possibly can against a rational opponent, but it doesn't tell us how to exploit our opponent's weakness if he turns out to be irrational. Then finally, it doesn't deal with unknown utilities. If we don't know what it is we're trying to optimize, game theory isn't going to tell us how to do it.
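To make the abstraction ideas a bit more concrete, here's a toy sketch in Python (not part of the lesson, and not a real poker solver): it ignores suits, lumps the opponent's cards into lower/equal/higher buckets relative to a held pair of 10s, and estimates how often each bucket combination comes up by Monte Carlo sampling over deals instead of enumerating them all.

```python
import random

# Represent a 52-card deck as ranks only (suits abstracted away): 2..10, J=11, Q=12, K=13, A=14.
DECK = [rank for rank in range(2, 15) for _ in range(4)]

def bucket(rank, my_rank=10):
    """Lump a card into a coarse bucket relative to my pair of 10s."""
    return "lower" if rank < my_rank else "equal" if rank == my_rank else "higher"

def sample_opponent_buckets(n=10_000, my_rank=10):
    """Monte Carlo over deals: remove my pair from the deck, sample the opponent's
    two cards, and record their buckets, instead of enumerating every deal."""
    deck = list(DECK)
    deck.remove(my_rank)
    deck.remove(my_rank)                 # my pair of 10s is out of the deck
    counts = {}
    for _ in range(n):
        hand = tuple(sorted(bucket(c, my_rank) for c in random.sample(deck, 2)))
        counts[hand] = counts.get(hand, 0) + 1
    return {hand: count / n for hand, count in sorted(counts.items())}

print(sample_opponent_buckets())
```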

Fed Vs Politicians Question

This exercise describes a game played between the Federal Reserve board and politicians. Now the politicians have a choice whether they want to contract fiscal policy, expand it, or do nothing, and the Fed has the same three choices. Each party has a preference for what outcome they would like to see. Here we've ranked them for each party from 1 being the worst outcome to 9 being the best outcome. What I want you to do is find the equilibrium point for this game. There will be one equilibrium point, and I want you to find it. The equilibrium point defines a pure strategy for each player. Tell me the pure strategy for the Fed. Is it contract, do nothing, or expand? Click on the right box here. Similarly, for the politicians, click on the right box for their strategy, which leads to the equilibrium point. Then tell me the outcome of the game for each player for that equilibrium point. Then tell me if the equilibrium is Pareto optimal.

Fed Vs Politicians Question Solution

Now, we could determine the equilibrium point by examining all 9 of the outcomes and checking each one to see if both parties do no better by switching. But instead, I'm going to show an alternative method to analyze games, which is to look for dominated strategies. There are no dominant strategies here, but there are dominated strategies. For example, for the politicians, the strategy of contracting is dominated by the strategy of doing nothing. To the politicians, 2 is greater than 1, 5 is greater than 4, and 9 is greater than 6. We can say that this strategy is dominated, and we can take it out of consideration. Now, how does that help? Well, now in the other direction, we do have a dominant strategy that we didn't have before. Now for the Fed, the option of contracting gives them 8, which is better than 5 or 4, and 3, which is better than 2 and 1. This is a dominant strategy for the Fed, and we can mark that off. Now for the politicians, they know they're going to be in this column, and they have a choice of getting a 2 or a 3. The 3 would be the strategy for the politicians. That leads us to this Nash equilibrium point, and the values of that outcome are 3 for each party. Is that Pareto optimal? Actually, it's more like Pareto pessimal in that this is the worst total: out of all these outcomes, the total is only 6, whereas every other one is better. To answer the question specifically, is it Pareto optimal, the answer is no, because any of these four would be better for both parties. That may tell you something about our political system. Next time you get an outcome that you don't like, don't assume that the players are irrational. Just assume that that's the way the game was set up.
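Here's a small sketch in Python (not part of the lesson) of that iterated elimination of dominated strategies. The full Fed-versus-politicians matrix isn't reproduced in this transcript, so the code is demonstrated on the Prisoner's Dilemma payoffs from earlier; the same procedure applied to the Fed game would leave just the single equilibrium described above.

```python
# Iterated elimination of strictly dominated strategies for a two-player game
# in normal form, demonstrated on the Prisoner's Dilemma payoffs.
payoffs = {
    ("testify", "testify"): (-5, -5),
    ("testify", "refuse"):  (0, -10),
    ("refuse",  "testify"): (-10, 0),
    ("refuse",  "refuse"):  (-1, -1),
}

def pay(player, mine, theirs):
    """Utility to `player` (0 = row, 1 = column) when it plays `mine` against `theirs`."""
    key = (mine, theirs) if player == 0 else (theirs, mine)
    return payoffs[key][player]

def find_dominated(player, own, other):
    """Return one of `own`'s strategies that is strictly dominated by another, if any."""
    for s in own:
        for t in own:
            if t != s and all(pay(player, t, o) > pay(player, s, o) for o in other):
                return s
    return None

row = {"testify", "refuse"}
col = {"testify", "refuse"}
removed = True
while removed:
    removed = False
    for player, own, other in ((0, row, col), (1, col, row)):
        s = find_dominated(player, own, other)
        if s is not None:
            own.discard(s)
            removed = True

print(row, col)   # both reduce to {'testify'}
```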

Mechanism Design

Now let's switch to the other part of game theory, which remember we called mechanism design. It could really be called game design. The idea is that someone is going to be running a game that players are going to be participating in. We want to design the rules of the game such that we get a high outcome or a high expected utility for the people who run the game, for the players who play the game, and for the public at large. Here's an example of a game. This is the advertising game. Here I've shown it on an Internet search engine, where you do a search, and then ads show up, sometimes at the top, sometimes at the right, sometimes at the bottom of the page, depending on the mechanism. This is also done at sites like eBay that sell items, and there are lots of places where auctions are run. The idea of mechanism design is to come up with the rules of the auction that will make it attractive to bidders and/or people who want to respond to the ads, and make a good result for all. Now, one property that you would like an auction to have is to attract more bidders to make it a more competitive market, and you can attract more bidders if it's less work for them. It's easier for the bidders if they have a dominant strategy. You saw how hard it was to work out the value of a game when you didn't have a dominant strategy, and how easy it is to work it out if you do. If you want to save everybody a lot of trouble, design the game so that dominant strategies exist. These strategies have various names in auctions. Sometimes we call an auction strategy-proof if you only need to know your own strategy and you don't have to think about what all the other people are going to be bidding. That's also called truth revealing or incentive compatible.

Auction Question

Let's examine a type of auction called the second-price auction. This is popular in various internet search and auction sites. The way it works is that we have a line of possible prices--higher prices at the top--and bids come in. Different players can bid whatever they want, and whoever bids the highest is the winner, but the price that they pay is the price of the second highest bidder. Now let's say you're participating in this auction, and something is for sale, and you place a value on that. We'll call that value "V", and say V is here. Your bid we'll call "b", and the highest other bid we'll call "c". Now the payoff to you: if your bid is higher than all the others, then you win the item, so you get its value V, but you have to pay the second highest price, which is c. You get V minus c. Otherwise, you lose the auction. You don't get anything, and you don't pay anything. The value to you of the auction is zero. What I want you to do is fill in this chart to look at different strategies for different possible bids. We'll say that the value to you of the item for sale is V equals 10. You have the option of bidding, say, 12, 10, or 8, and we'll consider the cases where the highest other bid is 7, 9, 11, or 13. What I want you to do is fill in this chart with the value to you of this game according to your strategy and the strategies of the other players. Tell me if one of these strategies is a dominant strategy. Then tell me, is that dominant strategy, if there is one, a truth-revealing strategy? I should add one note about dominance. When we talked about it before, we glossed over the possibility of ties. If some policy is better everywhere than any other policy, then we say that that policy strictly dominates the others. On the other hand, if there are some ties and some places where it's better but none where it's worse, then we say it weakly dominates. Either way, it's a case of dominance. Now I'll do the first entry to get you started. If you bid 12 and the highest other bid is 7, then you have the high bid, so you win. It's a second-price auction, so you pay 7. The value of the goods is 10, so the total value of the outcome is 10 minus the cost of 7, which is 3. I want you to fill in the rest.

Auction Question Solution

Here are the answers. We can see that the strategy of bidding 10, the true value, is weakly dominant. It's the same here, but it's better in these two cases. Let's look at these cases a little bit more carefully and figure out what's going on. If you bid 12--so if b was up here--and if c snuck in between your valuation and your bid, then you'd be paying too much. You'd be paying more than the goods are worth. You'd end up with a negative utility. So you don't want to bid more than what it's really worth to you. On the other hand, if you bid down here, and if c snuck in between what your bid is and what the valuation is, then you've lost the auction, and you get a zero, but you should have won--or it would have been worth your while to win--because the price still would have been a bargain to you. That says that the rational strategy, the dominant strategy in a second-price auction, is to bid your true value, and that makes it a truth-revealing auction mechanism.
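Here's a short Python sketch (not part of the lesson) that fills in the same chart: the payoff is V - c when your bid b beats the highest other bid c, and 0 otherwise (ties are ignored, as in the chart), and the final assertion confirms that bidding your true value of 10 weakly dominates the other bids considered.

```python
V = 10                         # your true value for the item
bids = [12, 10, 8]             # candidate bids b
others = [7, 9, 11, 13]        # highest competing bid c (no ties in this chart)

def payoff(b, c, v=V):
    """Second-price rule: win and pay c if you outbid c, otherwise get nothing."""
    return v - c if b > c else 0

for b in bids:
    print(b, [payoff(b, c) for c in others])
# 12 [3, 1, -1, 0]
# 10 [3, 1, 0, 0]
#  8 [3, 0, 0, 0]

# Bidding the true value is never worse and sometimes better: weak dominance.
assert all(payoff(10, c) >= payoff(b, c) for b in bids for c in others)
```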