Perfectly logical beings don't make sense in the real world. Their inputs and outputs can all be precisely determined relatively easily once you figure out the trick to the puzzle. Real people are much more complicated, though I must admit I am quite intrigued to learn more about what Danielle was talking about in the Facebook thread with regard to actually working those sorts of things out in a non-puzzle sense.
Daniel and Sthenno both brought up what would happen if pirate E broke ranks and voted against the solution 1-0-1-0-98 even though we 'proved' that it's in his best interest to vote yes. In the real world, maybe E can do that. He may have some input that we didn't think to account for (maybe he truly thinks one is the loneliest number and would have voted for any split that gave him anything other than one coin), which causes him to behave in what we perceive to be an 'irrational' way. I'm sure he has a good explanation for what he did. But that would make him a real, complicated person and not a perfectly logical being.
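The 'proved' solution is just backward induction: work out the two-pirate endgame, then add pirates back one at a time, with each proposer buying the cheapest votes available. Here's a minimal Python sketch under the standard puzzle rules (a proposal passes with at least half the votes, proposer included, and a pirate votes yes only if the offer strictly beats what they'd get in the next round); it lists splits most-senior first:

```python
def split(n_pirates, coins=100):
    """Backward-induction solution to the pirate game.

    Standard puzzle assumptions: pirates are ranked, the most senior
    proposes, a proposal passes with at least half the votes (proposer
    included), and a voter only says yes if the proposal strictly beats
    what they'd get once the proposer is thrown overboard.
    """
    if n_pirates == 1:
        return [coins]
    # What each junior pirate would get if this proposal fails.
    future = split(n_pirates - 1, coins)
    # Besides their own vote, how many more does the proposer need?
    votes_needed = (n_pirates + 1) // 2 - 1
    # Buy the cheapest votes: each bought voter needs exactly one
    # coin more than their next-round payoff.
    cheapest = sorted(range(n_pirates - 1), key=lambda i: future[i])
    alloc = [0] * (n_pirates - 1)
    for i in cheapest[:votes_needed]:
        alloc[i] = future[i] + 1
    return [coins - sum(alloc)] + alloc

print(split(5))  # [98, 0, 1, 0, 1]
```

Running `split(4)` gives `[99, 0, 1, 0]`: once A is gone, a fully rational B keeps 99 coins by tossing D a single coin, which is exactly the line of reasoning E's weird vote blows up.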
So what happens in the puzzle when E breaks rank? He doesn't. The puzzle world simply doesn't work that way. The puzzle is a little logic problem designed to be solved in a reasonably short period of time with very limited information. In an interview situation especially, you may win bonus points with the interviewer by thinking 'outside the box' and giving the pirates backstories that cause weird votes. Or maybe you'll just annoy the interviewer and lose the job. Who knows!
Sthenno also said he'd better hope I'm not actually dividing pirate plunder with him in this way, since I'm going to die if I do. Now, I like my friends from University and all, but there's no way I'm ever going to let them vote on whether I get to live or die. (Can you imagine betting your life that you know how Bung is going to act in any given situation?) But I do play a lot of board games where this sort of decision can come up... Take El Grande, for example. You draft actions in that game. Some actions really impact board position. Other actions moderately impact board position and score points. Often the situation comes up where you can take the point-scoring action, or you can 'move the king' and set up the point-scoring action to be really good for you. The trick is convincing the next guy to score points for you! (He can take the action but not activate it if he thinks it's going to be 'too good' for you.) Is a 10-2-0-0-0 split good enough? 10-4-0-0-0? 6-8-4-0-0? It depends on who you're playing with and the state of the board. Games like that (Modern Art, Dominant Species, and even Carcassonne) are all about putting yourself in a position where other people will score you points - but you can't make it so they're only scoring you points, or they won't do it.
But I digress. I've put a lot of thought into what happens when E isn't actually a perfectly logical being so I might as well say what I'd do in B's shoes. Here's what I believe to be true...
- D wants to get down to 2 people at this point and likely thinks she can capitalize on E's apparent randomness to get all the loot. She's probably voting against anything I propose and is definitely voting against anything C proposes after I die.
- I don't trust E. For all I know he just likes to see people die and would even vote against a split giving him all the loot. Who needs cash when you can have blood?
- Death is now a very real option for me.
- Death is also a very real option for C. I probably need his vote to not die, but the trick is he probably needs my vote for him to not die as well. Unless he's willing to bargain with E he needs my split to pass.
I've gone back and forth in my head about what I'd actually do. My first gut feeling was actually to propose 0-0-0-100. That's right, all the loot to me. I'd be banking on the fact that C trusts E as much as I do and that he realizes voting against my split means he dies too.
Then I thought that the life of a pirate probably isn't what I want out of life and I just want to survive. The best way to do that is the 0-0-100-0 split. C can have all the loot and we both get to live. I'm guaranteed to survive this time! (I know C isn't a lunatic since he did actually vote for 1 coin when A offered it.)
But then I'm thinking of the ultimatum game... I don't really want to make C angry. It may well go against all that greed stands for, but E threw greed out the window when he killed off A. So rather than make C an offer he simply can't refuse (0-0-100-0) I'm going to make him one he shouldn't refuse. 0-0-50-50. And then we stop sailing around with E.
1 comment:
I think the idea of "purely logical being" simply doesn't work and that it unquestioningly imports a defecting-centric worldview.
Obviously there is more than one way of doing logic. I think it is safe to assume that a purely logical being cannot question its own axioms (that kind of throws all the purity out the window), but that means we are the ones who define its axioms.
I think your purely logical greedy beings make decisions as follows: given a choice, calculate the expected value of every path from each decision and choose the one with the highest expected value for me.
But that is an axiom, not a provable logical conclusion. Suppose I make my own purely logical being that makes decisions this way: given a choice, calculate the expected total number of points added to the entire game along each path, and choose the one that puts the most points into the game, regardless of whether they go to me or not.
You may claim that method of logic is not "greedy" but here is the thing: if we are playing prisoner's dilemma, then my axiom generates three times as many points as yours - who is greedy now? The fact that your axiom beats mine head-to-head does not prove anything. Lots of games have bad strategies that beat good strategies head-to-head (paper, for instance).
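To put numbers on the "three times as many points" claim: with the textbook prisoner's dilemma payoffs (an assumption on my part, since the comment doesn't specify a matrix), two self-maximizing players both defect and total 2 points, while two pool-maximizing players both cooperate and total 6. A quick Python check:

```python
# One-shot prisoner's dilemma with the textbook payoffs
# (temptation, reward, punishment, sucker) = (5, 3, 1, 0).
# Keys are (my move, their move); values are (my points, their points).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def total_points(my_move, their_move):
    """Points added to the entire game, regardless of who gets them."""
    mine, theirs = PAYOFF[(my_move, their_move)]
    return mine + theirs

# The maximize-my-own-points axiom defects, so two such players land
# on (D, D). The maximize-the-pool axiom cooperates: (C, C).
print(total_points("D", "D"))  # 2
print(total_points("C", "C"))  # 6 - three times as many
```

The exact ratio depends on the payoff numbers chosen, but for any matrix satisfying the prisoner's dilemma ordering, mutual cooperation always puts more total points into the game than mutual defection.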
And you may say that my axiom only works if I can guarantee it on both sides, but that isn't true either. That's where Newcomb's paradox comes back in. In Newcomb's paradox, from the box-chooser's end you are playing prisoner's dilemma exactly (taking only box B is cooperating from your end, and putting the money in the box is cooperating from their end), but your opponent has a different scoring system. Assuming they care most about being right and only secondarily about the money, their best possible result is that you both defect. And one of the stipulations of the game is that they know your axioms and your system of logic (this is not an unfair stipulation; the pirate game uses it too). My axiom of maximizing the total pool of points again gives much better, greedier results than the tacitly accepted axiom of maximizing your personal number of points - regardless of which of the two axioms the opponent is using.
So I know that 98-0-1-0-1 is the correct solution given these assumptions, but I don't think that these assumptions describe perfectly logical beings. They describe perfectly logical beings with a particular algorithm for determining their choices - an algorithm that performs well in some games and poorly in others.
What if each pirate had to stipulate their decision-making process ahead of time (but, again to keep the question from being meaningless, they cannot change their decision-making system after hearing those of the other pirates)? Do you think the decision-making system you say pirates B through E are using would be the one that nets them the most coins? In fact, we know that for pirates B through E, accepting this axiom produces the worst or second-worst possible result individually and the third-worst possible result collectively. I understand that to make the challenge make sense we have to have pirates that are purely logical and greedy - but we don't have to have pirates that are programmed to do extremely poorly.