Running It Once vs. Twice: Can You Prove the EV Is the Same?!



There has been much discussion on this forum about whether "running it twice" has an effect on the expected value (EV) of the hand. Several of us, including Daniel himself, have argued that running it once versus twice makes no difference to the EV.

I agreed, but that was based on intuition alone, and intuition is often the foe of the mathematician, which brings me to the reason for this post.

Can you offer a mathematical proof of the following statement? "The EV of running a hand once is the same as the EV of running it twice, regardless of how many cards are left in the deck or the number of outs a player has."

Of course, an example is not a proof. Neither is 100 examples. We are looking for a bona fide proof here. I doubt very seriously that the poker pros themselves know why this is in fact true.

I have a proof that I shall share soon. In the meantime, give it some thought and enjoy! :club:

x = pot size
a = % chance of winning it

EV of running it once = ax
EV of running it twice = (x/2)a + (x/2)a = ax/2 + ax/2 = ax

Are you serious? Unless this is somehow not a "correct" mathematical proof... whatever, enlighten us. This can't possibly be correct if you're making such a big deal out of it, because a middle schooler in Algebra I could figure this out.

The % chance of winning the pot on each run isn't fixed. Suppose you only have 1 out in the deck. Then the % chance you win it on the second run is affected by what happened on the first run, i.e. it is not constant.
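To make that concrete, here is a quick Python sketch (a sanity check, not a proof) using exact fractions and hypothetical numbers: 44 unseen cards, 1 out, one card per run. The chance of hitting on the second run conditionally depends on the first run, yet its unconditional value, and therefore the EV of half the pot per run, does not change.

```python
from fractions import Fraction

N, OUTS = 44, 1                     # hypothetical: 44 unseen cards, 1 out
p_first = Fraction(OUTS, N)         # chance of hitting on the first run

# The chance on the second run depends on what the first run did...
p_2nd_after_hit = Fraction(OUTS - 1, N - 1)    # the only out is gone
p_2nd_after_miss = Fraction(OUTS, N - 1)

# ...but averaged over the first run it is still OUTS/N
p_second = p_first * p_2nd_after_hit + (1 - p_first) * p_2nd_after_miss
assert p_second == p_first

# So half the pot on each of two runs has the same EV as one run for all of it
ev_twice = Fraction(1, 2) * p_first + Fraction(1, 2) * p_second
assert ev_twice == p_first
```

This is exactly the point of the objection: the conditional probabilities differ, so the simple "a is constant" algebra above doesn't apply directly, even though the conclusion turns out to be right.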


Well, I could make a longer equation by using "a" and "b" and making them represent the odds of winning each half of the pot. It would be longer, but it would still be algebra. How is this for the "mathematically inclined"? It seems to be more for those with basic mathematics skills who have a lot of time. Sure, throw in the non-replacement factor and it becomes much more complicated, but hell, why not throw in redraws, backdoor outs, etc., too? Hey, why not throw in a third player and wild cards? Your question is just like asking "how often will AKs beat 72o with 5 cards to come, NOT calculated empirically?" It is ridiculously complicated to calculate and gives little to nothing of value.

Well, I could make a longer equation by using "a" and "b" and making them represent the odds of winning each half of the pot. It would be longer, but it would still be algebra. How is this for the "mathematically inclined"?
I didn't say the proof would be sophisticated. It is indeed just algebra. But either you want to prove it or you don't: saying there is some a and some b that work is hand-waving. How do you know until you sit down and work out the details? Again, don't get me wrong: I'm not saying the proof is beyond anyone who passed algebra class. I'm just saying the devil is in the details. :club:

I did something like a proof this morning. Okay, it's not a real proof and it makes some silly assumptions and simplifications, but here's how it goes.

We assume that one player is on a "draw" and has x outs to win with two cards to come. Of course, x could be above 24, meaning he is really ahead, but this doesn't matter. For the sake of this example, we will assume that x is greater than 3.

We then enumerate the 16 possibilities for the next four cards (the first turn and river, then the second turn and river). We will call hitting an out H and missing an out M, and list the fraction of the pot the drawing player earns in each case:

HHHH 1, HHHM 1, HHMH 1, HHMM .5
HMHH 1, HMHM 1, HMMH 1, HMMM .5
MHHH 1, MHHM 1, MHMH 1, MHMM .5
MMHH .5, MMHM .5, MMMH .5, MMMM 0

For example, if it comes HH HH, the player with the "draw" wins both times, so he gets 1; if it comes HH MM, he gets 1/2. We can find the expectation of the outcome by multiplying the share of the pot earned in each scenario by the probability of that scenario. I'm not going to enumerate how to get the probability of each scenario, because it's both straightforward and annoying, but here is an example:

HHMM: (1/(48*47*46*45)) * (x)(x-1)(46-(x-2))(45-(x-2))

So, we multiply each probability by its corresponding share of the pot (1, .5, or 0), sum them all, and get the following EV as a function of x outs:

(95x - x^2)/2256

We can do the same steps with only one turn and river and find:

(1/(48*47)) * ((x)(x-1) + (x)(47-(x-1)) + (48-x)(x)) = (95x - x^2)/2256

This is the same equation as above. Thus, we have demonstrated in this model that, with any number of "outs," the expectation is the same.

I noted above that x was greater than 3. I imposed this because, when calculating the probability of certain hands, the term x-3 came up. Basically, this means that if the number of outs is 3 or less, the outcome HHHH cannot happen, and other outcomes are similarly made impossible for smaller values of x. Of course, this leaves only three cases (x=1, x=2, x=3) out of our proof, and these can easily be checked manually as examples.

It took me a few tries to get every probability right, since a small error would make the numbers not match up. I used a TI-89 to do the algebra, since it would be a pain otherwise. I was thinking about doing something more sophisticated using counter outs or something, but that would be much longer.

To fully get what you are looking for (i.e., "regardless of the number of cards left in the deck"), one can easily change the constants involved. It's trivial, so I didn't include it explicitly: just replace 48 with y, 47 with y-1, and so on. One comes up with the following EV for x outs with y cards left in the deck:

1/((y)(y-1)) * ((x)(x-1) + (x)(y-1-(x-1)) + (y-x)(x)) = -(x)(x-2y+1)/((y)(y-1))
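In the spirit of the thread (an example is not a proof!), here is a short Python check of the model above. It walks the 16 H/M sequences with exact fractions, using the sequential draw probabilities, and confirms the (95x - x^2)/2256 closed form. The deck size of 48 is taken from the post; everything else is bookkeeping.

```python
from fractions import Fraction
from itertools import product

def ev_run_twice(outs, deck=48):
    """Exact pot share for the drawing hand when two 2-card boards are
    dealt without replacement; a board wins if it contains an out."""
    total = Fraction(0)
    for pattern in product('HM', repeat=4):   # the 16 hit/miss sequences
        p = Fraction(1)
        hits, misses = outs, deck - outs
        for c in pattern:                     # sequential draw probabilities
            n = hits + misses
            if c == 'H':
                p *= Fraction(hits, n)
                hits -= 1
            else:
                p *= Fraction(misses, n)
                misses -= 1
        # half the pot for each of the two boards that contains an H
        share = Fraction(('H' in pattern[:2]) + ('H' in pattern[2:]), 2)
        total += p * share
    return total

# matches the closed form from the post for x > 3
for x in range(4, 25):
    assert ev_run_twice(x) == Fraction(95 * x - x * x, 2256)
```

Since impossible sequences simply get probability zero, the same loop also verifies the x = 1, 2, 3 cases that the post checks by hand.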


The proof is based on the well-known fact from probability theory that the expectation of a sum of random variables is the sum of their expectations, EVEN IF THEY ARE NOT INDEPENDENT. This is a deceptively powerful fact. In math notation, if X and Y are random variables, then

E(X+Y) = E(X) + E(Y)

and in fact

E(aX + bY) = aE(X) + bE(Y), where a, b are constants.

For example, take running the turn and river twice. Let X be the random variable based on the first two cards put out and Y the random variable based on the second two cards, where each is 1 if those cards produce a win and 0 for a loss. Even though X and Y are not independent, the fact above still holds. When running it twice, you're asking: what is

E(.5X + .5Y)?

The additivity theorem for expectations says it's

.5E(X) + .5E(Y)

But if you ran either the first set of two cards or the second set of two cards by themselves, just one time, you should have no problem seeing that, one time, those two cards are just as likely to produce a win as any other two in the deck. In other words, run one time by itself, E(X) is just the probability that X = 1, as it should be. But the Y cards are just as good for running one time by themselves (just burn the other cards without looking), so E(Y) is the same probability of winning. So,

.5E(X) + .5E(Y)
= .5*prob(the X cards produce a win when run one time) + .5*prob(the Y cards produce a win when run one time)
= .5*prob(the X cards produce a win when run one time) + .5*prob(the X cards produce a win when run one time)
= .5*(2*prob(the X cards produce a win when run one time))
= E(X)
= expectation running it once.

The nice thing about looking at it this way is that the same argument holds if you were running the whole board twice, or the whole board for an Omaha hand, where you would not want to list all the cases for the cards that could come out.

It rests on two principles: the additivity of expectations of random variables, and the observation that, for the purposes of running it one time, with all cards sight unseen, any cards you want to pick out of the deck are as good as any others. :club:
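Both principles can be checked by brute force on a toy deck (the numbers here, 8 cards with 3 outs, are hypothetical). Over all shuffles, the second pair of cards off the deck wins exactly as often as the first pair, even though the two indicators are dependent:

```python
from fractions import Fraction
from itertools import permutations

# Toy deck: 8 cards, 3 "outs" (1 = out, 0 = blank)
deck = [1, 1, 1, 0, 0, 0, 0, 0]
perms = list(permutations(deck))

# X: the first two cards off the deck make a winner; Y: the next two do
EX = Fraction(sum(max(p[0], p[1]) for p in perms), len(perms))
EY = Fraction(sum(max(p[2], p[3]) for p in perms), len(perms))

# Sight unseen, the Y cards are as good as the X cards...
assert EX == EY
# ...so by linearity, E(X/2 + Y/2) = E(X), dependence notwithstanding
assert Fraction(1, 2) * EX + Fraction(1, 2) * EY == EX
```

The first assertion is the symmetry observation; the second is the additivity theorem doing all the work.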

well said abba
These are the comments that come from people that believe a certain statement is true, but they either (1) have no idea WHY it is true, or (2) think it is true for an incorrect reason.
:club:
These are the comments that come from people that believe a certain statement is true, but they either (1) have no idea WHY it is true, or (2) think it is true for an incorrect reason. :club:
I actually understand why running the board multiple times benefits players that have the best hand. For example, in heads-up play, suppose player A moves all in after the flop with a set of nines and player B calls with ace-king suited on a 9-4-J board carrying two of B's suit, and they decide to run the board until the deck is dead (I know this will never happen, but I am making a point). Player A, in the long run, will win 75% of the pot and player B will win 25% of the pot. If you haven't noticed, these are the exact odds (or very, very close) of who will have the winning hand at the end of the hand. The only thing running the board multiple times does is reduce variance. If the players decide to run the board only once and a flush card falls, player A loses the whole pot if the board doesn't pair. I know what you are saying, that the situation changes with what cards fall, but in the long run, running the board multiple times reduces variance and helps hedge against huge swings. Before you try to berate somebody, please understand what you are talking about.

Ben

If this is the REAL Andy Beal, I have no problem saying you are better at math and stat theory than I am, but show me where I went wrong.

In fact, PairTheBoard, it follows that

EV(running it once) = EV(running it n times WITHOUT replacement) = EV(running it n times WITH replacement) :D:club:

I actually understand why running the board multiple times benefits players that have the best hand.
Please define "benefit" for me. Do you mean fixed EV with reduced variance? If so, then all I was asking was for you to offer a mathematical proof of why the EV is fixed. I don't see where in this thread you accomplished that.

I'm not trying to berate you, and this is nothing personal. My entire point was to spark discussion on this tidbit (which is very applicable in poker theory): how does one "know" a statement is true if one cannot offer a valid proof of the statement? (And, by the way, saying "I know it's true because, well, it just is!" or "Everyone knows that!" is not a proof.)

I don't know why people were so hostile to this thread. I thought it was a great idea for a thread and was one of the more interesting things I've seen at FCP in a while. Maybe it's just because, in most high school and even college educations, people don't see enough proofs and aren't exposed to their value and beauty. Andy Beal, if you have any other poker theorems that you want us to prove, please post them.

I don't know why people were so hostile to this thread. I thought it was a great idea for a thread and was one of the more interesting things I've seen at FCP in a while. Maybe it's just because, in most high school and even college educations, people don't see enough proofs and aren't exposed to their value and beauty. Andy Beal, if you have any other poker theorems that you want us to prove, please post them.
Nail, meet head! ;-) I do have some more (not all of which I know the answer to!) and I enjoy discussing them with everyone...

I would also like to point out that this fact didn't seem immediately clear to me (i.e., does it make a difference that some of the cards you need to win the second time may have come out the first time?), so seeing the proof was great (then again, I'm a fan of proofs). To use the phrase mathematicians reserve for an especially pretty proof, your proof, PairTheBoard, is "from the book."

Hello. I've done three years of stats for my degree, and this is something that would be filed under "trivial." How you interpret "running it twice" is exactly what the answer is. If you run it twice for half the pot, reshuffling the same dead cards, it's obviously identical. If you do it twice for the whole value, it doubles in scale. If you want to consider what happens when you don't reshuffle the dead cards, it's more complicated and probably shouldn't be handled on intuition alone. But that isn't how it's done, as far as I know.
Actually, none of the three cases is complicated. The same argument works for all three. If you know that expectation is a linear function/functional (the key here is knowing that this holds even if the events are not independent), then they are all equally filed under "trivial." :blush: And people normally do NOT reshuffle the dead cards back into the deck. But that doesn't matter, because the beauty of the theorem is that all three cases have the same EV!
I'm saying that there's at least some reason to "prove" the latter. The former is just ridiculously obvious.
But the latter (the nontrivial case) was precisely what this thread asked for in the first place. Seeing that the former has the same EV (by the same linearity-of-expectation argument) was just gravy.

Not to belabor this, but the proof can also be modified to show that other, less intuitive, ways of running it twice have the same expectation as running it once: for example, running it once, then reshuffling only the second card back into the deck and running it again. This makes the result even cooler (if that were possible).
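That variant checks out by exhaustive enumeration too. Here is a sketch on a hypothetical toy deck (6 cards, 2 outs, 2-card boards): deal (c1, c2), return only c2 to the deck, then redeal the second card as c3. Since the partly redealt board is, sight unseen, distributed like a fresh board, the EV matches a single run:

```python
from fractions import Fraction

deck = [1, 1, 0, 0, 0, 0]   # hypothetical toy deck: 6 cards, 2 outs
N = len(deck)

def wins(a, b):
    """A 2-card board wins for the drawing player if it holds an out."""
    return 1 if deck[a] or deck[b] else 0

# EV of a single 2-card run (all ordered pairs of distinct cards)
once = Fraction(sum(wins(a, b) for a in range(N) for b in range(N) if a != b),
                N * (N - 1))

# Variant: deal (c1, c2); reshuffle only c2 back; redeal the 2nd card as c3.
# Since c2 went back in, c3 may equal c2 -- only c1 is excluded.
total, count = Fraction(0), 0
for c1 in range(N):
    for c2 in range(N):
        if c2 == c1:
            continue
        for c3 in range(N):
            if c3 == c1:
                continue
            total += Fraction(wins(c1, c2) + wins(c1, c3), 2)
            count += 1

assert total / count == once   # same EV as one plain run
```

The enumeration is just the linearity argument again: each of the two boards, taken alone, is a uniformly random 2-card board.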

This topic is now closed to further replies.