Korg
It's been so long since I studied statistics at school/college/university that I'm rusty to say the least, and I hope you guys can help me out with this.
Something is telling me that these two games must have a different value despite being similar, but if you brilliant folks could confirm that and go into more detail then that would be great :)
Game 1
We wager £100 of our own money on roulette with an RTP (return to player) of 97.3%, and we're given £10 cash on top.
So in this case the game clearly has a positive EV of £7.30: we expect to lose £2.70 on the wagering (2.7% of £100), then receive £10 for free.
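To double-check that arithmetic in code (my assumption: the 2.7% house edge applies evenly to the full £100 wagered):

```python
# Game 1 sanity check: wager £100 at RTP 97.3%, then receive £10 cash.
rtp = 0.973
wagered = 100.0
free_cash = 10.0

expected_loss = wagered * (1 - rtp)   # 2.7% house edge on £100 = £2.70
ev = free_cash - expected_loss        # £10.00 free minus £2.70 expected loss
print(f"Expected loss: £{expected_loss:.2f}, EV: £{ev:.2f}")
```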
Game 2
We wager £50 of our own money on roulette with an RTP of 97.3%, and we're then given a £10 bonus.
This £10 is different from the cash in Game 1, though: we have to wager the bonus 5 times (£50 of play) before it's released as cash.
So we have to place another £50 worth of roulette bets to release the bonus, but never with our own money. If the £10 bonus hits zero we simply stop; there's no need to bet our own money once the bonus is gone. And if, after we've placed £50 worth of bets with the bonus, it has grown to (say) £20, it's now worth £20 in cash.
So in both scenarios we have placed £100 worth of bets and been given £10 for free. But would I be wrong in saying Game 2 has a higher value because of the 'stop-loss' element of the bonus? We never have to bet our own money once the bonus becomes worthless.
Am I right in saying the variance of the game makes a difference, so Game 2 might have a different value if we bet on single numbers rather than red/black? And would Game 2's value change if we lowered or raised our stakes (and thus decreased or increased the variance)?
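For what it's worth, here's how I'd sketch the Game 2 bonus as a simulation to test that intuition. All of this is my own assumption: European roulette probabilities, flat £1 stakes from the bonus only, and we stop the moment the bonus busts or the £50 wagering is done (`simulate_bonus` is just a name I made up):

```python
import random

def simulate_bonus(p_win, payout, stake, bonus=10.0, wr=50.0,
                   trials=100_000, seed=1):
    """Monte Carlo estimate of E[final bonus] for the Game 2 rules."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        b, wagered = bonus, 0.0
        while b > 0 and wagered < wr:
            s = min(stake, b)        # can't stake more bonus than we hold
            wagered += s
            if rng.random() < p_win:
                b += s * payout      # win: net gain of stake * payout
            else:
                b -= s               # lose: stake is gone
        total += b                   # 0 if busted, else released as cash
    return total / trials

# Even-money bet (red/black): 18/37 chance, pays 1:1 (low variance)
low_var = simulate_bonus(18 / 37, 1, stake=1.0)
# Single number: 1/37 chance, pays 35:1 (high variance)
high_var = simulate_bonus(1 / 37, 35, stake=1.0)
print(f"E[bonus] red/black: £{low_var:.2f}, single number: £{high_var:.2f}")
```

If the stop-loss intuition is right, the higher-variance single-number run should come out worth more, because busting early means less money is ground through the house edge before the wagering requirement is met.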
Sorry if I'm not being clear! I'd appreciate any insight and a refresher on those old statistics classes!