## Flipping less coins is more

Picking up where we left off last time, I want to find coin-flipping games for which the chance you win either equals $\displaystyle \frac{p}{q}$ for some positive integers p and q (with $p < q$), or equals x for some real x (with $0 < x < 1$). As I said in my last post, solving the first of these two problems is just a matter of generalizing our solution for $p = 1$ and $q = 3$: there, we designed a game which we would win $\displaystyle \frac{1}{3}$ of the time by flipping the coin twice and eliminating one possibility, simply restarting the game whenever that possibility occurred. So, if we want to come up with a game which we win p out of every q times, all we have to do is the following three things:

1) Flip k times, with k chosen such that the number of different outcomes (which is just $2^k$) is at least q.
2) Eliminate $2^k - q$ possibilities (by restarting whenever one of them occurs), so that you are essentially left with q possibilities.
3) Designate p (out of those q) outcomes as winning: if one of them actually happens, you win.
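The three steps above can be sketched in a few lines of Python (a toy simulation; the function name and encoding are my own choosing, not anything from the original game description):

```python
import random

def play_pq(p, q):
    """One round of the p-out-of-q game: flip k coins so that 2**k >= q,
    restart whenever the outcome falls among the 2**k - q eliminated ones,
    and declare a win on the first p of the remaining q outcomes."""
    k = 1
    while 2 ** k < q:           # step 1: enough flips for at least q outcomes
        k += 1
    while True:
        # Encode k coin flips as a number between 0 and 2**k - 1.
        outcome = random.getrandbits(k)
        if outcome < q:         # step 2: one of the q surviving outcomes
            return outcome < p  # step 3: win on the first p of them
        # otherwise: an eliminated outcome, so restart

# Empirically, play_pq(1, 3) should win about 1/3 of the time.
wins = sum(play_pq(1, 3) for _ in range(100_000))
print(wins / 100_000)
```

Which p of the q outcomes you designate as winning is of course arbitrary; taking the first p just keeps the bookkeeping trivial.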

And because I already explained in my last post why we were, in a way, forced to do it like this, I hope it speaks for itself without another example. I also want to move on to my second question, that of finding a game which we win with probability x for any real number x between 0 and 1, because I believe that question is much more interesting. And last but not least, this has the great benefit of preventing me from thinking ‘loltldr’ when I have to proofread this post 🙂

Alright, before we start thinking positively towards our goal, let’s hold still for a moment and think in a ‘negative’ way: why should this question be any harder? That is, what prevents our last algorithm from working when it has to deal with real numbers? Well, that should be quite obvious: our way of dealing with fractions depended heavily on the fact that they were fractions; we used the fact that p and q are natural numbers in every (!) step. So at first it seems that we have to come up with some ‘significantly new’ ideas. While this is partly true, there is a way (and this problem is not the only example of the following) to use the things we already know to solve our problem, instead of having to invent something new.

Ok, so we have to move from the rationals to the reals. At present our algorithm doesn’t seem to generalize or modify all that easily. And there are shitloads of reals, but substantially fewer rationals. Still, intuitively speaking, there are a lot of rationals too! While this kinda looks like a dumb, if not stupid, remark, I actually mean the following more rigorous fact: between any two rational numbers, there is another one. That is, they are dense. To connect this with the real numbers: we can approximate every real number arbitrarily closely by fractions. And because ‘arbitrarily close’ is something that, in my experience, most non-mathematicians don’t grok directly, I want to digress just a little.

If you give me some real number, say $\sqrt{2}$, I could say that $\displaystyle \frac{3}{2}$ is pretty close to it. The difference is only $\approx 0.086$. But you might say: well, $\displaystyle \frac{7}{5}$ is even closer to $\sqrt{2}$, because they only differ by $\approx 0.014$. I could then beat you again by offering $\displaystyle \frac{17}{12}$ as an approximation, since $\displaystyle \frac{17}{12} - \sqrt{2} \approx 0.002$. And so on, and so on. We could, if our lives were in fact that dull, beat each other every time by finding fractions that get closer and closer to $\sqrt{2}$. This should give you an intuitive sense of the following, slightly more abstract, fact: for every $\epsilon > 0$, there is some fraction, say $\displaystyle \frac{p}{q}$, such that the difference between $\displaystyle \frac{p}{q}$ and $\sqrt{2}$ is less than $\epsilon$. And this is what is meant by ‘arbitrarily close’. This works for every real number, of course, not just $\sqrt{2}$. To put it another way: for every real number and every margin of error (larger than 0), there exists a rational number that ‘equals’ the real number within that margin of error.
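For the skeptical: Python’s standard library can play this beat-each-other game for us. `Fraction.limit_denominator` finds the fraction closest to a number among all fractions with a bounded denominator, and the bounds below are picked (by me, as an illustration) so as to recover the approximations from the text:

```python
from fractions import Fraction
from math import sqrt

# Ever-better rational approximations of sqrt(2): for each bound,
# limit_denominator returns the closest fraction whose denominator
# does not exceed that bound.
for bound in (2, 5, 12, 100, 1000):
    f = Fraction(sqrt(2)).limit_denominator(bound)
    print(f, abs(float(f) - sqrt(2)))
```

The fractions it finds ($\frac{3}{2}$, $\frac{7}{5}$, $\frac{17}{12}$, …) are exactly the ones from the paragraph above, and the printed errors shrink at every step.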

Enough of this fun-postponing stuff, let’s solve some problems. Going back to our problem: we know that, even though our original algorithm isn’t going to work, we can get really close. Does this help at all? Maybe not. Then why did I bother telling you all this? Because it should give you some intuitive sense of how to proceed. Put a bit more loosely: fractions are static. Their decimal expansion is ‘predictable’, which means nothing more than that it starts to repeat from some point on. This is in sharp contrast to (non-rational) real numbers, whose decimal expansion is infinite and never becomes periodic. But they are made of the same ‘stuff’ and, as we saw, rationals get arbitrarily close to reals. So what does this mean for our algorithm? Maybe the basics are ok, but it’s all just too static. Let’s, for example, look at our solution to the original problem again (the $\displaystyle \frac{1}{3}$-question). While it’s basically a random process, in a way it’s too static: you always start over if you flip two tails, you always win if you flip two heads, and so on. So, to be genuinely vague, what we should do is something like the following:

Step 1) Flip $A_1$ times. You win if one of $p_1$ designated outcomes occurs, you lose if one of $q_1$ other outcomes occurs, and you go to step 2 in the remaining $2^{A_1} - p_1 - q_1$ cases.
Step 2) Flip $A_2$ times. You win if one of $p_2$ designated outcomes occurs, you lose if one of $q_2$ other outcomes occurs, and you go to step 3 in the remaining $2^{A_2} - p_2 - q_2$ cases.
…
Step n) Flip $A_n$ times. You win if one of $p_n$ designated outcomes occurs, you lose if one of $q_n$ other outcomes occurs, and you go to step $(n+1)$ in the remaining $2^{A_n} - p_n - q_n$ cases.
…

This may or may not be some sort of solution. And even if it is, it’s one of the less rigorous ones you could possibly imagine, because there are at least two unsatisfactory things about it. First of all, it’s not clear that for every real number $x$ there exist $A_i, p_i$ and $q_i$ for every $i$ such that this scheme leads to a chance of exactly $x$ of winning (hell, it’s not even clear that there should exist even one non-rational real number for which this works). Second of all, even if we could prove that this scheme always works, wouldn’t it be a lot sweeter if we were able to explicitly generate $A_i, p_i$ and $q_i$ for every $i$ and a given real number $x$? Clearly, I think it would.

So how to go about this? If you have read this post so far and thought about it carefully, you should be able to come up with a line of thought that could answer this question. Basically, our scheme demands that we define $A_i, p_i$ and $q_i$ such that at every next step, we get a better approximation of $x$. It seems natural to hold $A_i$ fixed and to make the chance of having to go to the next step as small as possible. So let’s fix $A_i = A$ and, while still allowing $p_i$ and $q_i$ to vary, assume $2^A - p_i - q_i = 1$ for every $i$, so that the only way of having to go to the next step is by throwing, say, tails $A$ times in a row. This implies that the chance of actually arriving at step n equals the chance of throwing $A$ tails in a row, $(n-1)$ times in succession, which is $\displaystyle \frac{1}{2^{A(n-1)}}$. Multiply this by the chance of getting lucky at step n (which is, almost by definition, $\displaystyle \frac{p_n}{2^A}$), sum over all steps, and you get the chance of winning our game: $\displaystyle \sum_{n=1}^{\infty}{\frac{p_n}{2^{An}}}$. But, and this is the final clue, this sum just defines the number whose base-$2^A$ representation has $p_n$ as its n-th digit!

Reversing this process, and noticing that we can just as well set $A = 1$, we arrive at the following:

Let x be given, and let $p_n$ be the n-th digit when you write $x$ in binary (so $p_n$ is either 0 or 1 for every n). Then, at step n, you go to step (n+1) if you flip tails, you lose if you flip heads and $p_n = 0$, and you win if you flip heads and $p_n = 1$.
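As a sanity check, here is a rough Python simulation of this game (the helper names are mine); for a given $x$, the fraction of games won should hover around $x$:

```python
import random

def binary_digits(x):
    """Yield the binary digits p_1, p_2, ... of x in (0, 1),
    so that x = sum over n of p_n / 2**n."""
    while True:
        x *= 2
        digit = int(x)              # 0 or 1
        yield digit
        x -= digit

def play_real(x):
    """One game: at step n, tails moves on to step n+1; heads wins
    if the n-th binary digit of x is 1, and loses if it is 0."""
    for p_n in binary_digits(x):
        if random.random() < 0.5:   # heads: the game is decided now
            return p_n == 1
        # tails: continue to the next digit

x = 1 / 3                           # 0.010101... in binary
wins = sum(play_real(x) for _ in range(100_000))
print(wins / 100_000)               # should be close to 1/3
```

(With floating-point `x` the digits are only reliable for the first fifty-odd steps, but the chance of a game even reaching step 50 is about $2^{-49}$, so the simulation doesn’t care.)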

Even more, and to explain the title of this post: when you look at this algorithm carefully, you’ll notice that the expected number of times you have to flip a coin is… 2! Independent of the given $x$! (Each step costs exactly one flip, and you reach step n with chance $\displaystyle \frac{1}{2^{n-1}}$, so the expected number of flips is $\displaystyle \sum_{n=1}^{\infty}{\frac{1}{2^{n-1}}} = 2$.) Which, to me, is counterintuitive however I look at it. Still, kinda cool though 🙂
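A throwaway simulation (names mine) backs this up: the game ends on the first heads, so the number of flips is geometric with success chance $\frac{1}{2}$, and the average should land near 2 no matter which $x$ you play for:

```python
import random

def flips_in_one_game():
    """Count the flips in one run of the binary-expansion game.
    The value of x only decides *who* wins, never *when* the game
    ends, so x doesn't even appear here."""
    flips = 1
    while random.random() >= 0.5:   # tails: on to the next digit
        flips += 1
    return flips                    # the first heads ends the game

avg = sum(flips_in_one_game() for _ in range(100_000)) / 100_000
print(avg)                          # should be close to 2
```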