Archive for June, 2010

An elementary proof of the irrationality of Pi

June 29, 2010

I've been wondering for a while now whether it exists and, if so, what it looks like. By elementary I mean something like: purely number-theoretic. Because I know there are a few easy proofs that \pi is not rational, but all of the ones I've seen make use of some calculus (like computing an integral or so). So, my question: can anyone show me the irrationality of \pi with just the very basics of number theory, or tell me why it is or shouldn't be possible? For example, a few years ago I tried to prove it using the formula \dfrac{\pi}{2} = \displaystyle \sum_{k=0}^\infty \dfrac{k!}{(2k + 1)!!} along with basic arguments like the ones you may find in some proofs of the irrationality of e = \displaystyle \sum_{k=0}^\infty \dfrac{1}{k!}. Although (obviously) I wasn't able to prove the desired result, I did not find a reason why it shouldn't be possible in principle. Even more, I had the gut feeling I was almost there. So I might give it another shot in the future, but hopefully that doesn't stop you, my dearest reader, from helping me if you've got some info on this.
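As a quick numerical sanity check of that series (a sketch for illustration only, nothing to do with an actual irrationality proof), note that consecutive terms satisfy t_{k+1}/t_k = (k+1)/(2k+3), so no factorials are needed:

```python
import math

# Sum pi/2 = sum_{k>=0} k!/(2k+1)!! using the term ratio
# t_{k+1}/t_k = (k+1)/(2k+3), starting from t_0 = 0!/1!! = 1.
def pi_over_2(terms: int = 60) -> float:
    total, t = 0.0, 1.0
    for k in range(terms):
        total += t
        t *= (k + 1) / (2 * k + 3)
    return total

print(pi_over_2())   # ~1.5707963...
print(math.pi / 2)   # for comparison
```

Since the ratio tends to 1/2, sixty terms are far more than enough for double precision.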

I´m not dead

June 25, 2010

A lot of people (yes Jojo, that's to ensure your anonymity) complained that I didn't post enough. So to show my liveliness:

Q: What did the farmer say when he lost his tractor?

A: Where’s my tractor?

Winning suffices. Apparently.

June 19, 2010

Beauty is the first test: there is no permanent place in this world for ugly football.

G. H. Hardy

— A Mathematician’s Apology, London, 1941

 

Maybe it's math-unrelated, but it is still rant-worthy material: our Orange.
I'm not sure if you watched them play against Japan today, but to give you a little summary: we sucked. Big time. While that was highly disappointing, what really frustrated me were the people on TV commenting on it. Basically, their story was:

‘Well, it was not very good, but we won and that’s all that matters. Even more, winning when you perform sucky is how you win a tournament. The Germans and Italians are kind of used to playing defensively too, and they always do well at tournaments, while the times we played beautifully didn’t get us very far. So ugliness suffices and is maybe even preferable.’

Maybe it’s not necessary, but let me tell you why that’s all completely senseless.

I know that playing ugly is sometimes the rational thing to do. If you do not possess the qualities to play beautifully, or your opponent plays in such a way that attractive football is counterproductive, well, let’s do it a different way then. But, and this is something that no-one seems to grasp, that doesn’t imply any of the above statements. Most importantly, playing ugly is totally different from playing utterly badly! It amazes me that everyone (unconsciously?) believes or acts as if they are equal, and I’m not even going to bother to explain why they’re not. Second of all, yes, by playing defensively we do look more like Italy. But thinking that that makes us more likely to win the World Cup is clearly nonsense, because we also look more like the team of Mozambique (which has very decent defenders*). More to the point, our qualities don’t lie with our defence, so it’s not a good plan to lean back on it. If you realise that the one thing we excel at is attacking, you don’t have to be Einstein to realise that that’s what we should do! Another blatant fallacy is thinking that the fact that we won both matches so far is all that matters. Of course, you have to win on your off-days (which is what we’re usually quite bad at) and I’m more than happy with the results so far. But the fact that we won despite not playing well doesn’t imply that we’re doing great. It just implies that we didn’t play well. So we’ve got every reason to fear the knock-out stages, and we should count ourselves lucky that we won against Denmark by an own goal and against Japan because of a keeper’s error! And in any case, even if some teams don’t ‘allow’ us to play attractive football, how on earth is Japan, with all due respect, one of these?

*I have no idea

Nothing is impossible (?)

June 19, 2010

Something’s been bothering me for a while now. While my statistical intuition is usually quite sharp, in this case I’m apparently wrong, although I’ve never really heard a convincing argument as to why exactly. So I’ll try to explain my problem, and hopefully someone with a sound argument can reply. Basically, my confusion starts with the sentence ‘Some event that has a chance of 0 of happening can still happen.’ Let P(x) be the chance that x happens. To me, ‘P(x) = 0’ is equivalent to the statement ‘We are certain that x will not be the case’. So saying that P(x) = 0 doesn’t exclude x from happening feels just like a contradictio in terminis to me. For example, consider the chance that you throw a (normal) die and a 7 turns up*. But this is a silly example, because everybody agrees that the chance of this happening is 0 and also that it is impossible. But now consider the following: if we pick a random number, the chance we pick a certain number N is clearly 0. Since this holds for every N, you might argue that this implies I’m wrong, because apparently something happened which had a chance of 0. To me this just implies that it’s not possible to choose a number randomly. A different example: we flip a coin until heads turns up. Is it possible that this game never ends? To me, it’s not. Of course, it could take arbitrarily long, but it can’t take an infinite number of flips. Can it?
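To put some numbers on the coin-flip example (a quick sketch, not an argument for either side): the chance the game is still running after n flips is 2^{-n}, which vanishes, while the expected number of flips is finite:

```python
# Flip a fair coin until heads: P(no heads in the first n flips) = (1/2)^n,
# and the expected number of flips is sum_{n>=1} n * (1/2)^n = 2.
tail = [0.5 ** n for n in range(1, 61)]
expected = sum(n * 0.5 ** n for n in range(1, 61))

print(tail[9])    # P(no heads in first 10 flips) ~ 0.000977
print(expected)   # ~ 2.0
```

So 'never ends' sits in the tail of all these vanishing probabilities, which is exactly where the confusion lives.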

While trying to empathize with the people who believe that ‘impossible events are possible’, I came up with the following: let’s say we pick a random real number from the unit interval. Since there are uncountably many reals and only countably many rationals, the chance to pick a fraction is 0. But you could still say: ‘well, it’s only random. And fractions actually do exist in abundance. So there is no obvious reason why it shouldn’t be possible to choose a fraction by accident.’ While this idea makes sense to some extent, I believe it’s wrongheaded. Mostly because, like I said above, it just seems to be based on the belief that it is actually possible to pick a number in some truly random way.

In any case, this is not meant as a convincing argument against the idea of the impossible becoming possible. I’m just trying to show where I get confused, and I am in need of some explanation. And after all, I’ve never seen anything in my life happen that had a zero chance of happening. So the burden of proof lies with the people who claim that 2 is a random integer ;).

*maybe it actually could happen by some weird quantum effect, but mathematically speaking, it’s not an issue

PS. Terry Tao is going to set up another mini-Polymath Project next month. Check it out.

All your linearly independent spanning sets are belong to us!

June 17, 2010

(for the ignorant) Yes yes, I own zhe matrices! At last, I think I passed my Linear Algebra exam. And since I’ve got 5 more exams to study for in the following week, (probably) no more updates until I’ve finished them all. Bai bai

Our world will never be the same

June 11, 2010

A steady loss of visitors since day 1. Not a single comment yet. Not even a nomination for a Bloggie. But don't worry; BLAG is still going strong! With today's one-month anniversary as one of its many, many highs! There will be celebrations all over the world. In Johannesburg, South Africa, a football game will be played in honour of the weblog that changed the way people think and made world peace, finally, look like an achievable short-term goal. In Amsterdam, The Netherlands, people will write blogs about the blog that, arguably, is one of the greatest things that happened to humankind since the invention of the banana box.

Yes, people. Today will no longer be the day of the Alcatraz escape. Nor will anyone ever remember June 11th as the date Troy was burned. Starting this year, the eleventh day of the sixth month will be known as BLAGDAY! Spread the word!

Only integers behave nicely

June 8, 2010

This week I solved the following PEN-problem:

Find out which pairs of real numbers (\alpha, \beta) have the property that \alpha [\beta n] = \beta [\alpha n] for all natural numbers n, where [x] denotes the largest integer \le x.

At first glance, this problem looked quite easy to me. While I’m not sure if that’s a good sign in general, in this case it was advantageous. Because some problems that are stated in the form ‘find all x such that y’ are only solvable by providing a way of finding those x, where that way of finding immediately shows that you’re right. This might be vague, so let me give you an example. Consider the problem of finding all primes less than 10000. I guess that no one will just write down the list of primes. No, probably everyone will use the sieve of Eratosthenes. And not only will that sieve give you all primes below 10000, it is/should be immediately clear why it works and generates all these primes. So using the sieve of Eratosthenes gives the answer and proves you’re right in one blow. While that’s awesome, in general it’s not that easy to straightforwardly find a solution that strong. Going back to the problem at hand, you may feel that it begs for an impossibility proof, except for trivial or ‘degenerate’ cases. In other words, we already ‘found’ our solution, we only have to prove it’s right. In still other words, we already know how or where to start (for example: assume a true non-degenerate case) and where to end (in contradiction). And knowing what a proof could reasonably look like, before you’ve even started, is a big advantage, I think.

Ok, so far the preliminary remarks. To make the above concrete, the degenerate cases are:

1) \alpha \beta = 0
2) \alpha, \beta \in Z
3) \alpha = \beta

It’s a trivial matter to show that all these cases indeed solve \alpha [\beta n] = \beta [\alpha n]. Now, I kind of simplified things in the above paragraph, because there is definitely not just one way to prove that these cases are the only ones that solve the above equation. We could assume that 1), 2) and 3) all don’t hold and end up in contradiction, we could assume 0 < |\alpha| < |\beta| and try to prove \alpha, \beta \in Z, we could (since proving that \alpha is integral iff \beta is, is trivial) assume that \alpha, \beta \notin Z and show that \alpha = \beta must hold, etc. There are tons of distinct ways to end up where we want to end up. But, in a way, they’re all the same too, of course.
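The 'trivial matter' can also be checked numerically; here's a quick illustrative sketch (floating point, so only a sanity check, not a proof):

```python
import math

# Check alpha*[beta*n] == beta*[alpha*n] for the degenerate cases:
# 1) alpha*beta = 0,  2) alpha, beta integers,  3) alpha = beta.
def holds(alpha, beta, n_max=100):
    return all(
        alpha * math.floor(beta * n) == beta * math.floor(alpha * n)
        for n in range(1, n_max + 1)
    )

assert holds(0, math.pi)                   # case 1
assert holds(3, 7)                         # case 2
assert holds(math.sqrt(2), math.sqrt(2))   # case 3
print("degenerate cases check out")
```

A non-degenerate pair such as (\sqrt{2}, 1 + \sqrt{2}) already fails at n = 1, which matches where this post is heading.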

One of the first things I did was assume that both \alpha and \beta are positive. This is something I do a lot, because, well, I’m just not that fond of negative numbers. Moreover, usually the general case quite easily reduces to the positive case, and here it felt like a natural thing to do. Although it turned out that we actually do lose generality, without an extra argument, by assuming positivity, it was still a useful pet problem, so let’s assume positivity for now anyway. The next thing I tried was dividing both sides by \beta[\beta n] to get our first result: \dfrac{\alpha}{\beta} = \dfrac{[\alpha n]}{[\beta n]} is rational! While this is not much, it accounts for something. This inspired me to do a few things. First of all, I filled in n = 1 and found out for which real numbers this equation holds, by letting \alpha = a + \gamma, \beta = b + \delta for integral a, b such that 0 < \gamma, \delta < 1. Second of all, I noticed that this argument doesn’t work if a = b = 0, because then we just have \dfrac{\alpha}{\beta} = \dfrac{0}{0}, and there’s not much to conclude from that. So I tried to prove ab \neq 0, in which I succeeded (see below). Third of all, since we know that \dfrac{\alpha}{\beta} \in Q, it seems natural to (try to) show that \alpha and \beta can’t be distinct, rational, non-integral numbers. As it turns out, this is quite straightforward, so to solve our problem, the following suffices:

Let \alpha,\beta be distinct, irrational numbers. Then \alpha [\beta n] \neq \beta [\alpha n] for at least 1 positive integer n.

Like I said, proving that \alpha, \beta > 1 (while assuming positivity and distinctness) is possible, so here is its proof: Let \alpha < \beta. If \alpha < 1, then \beta < 1 as well, since otherwise we would have \displaystyle \frac{[\alpha]}{[\beta]} = 0 < \frac{\alpha}{\beta}, which is a contradiction. So we may assume 0 < \alpha < \beta < 1. Let N be the smallest integer such that \beta N > 1. Then \alpha N must be larger than 1 as well, because otherwise we would have the same contradiction as above. But we now have: \displaystyle \frac{[\alpha N]}{[\beta N]} = \frac{1}{1} = 1 = \frac{\alpha}{\beta}, which implies \alpha = \beta. So if \alpha and \beta are distinct, positive (although I hardly used the positivity condition) and both smaller than 1, \alpha [\beta n] = \beta [\alpha n] can’t hold for all natural numbers n.

Now that we showed that, we can definitely say we are on our way. A cool way to go further is by using induction. When you see a (moderately easy) equation that should hold for all natural numbers n, it’s often possible to show that it indeed holds by using induction on n. However, here it might be possible to go the other way around, by using induction on \alpha! More precisely: assume that \alpha [\beta n] \neq \beta [\alpha n] for some natural number n. Show that a natural number m exists for which (\alpha + 1) [\beta m] \neq \beta [(\alpha + 1) m] (except in degenerate cases). And since we proved our base case (0 < \alpha, \beta < 1), this suffices. I must say I think this way of attacking our problem is both weirdly awesome and awesomely weird. But when I first tried it, the computations didn’t make any sense, so I chose a different approach. Although I might give it some thought again, since I have a better understanding of the problem now. Anyway, for me this wasn’t the way to go, so let’s try something else.

The good part is: we didn’t extensively use the, to me, most obvious way of approaching this problem yet. So let’s do that. I believe that, whenever you’re dealing with the greatest-integer function, it’s usually a great idea to split what’s inside it into two parts: an integer part and a fractional part. For example: [9.87] = [9 + .87] = 9 + [.87]. So let’s, like I said above briefly, assume \alpha = a + \gamma, \beta = b + \delta with a, b \in Z and 0 < \gamma, \delta < 1. Then we have:

\alpha [\beta n] = \beta [\alpha n]
(a + \gamma) [(b + \delta) n] = (b + \delta) [(a + \gamma) n]
(a + \gamma)(bn + [\delta n]) = (b + \delta)(an + [\gamma n])
(a + \gamma) [\delta n] + \gamma bn = (b + \delta) [\gamma n] + \delta an
\alpha [\delta n] + \gamma bn = \beta [\gamma n] + \delta an
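That chain of rewrites can be machine-checked: the difference of the two sides is preserved at every step, so the first and last equations are equivalent. Here's an illustrative sketch using exact rational arithmetic (so no floating-point doubts):

```python
import random
from fractions import Fraction
from math import floor

# Check: with alpha = a + g, beta = b + d (a, b integers, 0 < g, d < 1),
#   alpha*[beta*n] - beta*[alpha*n] == (alpha*[d*n] + g*b*n) - (beta*[g*n] + d*a*n)
# i.e. the rewritten equation is equivalent to the original one.
random.seed(0)
for _ in range(1000):
    a, b, n = random.randint(1, 9), random.randint(1, 9), random.randint(1, 50)
    g = Fraction(random.randint(1, 99), 100)
    d = Fraction(random.randint(1, 99), 100)
    alpha, beta = a + g, b + d
    lhs = alpha * floor(beta * n) - beta * floor(alpha * n)
    rhs = (alpha * floor(d * n) + g * b * n) - (beta * floor(g * n) + d * a * n)
    assert lhs == rhs
print("rewrite chain verified")
```

Of course this only tests the algebra, not the classification of solutions.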

We know that this must hold for all n \in \mathbb{N}. In particular, it should hold for n = 1. When n = 1, the above equation simplifies to:

\gamma b = \delta a

So we know that this must be the case. But then \gamma bn and \delta an always cancel each other out, so the equation becomes:

\alpha [\delta n] = \beta [\gamma n ]

And the great news is that we basically solved this equation already, when we showed the impossibility of distinct, positive reals \alpha, \beta smaller than 1 such that \alpha [\beta n] = \beta [\alpha n] for all n. But, for completeness’ sake: let N be the smallest positive integer such that \max(\gamma N, \delta N) \ge 1. Then \min(\gamma N, \delta N) \ge 1 as well, because otherwise we would have 0 on one side of the equation while the other side is non-zero. Moreover, by the minimality of N, both \gamma N and \delta N are smaller than 2, so [\gamma N] = [\delta N] = 1. Since our equation is true for all n \in \mathbb{N}, it must be true for N too. And for this N, it further simplifies to:

\alpha = \beta

And we’re done.

Periodicity, Part 1

June 5, 2010

Today (and, as my title sort of implies, later as well), we’re going to talk about 2 functions and find out if and when they are periodic. And since the data I have available makes me believe that my thoughts on these issues are ‘different’, in the sense that most people would solve these problems in a rather different way, it would be cool to have some feedback. So if you think that I’m ‘wrong’, because an easier approach is available, do tell. I will use the notation that the n-th iterate of a function is f^n(x), which, recursively defined, is f(f^{n-1}(x)), where f^0(x) = x. If the initial value doesn’t really depend on x, as is true for the function I’ll deal with today, I will just write f^n for the n-th iterate. The two functions I’ll talk about are the following:

1) f(x) = \displaystyle \frac{2 + x}{1 - 2x}, with initial value 2.
2) f(x) = \displaystyle 1 - \left|1 - 2x\right|, with initial value x \in [0, 1]

And because today I’ll only be talking about the first f(x), there will hopefully not be any confusion. The problem to prove is: the sequence f^n never becomes periodic, not even from some point on. This, btw, is a problem from the PEN-book. Also, since it’s been a while since I thought about this problem, I can’t really recall the things I tried that didn’t work. So I’ll just present my solution in a slightly more formal manner than in previous posts. Still, I’ll try to keep things really basic, so you might encounter stuff that seems trivial. You can then just safely skip those parts.

Theorem. f^a = f^b implies a = b

Proof of Theorem. To prove our Theorem, we first need to show that there is no (nonnegative) integer n for which f^n = 0. To prove this claim, we need a few lemmas.

Lemma 1. f^n \in Q for all n \in \mathbb{N}. (Q is the set of rational numbers)

Proof of Lemma 1. Note that f^0 = 2 \in Q. We proceed via induction. Assume f^k = \dfrac{p}{q} for some k \ge 0. Let p' = 2q + p, q' = q - 2p. Then:

f^{k+1} = f(f^k)
= \dfrac{2 + p/q}{1 - 2p/q}
= \dfrac{2q + p}{q - 2p}
= \dfrac{p'}{q'}

Lemma 2. f is injective

Proof of Lemma 2. Assume f(a) = f(b). Then:

\displaystyle \frac{2 + a}{1 - 2a} = \frac{2 + b}{1 - 2b}
(2 + a)(1 - 2b) = (2 + b)(1 - 2a)
2 - 4b + a - 2ab = 2 - 4a + b - 2ab
5a = 5b
a = b

Lemma 3. Since we know that f is injective, f must have an inverse on its range. This inverse equals: f^{-1}(x) = \dfrac{x - 2}{1 + 2x}

Proof of Lemma 3. Let \dfrac{x - 2}{1 + 2x} =y. Then:

x - 2 = y + 2xy
x - 2xy = 2 + y
x(1 - 2y) = 2 + y
x = \dfrac{2 + y}{1 - 2y}
= f(y)
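For the skeptical, here's a one-minute sanity check (illustrative only) that this formula really inverts f, done with exact rationals so rounding can't fool us:

```python
from fractions import Fraction

# f and its claimed inverse; Fractions keep every step exact.
def f(x):
    return (2 + x) / (1 - 2 * x)

def f_inv(y):
    return (y - 2) / (1 + 2 * y)

for x in [Fraction(2), Fraction(-4, 3), Fraction(2, 11), Fraction(7, 5)]:
    assert f_inv(f(x)) == x and f(f_inv(x)) == x
print("inverse checked")
```

(The samples avoid the poles x = 1/2 and y = -1/2, where f and f^{-1} are undefined.)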

Note that we can use this inverse function to calculate f^n, provided we know the value of f^{n+1}. So assume k to be the smallest (nonnegative) integer for which f^{k+1} = f(f^k) = 0. Then: f^k = f^{-1}(f^{k+1}) = f^{-1}(0) = -2 = -f^0.
We’ll generalize this last equation using induction. Assume that, for some i with: k \ge i \ge 0, f^{k-i} = -f^i. Then:

f^{k-(i+1)} = f^{-1}(f(f^{k-(i+1)}))
= f^{-1}(f^{k-i})
= \dfrac{f^{k-i} - 2}{1 + 2f^{k-i}}
= \dfrac{-f^{i} - 2}{1 - 2f^{i}}
= -\dfrac{2 + f^{i}}{1 - 2f^i}
= -f^{i+1}

So we may conclude that, for every i with k \ge i \ge 0, f^{k-i} = -f^i. Now we have to consider two cases separately, depending on whether k is even or odd:

Case 1) k = 2m. Then we must have f^m = -f^m, which implies that f^m = 0 and this obviously contradicts the definition of k.

Case 2) k = 2m+1. Then we must have:

f^m = -f^{m+1}
= -f(f^m)
= -\dfrac{2 + f^m}{1 - 2f^m}
f^m(1 - 2f^m) = -(2 + f^m)
f^m - 2(f^m)^2 = - 2 - f^m
2(f^m)^2 - 2f^m - 2 = 0
f^m = \dfrac{1 \pm \sqrt{5}}{2} \notin Q

So we are forced to conclude that there is no (nonnegative) integer k satisfying f^{k+1} = 0. We may now proceed to the proof of our Theorem. For this part of the proof we assume k to be the smallest (nonnegative) integer for which a different integer k' exists such that f^k = f^{k'}. Then we must have k = 0, because otherwise we would have f^{k-1} = f^{-1}(f^k) = f^{-1}(f^{k'}) = f^{k'-1}, which clearly contradicts our (new) definition of k. So f^{k'} = f^k = f^0 = 2. But this too leads to a contradiction if we apply our inverse function one more time:

f^{k'-1} = f^{-1}(f^{k'})
= f^{-1}(2)
= 0

And we already proved that this equation is not solvable for k' \in \mathbb{N}. Thus we must conclude that there are no two different integers a and b for which f^a = f^b. In other words: f^a = f^b implies a = b.
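The Theorem can also be watched in action: iterating f from f^0 = 2 with exact rational arithmetic, the first iterates are all distinct, as they must be. A small illustrative sketch:

```python
from fractions import Fraction

# Iterate f(x) = (2 + x)/(1 - 2x) from f^0 = 2 exactly,
# and confirm the iterates seen so far are pairwise distinct.
def f(x):
    return (2 + x) / (1 - 2 * x)

iterates = [Fraction(2)]
for _ in range(15):
    iterates.append(f(iterates[-1]))

assert len(set(iterates)) == len(iterates)  # no repeats, hence no period yet
print(iterates[:4])  # [Fraction(2, 1), Fraction(-4, 3), Fraction(2, 11), Fraction(24, 7)]
```

This checks only finitely many iterates, of course; the proof above is what rules out repeats forever.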

Self Delusion

June 2, 2010

After I (think I) solved an interesting problem that I found online, say on a forum, I usually go back to that forum to see other people's solutions. To check if I 'forgot' some ideas and could've solved the problem differently, or how people jot down their solutions if they had similar ideas, etc. What I find remarkable, but maybe that's my bad, is that it often happens that I have to conclude that my solution is just wrong. Although my ideas almost always coincide with (some of) the 'right' ideas, I just don't seem to be able to make things rigorous (although I think I did), or I make some stupid mistake in a standard calculation or so. Apparently I'm extremely sloppy on the details. And that would not be that big of a problem if I realised it and were able to fix it by thinking very hard about every step I take in a proof, to make sure I didn't make any mistakes. But the catch is, I think I do just that. I guess that, when I have an idea that could work, I start to obsess over that idea so much that I stop being objective and start thinking I'm right about anything related to it. And this is bad. Really bad.

On a different, but maybe not totally unrelated note, I love to work with actual numbers. So with, say, 1233 (which btw equals 12^2 + 33^2), instead of a's, b's or x's. Although I think it's not directly a problem if I, for example, first calculate some values of a function to see how fast it grows, the more general issue of only being able to solve a problem when it's really specific, and not thinking about generalizations unless they're obvious, might be a problem.
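For fun, a quick brute-force search (purely illustrative, and restricted to four-digit numbers split into two two-digit halves) for numbers with the same property as 1233:

```python
# Find four-digit numbers 100*a + b (10 <= a <= 99, 0 <= b <= 99)
# that equal the sum of the squares of their halves, like 1233 = 12^2 + 33^2.
hits = [100 * a + b
        for a in range(10, 100)
        for b in range(100)
        if a * a + b * b == 100 * a + b]
print(hits)  # [1233, 8833]
```

So 1233 has exactly one four-digit companion, 8833 = 88^2 + 33^2.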

But then I realised that Ramanujan was like this too.