A couple of months ago, I got the semi-psychotic idea to search for numbers and strings in the digits of the square root of 2. The idea is hardly unique: there has been a fair amount of research in this area, a variation on the infinite monkey theorem.
Why Root 2?
Because it wasn't pi. Pi is probably the most studied math constant, so I wanted to start with something fundamental that wasn't pi.
I needed an irrational constant, one that generates an infinite, non-repeating stream of digits, and root 2 was as good a choice as any. I might argue that root 2 is even more commonly encountered than pi, in that it is the length of the diagonal of a 1x1 square. Surely, 1x1 squares are more common than circles in the modern world.
The first challenge was to find or generate the digits of root 2. A quick search turned up files of up to 10 million digits that had been created by NASA. While a good start, I wanted a lot more than 10 million digits. I ended up using the 1-million-digit file from NASA as a control file, to make sure that any other method I used to generate digits could be validated against NASA's. My starting target was 1 billion digits, but I wanted the ability to go to an arbitrary number.
When I started thinking about how to generate digits with a computer, I realized there were some unique hurdles in trying to calculate arbitrary-precision numbers. First, standard machine data types fail at the task. 64-bit integers give you a large workspace, but floating-point numbers are notorious for losing precision. The math libraries of most languages are accurate to about 15 digits, so those were out.
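To make the precision cliff concrete, here is a quick Ruby illustration (my own sketch, not part of the original experiment): `Math.sqrt` uses the machine's double-precision floats and stops around digit 16, while the standard library's `BigDecimal#sqrt` takes a precision argument and keeps going as far as you ask.

```ruby
require 'bigdecimal'

# Double-precision floats max out around 15-16 significant digits.
puts Math.sqrt(2)   # 1.4142135623730951

# BigDecimal#sqrt takes a precision argument; ask for 50 digits and
# you get at least that many, all correct.
puts BigDecimal(2).sqrt(50).to_s("F")
```

The catch is that BigDecimal's convenience comes with a speed cost that grows as the precision does, which is why getting to a billion digits takes more thought.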
I found more than ten documented methods for computing square roots.
The first attempt to calculate the square root of 2 was actually written by a colleague who was intrigued enough to implement the Babylonian method in Clojure. The first few tests seemed promising and matched the NASA file up to roughly 25,000 digits. Where it broke down was after about 20 iterations, when the run time exceeded 18 hours. Getting to a very large number of digits did not appear to be practical.
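The iteration itself is simple. Here is a sketch of the same Babylonian (Newton's) method in Ruby rather than Clojure, with helper names of my own; scaling by 10**digits keeps the whole computation in exact integer arithmetic, sidestepping floating point entirely.

```ruby
# Integer Newton iteration: converges to floor(sqrt(n)).
def isqrt(n)
  x = n
  y = (x + 1) / 2
  while y < x                 # each pass roughly doubles the correct digits
    x = y
    y = (x + n / x) / 2
  end
  x
end

# First `digits` decimal digits of sqrt(2), as a string, decimal point omitted.
def sqrt2_digits(digits)
  isqrt(2 * 10**(2 * digits)).to_s
end

puts sqrt2_digits(40)
```

The cost, as my colleague found, is that every iteration multiplies and divides numbers as long as the full result, so the work per digit climbs steeply as the target grows.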
The next attempt was one I wrote in Ruby using the duplex method. Duplex has the advantage of computing one digit at a time, and I was hoping this would avoid some of the problems of multiplying numbers millions of digits long. There may have been a flaw in my Ruby implementation, because it went off track past digit 53 when I compared the results with the NASA file. I was careful to follow the example I found on Wikipedia, but I must have made some kind of mistake, because it always diverged.
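For anyone who wants to try the same approach, here is my reading of the digit-by-digit, long-division-style algorithm from the Wikipedia article, as a fresh Ruby sketch (the names are mine, and this is not my original, diverging code). At each step it finds the largest digit d such that (20p + d) * d fits in the current remainder, where p is the result so far:

```ruby
def sqrt_digit_by_digit(n, digits)
  p = 0           # result so far, as an integer
  c = n           # current remainder
  out = ""
  digits.times do
    # Largest digit d with (20p + d) * d <= remainder.
    d = 9.downto(0).find { |x| (20 * p + x) * x <= c }
    c = (c - (20 * p + d) * d) * 100   # subtract, then bring down the next pair (00)
    p = p * 10 + d
    out << d.to_s
  end
  out
end

puts sqrt_digit_by_digit(2, 20)   # digits of sqrt(2), decimal point omitted
```

A correct implementation should match the NASA control file indefinitely; mine drifted after digit 53, so my bug was somewhere in bookkeeping like this.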
Standing on Shoulders
Like any good programmer, I quickly decided to search for working code that someone else had written. I found a very old C++ implementation, as well as a handful of working programs. The first one I tried was PiFast.
PiFast is a Windows program that can calculate many constants in addition to pi. Since my main computer is a MacBook, I loaded it into my VMware virtual machine running XP. As you might imagine, it was more than a little CPU-intensive. After a few short trial runs, I let it loose on a 1-billion-digit computation that, for all intents and purposes, locked up my machine for about 11 hours. At the end, I had a file of 1 billion digits, the first million of which validated perfectly against the NASA file.
To validate and test the file -- a 1-gigabyte file -- I wrote a number of scripts (Ruby is my scripting language of choice these days) to test the digits and manipulate the file in several ways.
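As an example of the sort of script I mean, here is a minimal sketch (the filenames in the usage comment are hypothetical) that streams two digit files in chunks and reports the offset of the first mismatch, so the 1 GB file never has to fit in memory:

```ruby
CHUNK = 1 << 16   # compare 64 KB at a time

# Returns the offset of the first differing byte, or nil if the shorter
# file is a matching prefix of the longer one.
def first_mismatch(path_a, path_b)
  File.open(path_a, "rb") do |a|
    File.open(path_b, "rb") do |b|
      offset = 0
      while (ca = a.read(CHUNK))
        cb = b.read(CHUNK) || ""
        common = [ca.size, cb.size].min
        i = (0...common).find { |j| ca.getbyte(j) != cb.getbyte(j) }
        return offset + i if i
        return nil if cb.size < ca.size   # second file exhausted; prefix matched
        offset += common
      end
      nil
    end
  end
end

# e.g. first_mismatch("sqrt2_1billion.txt", "sqrt2_nasa_control.txt")
```

Run against the NASA control file, a nil result means the first million digits agree; any integer result points straight at the first bad digit.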
In my next post, I'll talk about how I converted digits to letters and what my early searches turned up.