A Puzzle To Die For

Came across this very cool puzzle at Brisbane's Museum of Science: they'd cut a die into nine pieces and it was up to you to put it back together. Being a big fan of polyhedra, I was instantly drawn to the puzzle, but I also like that it forced people to think about how a die is constructed. For example, which faces are opposite which? With only nine pieces it wasn't hard to reassemble, but Katrine and I had fun solving the puzzle nonetheless.

Mathematica, Pi, Mandelbrot, OpenCL, and More

As usual I was looking for something completely different (pages devoted to the game Hex, but that's another story) and came across this intriguing finding by Dave Boll.

Dave was investigating whether the "neck" of the Mandelbrot set was infinitely thin, and in the process realized that pi was showing up in the number of iterations he was performing. What the?!

Curious, I whipped up a quick Mathematica function to test his observation (and as another excuse to play with Mathematica :-)

First I define Mandelbrot's function, here called mandelIter. Nice, I can actually use complex numbers in Mathematica. This function is usually written to include a stopping condition based on a max number of iterations. I chose to omit it here since the points we're investigating are supposed to eventually escape the set.

The point in question is (-0.75 + x i) where x -> 0. The table shows what occurs for ever more "precise" (i.e. closer to 0) values of x. The right-hand column lists pairs of numbers: the first is the time it took to execute mandelIter in seconds; the second is the number of iterations before escaping the set.

Notice anything special about those iteration counts? Yup, we're re-discovering pi: multiply each count by its value of x and you get successively better approximations of pi.

The other thing you may have noticed is that it's going to take a long, long time to calculate pi this way: each additional digit takes 10 times longer to compute.

Naturally, I wondered how to make mandelIter faster.

Mathematica includes a Compile[] function, which basically turns a function into an optimized version, though it's still in pcode, not machine code.

My first attempt was to compile just the Abs[] and z = z^2 + c expressions, but this yielded only trivial speed improvements. The solution was to let Mathematica compile the whole function.

Much faster! Unfortunately, it runs into problems and never returns when I go to 10^-8:

My assumption is that compilation uses machine-level precision instead of Mathematica's default arbitrary precision. Since we're squaring very small numbers, we rapidly exceed machine precision. It's pretty cool that Mathematica realizes this and switches back to arbitrary precision.

Still, I was hoping for more.

One of the features new to Mathematica 8 is the ability to leverage your computer's GPU, i.e. your graphics processor. Modern graphics chips actually have a massive amount of computing power and are highly parallelized. You typically program these chips with either CUDA (which is NVidia specific) or OpenCL (an industry standard). As of Snow Leopard, Apple includes OpenCL drivers with its Macs.

Re-implementing the algorithm was pretty easy: you're writing C code with an OpenCL flavor. Previous versions calculated each row sequentially. This version calculates them in parallel. After all, there are 48 (!) cores on my laptop's GPU :-)

mandelCLsrc takes 5 arguments: two input vectors, one output vector, the length of the vectors, and the max iterations we want to compute. Then I compile it with OpenCLFunctionLoad[]. Notice the "8" parameter: that means we want to leverage 8 cores to process this function.

If you're doing any OpenCL programming, definitely use OpenCLFunctionLoad[]'s ShellOutputFunction -> Print option, it'll help you find errors. While I'm on the subject: if Mathematica stops compiling OpenCL functions or behaves strangely, just restart it. OpenCL gives you very low level memory access, it's not hard to corrupt things.

So how did this solution fare? Disappointingly :-(

I define my two input vectors, xt and yt, and one output vector nt. Sadly, while the first few iterations are promising, it appears that we quickly run out of precision here too. With one ten-thousandth (10^-4) as input, the function should return in roughly 31,420 iterations. Clearly it does not.

I tried switching from float to double but, since my chip apparently doesn't support doubles, this was no help.

Where to go from here? Mathematica's ParallelEvaluate[] function, which splits jobs across multiple Mathematica kernels, won't really help. Another option would be to switch to integer arithmetic for Mandelbrot's function, though I'm not convinced this would improve Mathematica's speed. There's always a chance that I did something wrong in my OpenCL code, or that further optimizations are possible in the Mathematica code. A final option that I couldn't get to work was running the OpenCL code on my CPU instead of my GPU. This is theoretically possible and would leverage the CPU's ability to handle higher-precision doubles.

Implementation in a different language may be the ultimate solution. C, C++, or Java make the most sense to me given their excellent optimizing compilers but, just for kicks, I picked one of my favorite programming languages: ruby.

Here are the results running on ruby 1.9.2 (ruby 1.8.7 is about 1/2 as fast):

x,          iterations, seconds
1,          3,          2.0e-05
1/10,       33,         4.4e-05
1/100,      315,        0.000312
1/1000,     3143,       0.003746
1/10000,    31417,      0.031083
1/100000,   314160,     0.300709
1/1000000,  3141593,    3.02154
1/10000000, 31415927,   30.608127
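The ruby source itself didn't survive into this page; here's a minimal sketch of the escape-time function that produces counts like the table above (method and variable names are my own):

```ruby
# Escape-time count for the point c = -0.75 + x*i.
# No max-iteration cap: for x != 0 these points always escape eventually.
def mandel_iter(x)
  c = Complex(-0.75, x)
  z = Complex(0, 0)
  n = 0
  while z.abs <= 2
    z = z * z + c
    n += 1
  end
  n
end

# Iteration counts for x = 1, 1/10, ..., 1/10000; multiplying each
# count by its x recovers ever-better approximations of pi.
(0..4).each do |k|
  x = 10.0**-k
  n = mandel_iter(x)
  printf("1/%-8d %-10d %.6f\n", 10**k, n, n * x)
end
```

Note that ruby's Complex objects are heap-allocated, which may account for part of the speed gap with compiled Mathematica; unrolling the arithmetic into plain floats would likely help.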

To my surprise modern ruby is significantly slower than compiled Mathematica at this task. I would have expected ruby to fare better.

Interestingly, both ruby and Mathematica bog down significantly when computing 10^-8: neither returns in a meaningful timeframe (I killed both calculations after 30 minutes). Presumably this is the point where both switch to arbitrary-precision floating-point routines in software.

You have 100 feet of fencing: what shape will give you the biggest field?

That's the question I asked my sons today. It's an interesting problem but to make it a little more interesting and apropos given we're in Australia, I asked them: "If you were a crocodile farmer and only had a 100 foot long fence, what shape would you arrange it in to have the most space for your reptiles?"

I got many different answers: Square! Pentagon! Circle! Oval! Most people assumed a square was the right answer, but they didn't all want to pick the same shape.

So we decided to start with a generalization of the square: the rectangle. Thanks to a length of rope (representing our 100-foot fence) we were able to prototype rectangles of various proportions and compare their areas. A square seemed best. Was that so?

We turned to Mathematica (I just bought the newly released version 8, love it) for confirmation:

The maximum area is achieved when a side is equal to 25 feet. In other words: a square gives us the max area of 625 ft^2.
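That result is easy to verify by brute force: a rectangle fenced with 100 ft and one side of length w has area w * (50 - w). A quick sketch of my own (not the actual notebook code):

```ruby
# Area of a rectangle fenced with 100 ft and one side of length w;
# the remaining fence gives the other side a length of 50 - w.
def rect_area(w)
  w * (50 - w)
end

# Brute-force scan of whole-foot side lengths.
best = (1..49).max_by { |w| rect_area(w) }
printf("best side: %d ft, area: %d sq ft\n", best, rect_area(best))
```

The scan lands on a 25 ft side and 625 sq ft, i.e. the square.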

What about other shapes? Mathematica to the rescue!

A triangle makes poor use of fencing and the square is an improvement, but clearly the more sides our fence has, the greater the area... The optimal shape must be a circle!
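This progression is easy to check numerically: a regular n-gon with perimeter P has sides of length s = P/n and area n * s^2 / (4 * tan(pi/n)). Again a sketch of my own, not the Mathematica from the post:

```ruby
include Math

# Area of a regular n-gon built from a fixed length of fence:
# side s = perimeter / n, area = n * s^2 / (4 * tan(pi / n)).
def polygon_area(n, perimeter = 100.0)
  s = perimeter / n
  n * s**2 / (4 * tan(PI / n))
end

[3, 4, 5, 6, 12].each do |n|
  printf("%2d sides: %6.1f sq ft\n", n, polygon_area(n))
end

# The limit of the progression: a circle with circumference 100 ft.
r = 100.0 / (2 * PI)
printf("  circle: %6.1f sq ft\n", PI * r**2)
```

The square lands on the 625 sq ft found above, every added side buys a little more area, and the circle's roughly 796 sq ft is the ceiling.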

Though they saw that the circular field's area was greater than the square field's, this still didn't hit home until I plotted both.

Circles rock!

Fractals without a Computer

We often think of fractals as computer-generated but they're all around us, from coastlines to trees. This video shows how a few ingenious people generate some of the most famous "computer fractals" entirely by analog means: a few cameras and a feedback loop. Watch closely and you'll see Julia sets, Sierpinski gaskets, and more! Another way to remember Benoit Mandelbrot's recent passing.

More details here.

Exploring the Science of Sounds with Wolfram|Alpha and Audacity

Saturday evening at bedtime, our 9 year old son Daniel told me: "Dad, I saw this program on TV. And it was two people talking, just like you and me. It was about sound and sound waves. I want to learn everything about sound. Will you teach me?"

Music to my ears :-)

So I pulled out two tools of choice: Wolfram|Alpha and Audacity. W|A is a computational search engine and, as my boys used to say, hecka useful (using "hecka" doesn't appear to be cool anymore BTW). Audacity is a Swiss Army knife of an audio recorder, editor, analyzer, and effects machine, available for Mac, PC, and Linux.

Our "lesson" went back and forth between these two tools, with the real world intruding once in a while...

First off: What is sound? We talked about compression & expansion of air, how our ears work, and how sound does and doesn't travel through different mediums (like water & space).

Then I introduced the concept of Hertz. Here we used Audacity to generate an ever-increasing tone from 1 to 300 Hz over 60 seconds. When do we start hearing sound? (On my MacBook Pro's speakers, around 100 Hz).

Once Hertz were understood (and we'd talked about the cool low Hz rumbles our subwoofer generates) it was time to break out Wolfram|Alpha, which has some useful sound generation features.

We initially played a simple 440 Hz tone (concert pitch A, the standard tuning reference for guitars and most other instruments).

I'll spare you the lengthy and enjoyable experimentation that followed on W|A :-) We tried many things: different tones, combining tones, combining prime tones, even combining lots of tones and wondering whether a small change in one of the tones was still noticeable.

At this point I decided we needed a real time spectrum analyzer (Audacity can do the job, but it isn't real time). In the old (OK, old old :-) days I'd have used the one on my NeXTstation, very useful in tuning my guitar, but today I found AudioXplorer instead. Pity it's discontinued, but fortunately it's free, open sourced... and for OS X (I'm sure there are many equivalents on Windows).

Here's that 440Hz tone again...

With AudioXplorer in hand, we headed to the piano and "looked" at lots of different frequencies. It was fun to watch Daniel's interest as he played ascending notes and saw the sonogram showing a staircase pattern. (More experimentation ensued).

Next we were back to Audacity: recording Daniel's voice and applying many fun effects to it. Echo, tempo change, reverb, and more. 

Finally, back to Wolfram|Alpha one last time: experimenting with DTMF. I had Daniel press one of the keys on a phone, then we used W|A to reproduce it. Here's the tone for the "1" key. Unfortunately W|A sometimes introduces annoying clicks when generating the sounds, however you can use AudioXplorer instead, which gives correct output.
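Each DTMF key mixes exactly two sine waves, one from the low "row" group and one from the high "column" group; the "1" key is 697 Hz + 1209 Hz. Here's a sketch (the sample rate and helper names are my own choices) that synthesizes the tone and confirms which frequencies it contains:

```ruby
include Math

SAMPLE_RATE = 8000  # assumed sample rate, plenty for DTMF

# A DTMF key is the sum of one low (row) and one high (column) sine.
# The "1" key sits on the 697 Hz row and the 1209 Hz column.
def dtmf_samples(low_hz, high_hz, seconds = 0.5)
  (0...(SAMPLE_RATE * seconds).to_i).map do |i|
    t = i.to_f / SAMPLE_RATE
    0.5 * sin(2 * PI * low_hz * t) + 0.5 * sin(2 * PI * high_hz * t)
  end
end

# Correlate the signal against a test frequency: the two component
# tones show up with large energy, everything else stays near zero.
def energy_at(samples, hz)
  re = im = 0.0
  samples.each_with_index do |s, i|
    angle = 2 * PI * hz * i / SAMPLE_RATE
    re += s * cos(angle)
    im += s * sin(angle)
  end
  sqrt(re**2 + im**2) / samples.length
end

tone = dtmf_samples(697, 1209)
[697, 1209, 900].each { |hz| printf("%4d Hz: %.4f\n", hz, energy_at(tone, hz)) }
```

Feed those samples to any audio output and a phone line will decode them, which is exactly the trick below.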

Once we'd understood how DTMF tones work and how to generate them, we used Audacity to generate a whole phone number and I showed Daniel the trick I'd promised him earlier: how to call someone without hitting any keys on the phone's keypad. We held the handset's microphone up to the laptop's speaker, pressed play, and... Magic! The call went through!

All in all we had great fun with these tools. Daniel and I learned a lot together. What does he want to do now?

GarageBand! :-)

La Belle Géode

La Géode is a fascinating structure in Paris' Cité des Sciences. It's a 36-meter-diameter IMAX movie theater built in 1985 and has the only 12.1 surround sound system in the world (can't say I noticed). More impressive are the ~6,500 triangles that coat the sphere: they are placed in groups of 4 to a tolerance of 1/10 of a millimeter. No triangles actually touch each other, to allow for expansion due to temperature variations. Best of all? The reflections!

The Addiator: Ingenious Mechanical Pocket Calculator

Growing up I loved to play with my father's Addiator, a small mechanical calculator that could easily add and subtract numbers... It wasn't a PDA but a PMA: Pocket Mechanical Assistant :-)

It's based on a very clever idea: parallel toothed metal strips have numbers from 0 to 9 printed on them, as well as a "flag" (in my Addiator's case a red arrow) to indicate that you need to carry the operation over to the next column. The mechanism was originally invented in the late 1800s by a Frenchman named Louis Troncet. I've included a picture from his 1889 patent that shows the inner workings of the device. You can view a number of his patents here.
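The carry mechanism is easy to mimic in code: add to a column and, whenever it overflows past 9, propagate a carry into the next one, just as the red arrow tells you to do by hand. A toy simulation (purely illustrative, my own naming):

```ruby
# Toy Addiator: each column holds a digit 0-9; adding past 9 raises a
# carry "flag" that must be applied to the next column, just as the red
# arrow on the device tells you to perform the carry stroke by hand.
def addiator_add(digits, column, amount)
  digits = digits.dup
  digits[column] += amount
  while digits[column] > 9            # red arrow showing: carry needed
    digits[column] -= 10
    column += 1
    digits[column] = (digits[column] || 0) + 1
  end
  digits
end

# Digits are stored least-significant first: 47 is [7, 4].
number = [7, 4]
number = addiator_add(number, 0, 5)   # add 5 to the ones column
p number                              # 47 + 5 = 52 -> [2, 5]
```

Adding 1 to [9, 9] ripples two carries and yields [0, 0, 1], i.e. 100, which is exactly the chained red-arrow strokes you'd perform on the real device.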

The German company Addiator manufactured devices like mine from the 1920s to the early 1980s, when digital calculators finally made them obsolete. Many other companies made them too, and not just to add and subtract either.

This short video shows you how simple and effective Troncet's invention was.