The Samovar


Fast fractals with Python and numpy

This will be of little interest to people who regularly read my blog, but might be of some interest to people who find their way here by the power of Google.

The standard way to compute fractals like the Mandelbrot set using Python and numpy is to use vectorisation and perform each operation on the whole grid of points at once. The problem is that this is slower than it needs to be, because you keep doing computations on points that have already escaped. This can be avoided, though, and the version below is about 3x faster than the standard numpy approach.
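
For reference, the standard vectorised approach looks something like the sketch below (a minimal version of my own, with made-up names; it recomputes every point at every step, which is exactly the waste the trick below avoids):

from numpy import *

def mandel_standard(n, m, itermax, xmin, xmax, ymin, ymax):
    # complex coordinate at every pixel of the n x m grid
    ix, iy = mgrid[0:n, 0:m]
    c = linspace(xmin, xmax, n)[ix] + complex(0,1)*linspace(ymin, ymax, m)[iy]
    z = zeros(c.shape, dtype=complex)
    img = zeros(c.shape, dtype=int)
    for i in range(itermax):
        z = z*z + c                       # iterate every point, escaped or not
        escaped = abs(z) > 2.0
        img[escaped & (img == 0)] = i+1   # record the first escape time only
        z[escaped] = 2.0                  # clamp so escaped points don't overflow
    return img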

The trick is to create a new array at each iteration that stores only the points which haven’t yet escaped. The slight complication is that if you do this you need to keep track of the x, y coordinates of each of the points as well as the values of the iterate z. The same trick can be applied to many types of fractals and makes Python and numpy almost as good as C++ for mathematical exploration of fractals.

I’ve included the code below, both with and without explanatory comments. The 400×400 image below, computed with 100 iterations, took 1.1s on my 1.8GHz laptop:

[image: mandel.png]

Uncommented version:

from numpy import *

def mandel(n, m, itermax, xmin, xmax, ymin, ymax):
    ix, iy = mgrid[0:n, 0:m]
    x = linspace(xmin, xmax, n)[ix]
    y = linspace(ymin, ymax, m)[iy]
    c = x+complex(0,1)*y
    del x, y
    img = zeros(c.shape, dtype=int)
    ix.shape = n*m
    iy.shape = n*m
    c.shape = n*m
    z = copy(c)
    for i in range(itermax):
        if not len(z): break
        multiply(z, z, z)
        add(z, c, z)
        rem = abs(z)>2.0
        img[ix[rem], iy[rem]] = i+1
        rem = ~rem
        z = z[rem]
        ix, iy = ix[rem], iy[rem]
        c = c[rem]
    return img

Commented version:

from numpy import *

def mandel(n, m, itermax, xmin, xmax, ymin, ymax):
    '''
    Fast mandelbrot computation using numpy.

    (n, m) are the output image dimensions
    itermax is the maximum number of iterations to do
    xmin, xmax, ymin, ymax specify the region of the
    set to compute.
    '''
    # The point of ix and iy is that they are 2D arrays
    # giving the x-coord and y-coord at each point in
    # the array. The reason for doing this will become
    # clear below...
    ix, iy = mgrid[0:n, 0:m]
    # Now x and y are the x-values and y-values at each
    # point in the array, linspace(start, end, n)
    # is an array of n linearly spaced points between
    # start and end, and we then index this array using
    # numpy fancy indexing. If A is an array and I is
    # an array of indices, then A[I] has the same shape
    # as I and at each place i in I has the value A[i].
    x = linspace(xmin, xmax, n)[ix]
    y = linspace(ymin, ymax, m)[iy]
    # c is the complex number with the given x, y coords
    c = x+complex(0,1)*y
    del x, y # save a bit of memory, we only need c now
    # the output image coloured according to the number
    # of iterations it takes to get to the boundary
    # abs(z)>2
    img = zeros(c.shape, dtype=int)
    # Here is where the improvement over the standard
    # algorithm for drawing fractals in numpy comes in.
    # We flatten all the arrays ix, iy and c. This
    # flattening doesn't use any more memory because
    # we are just changing the shape of the array, the
    # data in memory stays the same. It also affects
    # each array in the same way, so that index i in
    # array c has x, y coords ix[i], iy[i]. The way the
    # algorithm works is that whenever abs(z)>2 we
    # remove the corresponding index from each of the
    # arrays ix, iy and c. Since we do the same thing
    # to each array, the correspondence between c and
    # the x, y coords stored in ix and iy is kept.
    ix.shape = n*m
    iy.shape = n*m
    c.shape = n*m
    # we iterate z->z^2+c with z starting at 0, but the
    # first iteration makes z=c so we just start there.
    # We need to copy c because otherwise the operation
    # z->z^2 will send c->c^2.
    z = copy(c)
    for i in range(itermax):
        if not len(z): break # all points have escaped
        # equivalent to z = z*z+c but quicker and uses
        # less memory
        multiply(z, z, z)
        add(z, c, z)
        # these are the points that have escaped
        rem = abs(z)>2.0
        # colour them with the iteration number, we
        # add one so that points which haven't
        # escaped have 0 as their iteration number,
        # this is why we keep the arrays ix and iy
        # because we need to know which point in img
        # to colour
        img[ix[rem], iy[rem]] = i+1
        # ~rem is the array of points which haven't
        # escaped; in numpy ~A for a boolean array A
        # is the NOT operation.
        rem = ~rem
        # So we select out the points in
        # z, ix, iy and c which are still to be
        # iterated on in the next step
        z = z[rem]
        ix, iy = ix[rem], iy[rem]
        c = c[rem]
    return img

if __name__=='__main__':
    from pylab import *
    import time
    start = time.time()
    I = mandel(400, 400, 100, -2, .5, -1.25, 1.25)
    print('Time taken:', time.time()-start)
    I[I==0] = 101
    imshow(I.T, origin='lower')
    imsave('mandel.png', I.T, origin='lower')
    show()


My mathematical genealogy
February 16, 2009, 7:58 pm
Filed under: Academia, Frivolity, Mathematics

The Mathematics Genealogy Project has a huge database of mathematicians, showing who was supervised by whom, and what students everyone had. If you’re a mathematician, you can use this to trace back who your mathematical ancestors were, and it can be quite fun. Below is a chart I made of my own mathematical genealogy. It’s nice to see exciting names from the history of mathematics and science there, such as Poisson, Laplace, Lagrange, d’Alembert, Euler, the Bernoullis, Leibniz and Huygens (I stopped at that point). The dates are when they finished their doctorate, or, if they didn’t do one, when they lived.

[image: my mathematical genealogy chart]



Smallest number of keypresses
March 6, 2008, 1:39 am
Filed under: Frivolity, Mathematics

OK, this is not the entry I promised in my last blog entry, but…

What is the quickest way to type 250 c’s in a row in a standard Windows text field? I make it 24 keypresses; can anyone beat that?

c, c, (press and hold) ctrl, a, c, v, v, v, v, v, a, c, v, v, v, v, v, a, c, v, v, v, v, v
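
For anyone who wants to check the counting, here is a quick simulation of my own (holding ctrl counts as a single press, as in the sequence above):

# c, c, hold ctrl, then three rounds of a, c, v, v, v, v, v
text, clipboard, presses = "", "", 0
text += "c"; presses += 1
text += "c"; presses += 1
presses += 1                              # press and hold ctrl
for _ in range(3):
    clipboard = text; presses += 2        # a selects all, c copies
    text = ""                             # the selection is replaced...
    for _ in range(5):
        text += clipboard; presses += 1   # ...by the first of five pastes
print(len(text), presses)                 # prints: 250 24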



Countdown numbers game in Python

One of the things about being ill is that you have to spend a lot of time in bed with nothing much to do. Having watched the whole first series of the Sopranos, I had to find something else. So here’s the result. I revisited an old program I wrote many years ago to solve the Countdown numbers game.

In this game, you’re given six numbers between 1 and 100 and a target number between 100 and 999, and you have 30 seconds to try to make the target from the six numbers using the operations plus, minus, times and divide.

I originally wrote a program to solve this many years ago (when I was about 14 I think), but the algorithm I used was pretty horrible. I worked out by hand all the possible arrangements of brackets you could have for six numbers, and then tried each operator and number in each appropriate slot. It worked, but it was ugly programming.

Recently I’ve been learning Python for an academic project, and so I thought I may as well try rewriting it in Python. I think the solution I’ve come up with is nicer than any of the solutions I’ve found on the internet (mostly written in Java or C), although having written it I found this paper which uses a very similar solution to mine (but in Haskell rather than Python).

Python programmers might get something from the minimal code below (all comments and docs stripped out), or you can take a look at the full source code here, including detailed comments and docs explaining the code and algorithm.

My ideal (as always with Python) was to write a program you could just look at and understand the source code without comments, but I don’t think I achieved that. I’d be interested if a more experienced Python programmer could do so. Let me know.

This version is an incomplete extract from the slower program, and is supposed to be understandable without explanations (it takes about 40 seconds to find all solutions, far too slow for Countdown):

def ValidExpressions(sources, operators=standard_operators, minimal_remaining_sources=0):
    for i, value in enumerate(sources):
        yield TerminalExpression(value=value, remaining_sources=sources[:i]+sources[i+1:])
    if len(sources) >= 2+minimal_remaining_sources:
        for lhs in ValidExpressions(sources, operators, minimal_remaining_sources+1):
            for rhs in ValidExpressions(lhs.remaining_sources, operators, minimal_remaining_sources):
                for f in operators:
                    try: yield BranchedExpression(operator=f, lhs=lhs, rhs=rhs, remaining_sources=rhs.remaining_sources)
                    except InvalidExpressionError: pass

def TargetExpressions(target, sources, operators=standard_operators):
    for expression in ValidExpressions(sources, operators):
        if expression.value == target:
            yield expression

This version is actually complete; it’s from the faster program, which needs its comments to be understood (it takes about 15 seconds to run, good enough to win Countdown):

sub = lambda x, y: x-y
def add(x, y):
    if x <= y: return x+y       # only allow x+y with x<=y, to avoid duplicates
    raise ValueError
def mul(x, y):
    if 1 < x <= y: return x*y   # likewise, and multiplying by 1 is pointless
    raise ValueError
def div(x, y):
    if y > 1 and x % y == 0: return x//y   # only exact division, and not by 1
    raise ValueError
standard_ops = [add, sub, mul, div]

def expressions(sources, ops=standard_ops, minremsources=0):
    # yields triples (expression, remaining sources, value)
    for i in range(len(sources)):
        yield (sources[i], sources[:i]+sources[i+1:], sources[i])
    if len(sources) >= 2+minremsources:
        for e1, rs1, v1 in expressions(sources, ops, minremsources+1):
            for e2, rs2, v2 in expressions(rs1, ops, minremsources):
                for o in ops:
                    try: yield ([o, e1, e2], rs2, o(v1, v2))
                    except ValueError: pass

def findfirsttarget(target, sources, ops=standard_ops):
    for e,s,v in expressions(sources,ops):
        if v==target:
            return e
    return []
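
For illustration, here is a minimal driver of my own (the numbers and target are from a famous Countdown round; the result comes back as a raw expression tree of nested lists and operator functions):

if __name__ == '__main__':
    # the famous 952 round: make 952 from 25, 50, 75, 100, 3 and 6;
    # one solution is (((100+6)*3)*75 - 50)/25
    print(findfirsttarget(952, [25, 50, 75, 100, 3, 6]))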


From playing games to the nature of knowledge
August 1, 2007, 1:36 pm
Filed under: Academia, Epistemology, Games, Mathematics, Neuroscience, Philosophy

I’ve been reading some interesting things about games, computers and mathematical proof recently. A couple of months ago, it was announced that the game of checkers (or draughts) had been “solved”: if both players play perfectly then the game ends in a draw. That’s sort of what you’d expect, but it’s not entirely obvious. It might have been the case that getting to go either first or second was a big enough advantage that with perfect play either the first or second player would win. So for example in Connect Four, if both players play perfectly then the first player will always win.

Checkers is the most complicated game to have been solved to date. The number of possible legal positions in checkers is 10^20 (that is, a one followed by twenty zeroes). By comparison, tic-tac-toe has 765 different positions, connect four about 10^14, chess about 10^40 and Go about 10^170 (some of these are only estimates).
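
To get a feel for what "solved" means, here is a toy example of my own (nothing to do with Chinook): it solves a trivial subtraction game by exhaustive search, computing the perfect-play outcome from every position:

from functools import lru_cache

# Toy game: players alternately take 1, 2 or 3 counters from a pile;
# whoever takes the last counter wins.
@lru_cache(maxsize=None)
def player_to_move_wins(pile):
    # a position is a win if some move leaves the opponent in a losing position
    return any(not player_to_move_wins(pile - take)
               for take in (1, 2, 3) if take <= pile)

print(player_to_move_wins(21))   # False: the first player loses a 21-counter pile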

There’s a strange thing about the terminology used. A game being "solved" doesn’t mean that there’s a computer program that can play the game perfectly. All it means is that we know that if the players did play perfectly, then the game would end in a certain way. So for example with checkers, it might be the case that you could beat the computer program Chinook (which was used to prove that perfect play ends in a draw). Counterintuitively, the way to do this would be to play a move that wasn’t perfect. The number of possible positions in checkers is too large for the researchers to have computed the best move in every single one; they limited the number of computations they had to perform by using mathematical arguments to show that certain moves weren’t perfect without actually having to play through them. So, by playing a move that you knew wasn’t perfect (which means that if you played it against a perfect opponent you would certainly lose), you would force the computer into a position it hadn’t analysed completely, and then you might be able to beat it.

This is a bit like in chess, where a very good player can beat a good computer program by playing a strategy that exploits the way the computer program works. Chess programs work by looking as many moves ahead as possible and considering what the player might do and what the computer could do in response, etc. However, the combinatorial complexity of the game means that even the fastest computers can only look so many moves ahead. By using a strategy which is very conservative and gives you an advantage only after a large number of moves, you can conceal what you’re doing from the computer, which has no intuitive understanding of the game: it doesn’t see the advantage you’re working towards because it comes so many moves in the future.

So at the moment, there is no perfect player for either chess or checkers, but the top computer programs can beat essentially any opponent (in chess this is true most of the time but possibly not every time). This raises the question: how would you know if you had a computer program that played perfectly? For games like chess and checkers, the number of possible positions and games that could be played is so enormous that even storing them all in a database might take more space than any possible computer could have (for instance the number of possible positions might be more than the number of atoms in the universe). Quantum computation might be one answer to this if it ever becomes a reality, but an interesting suggestion was recently put forward in a discussion on the foundations of mathematics mailing list.

The idea is to test the strength of an opponent by allowing a strong human player to backtrack. The human player can take back any number of moves he likes. So for example, you might play a move thinking it was very clever and forced your opponent into a losing position, but then your opponent plays something you didn’t think of and you can see that actually your move wasn’t so great after all. You take back your move and try something different. It has been suggested that a good chess player can easily beat most grand-master-level chess programs if they are allowed to use backtracking. The suggestion is that if a grand master chess or checkers player was unable to beat the computer even using backtracking, then the computer is very likely perfect. It’s not a mathematical proof by any means, but the claim is that it would be very convincing evidence, because the nature of backtracking means that any weaknesses in the computer program become highly amplified, and any strengths it has become much weakened. If it could still beat a top player every time, then with very high probability there are no weaknesses to exploit in it, and consequently it plays perfectly. (NB: My discussion in this paragraph simplifies the discussion on the FOM list for the sake of brevity; take a look at the original thread if you’re interested in more.)

This leads to all sorts of interesting questions about the nature of knowledge. At one end, we have human knowledge based on our perceptions of the world and our intuitive understanding of things. Such knowledge is obviously very fallible. At the other end, we have fully rigorous mathematical proof (which is not what mathematicians actually produce yet, but that’s another story). In between, there is scientific knowledge, which is inherently fallible but forms part of a self-correcting process. Scientific knowledge always gets better, but is always potentially wrong. More recently, we have probabilistic knowledge, where we know that something is mathematically true with very high probability. Interactive proof systems are an example of this.

The argument above about backtracking in chess suggests a new sort of knowledge based on the interaction between human intuition and computational and mathematical knowledge. These newer forms of knowledge and the arguments based on them are very much in accord with my view of the pragmatic nature of knowledge. My feeling is that interactions such as this between computational, non-intuitive knowledge, and human intuitive understanding will be very important in the medium term, between now and when artificial intelligence that is superior to our own both computationally and intuitively becomes a reality (which is I feel only a matter of time, but might not be in my lifetime). At the moment, these new forms of knowledge are really only being explored by mathematicians and computer scientists because the human side is not yet well enough understood. It will be interesting to see how this interaction between human and computational/mathematical understanding will develop as our understanding of how the human brain works improves.

If you found this entry interesting, you might also be interested in two unfinished essays I have on epistemology without truth and the philosophy of mathematics.



You do knot see that very often
May 9, 2007, 2:43 pm
Filed under: Mathematics

As I was clearing away some computer cables, I saw a rather unusual knot.

[image: knot-photo.jpg]

It’s unusual because you don’t normally see such a topologically complicated knot. The most common knot seen in tangled wires is the trefoil knot, the simplest possible knot:

[image: trefoil-knot.jpg]

A word on this picture: mathematical knots are always loops; imagine cutting the knot in the picture above and pulling apart the two cut ends to see what it would look like in wires. Click on the picture for more information than you probably want to know about this knot.

So anyway, returning to my cable knot. I wondered to myself: just how unusual is this knot, and which knot is it? This is actually a more difficult question than you might think. In general, the problem of working out if any two knots are the same is very difficult. Mathematically, two knots are the same if you can go from one to the other by pulling and rearranging but without cutting. In fact, determining this in general leads to some of the most abstruse and difficult areas of modern mathematics, including the subject of my PhD thesis, hyperbolic geometry.

Fortunately, this is a fairly simple knot and I thought it was likely I’d be able to identify it. The first thing to do was to make the knot as simple as possible. I used the KnotPlot software to input my knot, and it rearranged it to make it a bit prettier:

[image: knot-3d.jpg]

It also computed the knot’s HOMFLY polynomial for me, which is a good way of identifying knots. For every knot, you can calculate an algebraic expression called the HOMFLY polynomial; this expression stays the same however you rearrange the knot. One way of identifying knots is to compute this polynomial, look up in a table which knots have the same polynomial, and then see if you can rearrange your knot by hand to show that it’s the same knot. This worked for my knot.

In the picture above, there are 8 ‘crossings’ (that is, points where one thread crosses another). But you can rearrange the knot fairly easily so that it only has 7 crossings (the loop at the bottom right that goes through the loop at the top right can be jiggled around a bit so that there are 2 crossings in this area rather than the 3 you can see in the picture). Every knot has a ‘crossing number’, which is the smallest number of crossings you can rearrange the knot to have. The trefoil knot has a crossing number of 3, and it’s the only knot with only 3 crossings. Having rearranged my knot, I knew that it had a crossing number of 7 or less. So I loaded up the tables of all knots whose crossing number is 7 or less using KnotInfo. It turns out there are only 14 of them. Looking up the HOMFLY polynomial I found mine: it’s called 7_2, which means it’s the second knot in the table with a crossing number of 7. Here is the nicest way of drawing it:

[image: knot-line.jpg]

In answer to my original question: there are seven knots in the table which are simpler than this knot, and seven (including this one) which are equally complex, measured in terms of the crossing number. So in some sense it’s jointly the 8th most likely knot to come across.

If you’re interested in the mathematical theory of knots, try this Wikipedia page.

Oh, and in case you wanted to know, the HOMFLY polynomial for this knot is:

a^2 + a^2 z^2 + a^4 z^2 + a^6 + a^6 z^2 - a^8

Now back to work…



Nice new proof of Godel’s theorem
March 24, 2007, 2:18 pm
Filed under: Mathematics

I’ve had this sitting around for a while. Harvey Friedman wrote a new proof of Godel’s theorem on the FOM mailing list. I’m writing this entry so that I have a reminder of it for myself, but others might find it interesting too.

Intuitively speaking, Godel’s theorem says that every (sufficiently powerful, consistent) mathematical system has statements which are true but can’t be proved within that system. The normal way to think of this is that you can’t prove the statement "This statement cannot be proved", but nonetheless it’s true. You can’t prove it, because if you could then it would be true, and then it would say that you couldn’t prove it, which would be a contradiction. And it must be true, because if it were false then there would have to be a proof of it, and so it would have to be true, which is a contradiction. This might seem reminiscent of the paradoxical sentence "This sentence is false", but there is a subtle difference which I probably don’t want to go into in too much detail here. (The difference is between truth and provability.)

Friedman’s proof uses Turing machines. Without going into any technical details – a Turing machine is basically just a computer program. A computer program can be represented as a single number. Just take the file on your computer; it’s a string of 1s and 0s, which is just a (very large) number in binary. So we can number every possible computer program. Write TM[n] for the computer program whose corresponding number is n. Actually, I’m going to restrict myself to computer programs which take one input (say the number m), and return one output. Given some computer program and an input m, the program may never stop running. For example, the program

DO
LOOP

never stops running. The custom when talking about Turing machines is to say that the program doesn’t halt.
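
The numbering is easy to make concrete (a quick sketch of my own):

# any program text is a string of bits, i.e. a single (very large) number
src = "while True: pass\n"                  # a program that never halts
n = int.from_bytes(src.encode(), "big")     # the number of this program
back = n.to_bytes((n.bit_length() + 7) // 8, "big").decode()
assert back == src                          # ...and back again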

So here is Friedman’s proof. We write a computer program P that does the following: given an input m, it searches for a proof of the statement "TM[m] does not halt when given input m". That is, it searches for a proof that program m, when given input m, will never stop running. If there is such a proof, then P will eventually find it and halt; otherwise P itself never stops running.
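
In pseudo-Python, P looks something like this (a sketch only: enumerate_proofs and proves are hypothetical stand-ins for a real proof enumerator and proof checker over some fixed formal system):

def P(m):
    # run through every candidate proof, shortest first
    for proof in enumerate_proofs():        # hypothetical
        if proves(proof, f"TM[{m}] does not halt when given input {m}"):
            return proof                    # so P halts at input m iff a proof exists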

Here’s where it gets clever. The program P is itself just a computer program that takes a single input m, and so it must have a number, call that number k. In other words, P=TM[k] for some number k. Keep that in mind.

So what happens if we run program P with input k? We can work this out by considering two alternatives: either there is or there isn’t a proof of the statement "TM[k] does not halt when given input k". If there is a proof, then running program P with input k will find it, and therefore P halts at input k. But the statement that was proved says precisely that P=TM[k] does not halt at input k, so we would have proved a false statement, which is a contradiction (assuming our system only proves true statements).

Therefore, there is no proof that "TM[k] does not halt at input k". But then P’s search for a proof never succeeds, so P=TM[k] really does run forever at input k: the statement is true, yet it has no proof.

If you want to know more (that’s just a very sketchy and technically inaccurate overview), then take a look at Harvey’s message to FOM above. If you want to understand it but can’t (e.g. you might ask yourself having read it: what is PA and what is Con(PA)?), just post a message here. If you found it interesting, you might also be interested in reading up on The Halting Problem (which is closely related).