Filed under: Academia, Epistemology, Neuroscience, Philosophy | Tags: science-technology fallacy
On the comp-neuro mailing list, James Schwaber writes about the science-technology fallacy:
A lot of the discussion about the ‘right way to model’ or what to model may be a version of what my friend Mike Gruber has termed the science-technology fallacy, the idea that you have to understand a process analytically before you can use it, and he always quotes Carnot here: thermodynamics owes more to the steam engine than ever the steam engine owes to thermodynamics. Obviously, humans used and controlled fire for 100,000 years before Lavoisier explained what fire was, and planes flew for decades before there was a theory to explain how they did it. In fact theory ‘demonstrated’ that heavier-than-air flight was impossible.
This is an interesting point. Is it true? If so, what is the point of science?
I have some ideas, but I’d like to see what others have to say.
Filed under: Academia, Neuroscience | Tags: neural network, Programming, python, simulation
I’m part of a small group working on a new scientific endeavour, and we need your help!
We’re writing a new piece of software to simulate the behaviour of networks of neurons (nerve cells in the brain). Well, we’ve solved the differential equations, worked out the technical problems, written the code, but now we’re stuck. We’ve hit a barrier, our own CDD (creativity deficit disorder).
We need a name.
And it needs to be so damned snappy that as soon as someone hears it they’ll want to download our software and stop using the competition (with names like Neuron, XPP and Nest).
So can you help? Your scientific community needs you!
Boring technical details follow for those who are interested (not necessary for thinking up a cool name): The software is going to be written in Python, with some bits possibly in C++, using SciPy and vectorised code for efficient computations. The emphasis is on the code being easy to use for people without much experience in programming, and easy to extend. Initially, the software will focus on networks of simple model neurons rather than detailed anatomical models (although we might get round to adding that later).
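For the curious, here's a minimal sketch of what "vectorised code" means in practice. This is nothing like our actual code, and every parameter value is made up purely for illustration: all the neurons in a leaky integrate-and-fire network are updated at once with NumPy array operations rather than a Python loop over neurons.

```python
import numpy as np

# Illustrative only: parameters and model are made up, not our project's code.
N = 1000          # number of neurons
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest, v_reset, v_thresh = 0.0, 0.0, 1.0

rng = np.random.default_rng(0)
v = rng.uniform(0.0, 1.0, N)        # membrane potentials, one per neuron
spike_counts = np.zeros(N, dtype=int)

for step in range(1000):
    I = rng.uniform(0.0, 2.0, N)            # random input current
    v += dt / tau * (v_rest - v + I)        # Euler step for dv/dt = (v_rest - v + I)/tau
    spiked = v >= v_thresh                  # boolean mask of neurons that fired
    spike_counts[spiked] += 1
    v[spiked] = v_reset                     # reset fired neurons
```

The point is that each line updates all 1000 neurons at once, which is what makes interpreted Python fast enough for this kind of simulation.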
Filed under: Academia, Epistemology, Games, Mathematics, Neuroscience, Philosophy | Tags: Checkers, Chess, draughts
I’ve been reading some interesting things about games, computers and mathematical proof recently. A couple of months ago, it was announced that the game of checkers (or draughts) had been “solved”: if both players play perfectly then the game ends in a draw. That’s sort of what you’d expect, but it’s not entirely obvious. It might have been the case that going either first or second was a big enough advantage that, with perfect play, one player or the other would always win. In Connect Four, for example, if both players play perfectly then the first player always wins.
Checkers is the most complicated game to have been solved to date. The number of possible legal positions in checkers is about 10^20 (that is, a one followed by twenty zeroes). By comparison, tic-tac-toe has 765 different positions, Connect Four about 10^14, chess about 10^40 and Go about 10^170 (some of these are only estimates).
There’s a strange thing about the terminology used. A game being “solved” doesn’t mean that there’s a computer program that can play the game perfectly. All it means is that we know how the game would end if both players did play perfectly. So for example with checkers, it might still be possible to beat the computer program Chinook (which was used to prove that perfect play ends in a draw). Counterintuitively, the way to do this would be to play a move that wasn’t perfect. The number of possible positions for checkers is too large for the team to have computed the best move in every single one. They limited the number of computations they had to perform by using mathematical arguments to show that certain moves weren’t perfect without actually having to play through them. So, by playing a move that you knew wasn’t perfect (which means that if you played it against a perfect opponent you would certainly lose), you would force the computer into a position it hadn’t analysed completely, and then you might be able to beat it.
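Checkers is far too big to solve in a blog post, but tic-tac-toe is small enough to solve by brute force, which nicely illustrates what "solved" means: we compute the value of the game under perfect play, without needing to store a best move for every reachable position. This sketch is a plain exhaustive search, nothing like the actual Chinook proof:

```python
from functools import lru_cache

# The board is a 9-character string ('X', 'O' or ' '), read row by row.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

@lru_cache(maxsize=None)
def solve(board, player):
    """Value of the game under perfect play: +1 X wins, 0 draw, -1 O wins."""
    for a, b, c in WINS:
        line = board[a] + board[b] + board[c]
        if line == 'XXX':
            return 1
        if line == 'OOO':
            return -1
    if ' ' not in board:
        return 0                      # full board, no winner: a draw
    values = []
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + player + board[i + 1:]
            values.append(solve(child, 'O' if player == 'X' else 'X'))
    # X picks the best value for X, O the worst
    return max(values) if player == 'X' else min(values)

print(solve(' ' * 9, 'X'))  # 0: with perfect play, tic-tac-toe is a draw
```

Note that `solve` tells you the outcome, not the moves: just like the checkers result, knowing the value of the starting position is not the same thing as having a perfect player in hand.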
This is a bit like in chess, where a very good player can beat a good computer program by playing a strategy that exploits the way the computer program works. Chess programs work by looking as many moves ahead as possible and considering what the player might do and what the computer could do in response, etc. However, the combinatorial complexity of the game means that even the fastest computers can only look so many moves ahead. By using a strategy which is very conservative and gives you an advantage only after a large number of moves, you can conceal what you’re doing from the computer, which has no intuitive understanding of the game: it doesn’t see the advantage you’re working towards because it lies so many moves in the future.
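Here's a toy illustration of that horizon effect. The little game tree below is entirely made up: move A looks fine to the heuristic but loses deep down, while move B looks poor but wins. A depth-limited minimax search chooses differently depending on how far ahead it can see:

```python
# Each node is (heuristic_score, children); a leaf has no children.
# Heuristic scores are from the maximising player's point of view.
LEAF = lambda s: (s, [])
tree = (0, [
    # Move A: looks good now (+5), but hides a loss three plies deep.
    (+5, [(+5, [LEAF(-100)])]),
    # Move B: looks bad now (-5), but hides a win three plies deep.
    (-5, [(-5, [LEAF(+100)])]),
])

def minimax(node, depth, maximizing):
    score, children = node
    if depth == 0 or not children:
        return score                  # fall back on the heuristic at the horizon
    vals = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(vals) if maximizing else min(vals)

def best_move(node, depth):
    _, children = node
    vals = [minimax(c, depth - 1, False) for c in children]
    return vals.index(max(vals))

print(best_move(tree, 1))   # shallow search picks move A (index 0)
print(best_move(tree, 3))   # deeper search sees the trap and picks move B (index 1)
```

Real chess engines are vastly more sophisticated than this, but the principle is the same: whatever lies beyond the search depth is invisible, and a patient human strategy can hide its payoff out there.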
So at the moment, there is no perfect player for either chess or checkers, but the top computer programs can beat essentially any opponent (in chess this is true most of the time but possibly not every time). This raises the question: how would you know if you had a computer program that played perfectly? For games like chess and checkers, the number of possible positions and games that could be played is so enormous that even storing them all in a database might take more space than any possible computer could have (for instance the number of possible positions might be more than the number of atoms in the universe). Quantum computation might be one answer to this if it ever becomes a reality, but an interesting suggestion was recently put forward in a discussion on the foundations of mathematics mailing list.
The idea is to test the strength of an opponent by allowing a strong human player to backtrack. The human player can take back any number of moves they like. So for example, you might play a move thinking it was very clever and forced your opponent to play a losing game, but then your opponent plays something you didn’t think of and you can see that actually your move wasn’t so great after all. You take back your move and try something different. It has been suggested that a good chess player can easily beat most of the grandmaster-level chess programs if allowed to use backtracking. The suggestion is that if a grandmaster chess or checkers player was unable to beat the computer even using backtracking, then the computer is very likely perfect. It’s not a mathematical proof by any means, but the claim is that it would be very convincing evidence, because the nature of backtracking means that any weaknesses in the computer program become very highly amplified, and any strengths it has become much weakened. If it could still beat a top player every time, then with very high probability there are no weaknesses to exploit in it and consequently it plays perfectly. (NB: I have simplified the discussion on the FOM list for the sake of brevity, but take a look at the original thread if you’re interested in more.)
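The mechanics of backtracking are simple to sketch: keep a stack of every position seen, and let the human pop back to any earlier one and try a different move. Everything below is a stand-in (the "moves" are just labels, not a real chess implementation), but it shows the shape of the protocol:

```python
class BacktrackingGame:
    """A game history where the human may take back any number of moves."""

    def __init__(self, start=()):
        self.history = [start]          # every position reached so far

    @property
    def position(self):
        return self.history[-1]

    def play(self, move):
        # Stand-in for real game rules: a position is just the move sequence.
        self.history.append(self.position + (move,))

    def backtrack(self, n=1):
        # Take back the last n moves; earlier positions stay on the stack.
        del self.history[-n:]

g = BacktrackingGame()
g.play('e4')
g.play('e5')
g.backtrack()           # on reflection, that reply looked bad
g.play('c5')
print(g.position)       # ('e4', 'c5')
```

The asymmetry described above falls out of this structure: the human explores many branches of the tree at no cost, while the computer must survive every one of them.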
This leads to all sorts of interesting questions about the nature of knowledge. At one end, we have human knowledge based on our perceptions of the world and our intuitive understanding of things. Such knowledge is obviously very fallible. At the other end, we have fully rigorous mathematical proof (which is not what mathematicians do yet, but that’s another story). In between, there is scientific knowledge, which is inherently fallible but forms part of a self-correcting process. Scientific knowledge always gets better, but is always potentially wrong. More recently, we have probabilistic knowledge, where we know that something is mathematically true with a very high probability. Interactive proof systems are an example of this.
The argument above about backtracking in chess suggests a new sort of knowledge based on the interaction between human intuition and computational and mathematical knowledge. These newer forms of knowledge and the arguments based on them are very much in accord with my view of the pragmatic nature of knowledge. My feeling is that interactions such as this between computational, non-intuitive knowledge and human intuitive understanding will be very important in the medium term, between now and when artificial intelligence that is superior to our own both computationally and intuitively becomes a reality (which I feel is only a matter of time, but might not be in my lifetime). At the moment, these new forms of knowledge are really only being explored by mathematicians and computer scientists, because the human side is not yet well enough understood. It will be interesting to see how this interaction between human and computational/mathematical understanding develops as our understanding of how the human brain works improves.
Hmm, it strikes me that three posts in one night (and at least four in a row) about food suggest I’m a rather frivolous person. Well, hey! I’ve been on holiday for a week. What can I say? Anyway, in partial mitigation of my consumption-related posting:
I have a plan, and that plan is to read a book by every winner of the Nobel Prize in Literature. (Well, not the poetry. I can’t hack poetry for some reason.) So… opinions? Pointless exercise or not? Any of them I should particularly prioritise or avoid? Will I even be able to get copies of books by all of them? Some of them seem pretty obscure.
So far, I’ve only read 16 of them. How many have you read? (This could be a meme, but this is a very serious post, so let’s not make it one.)
Before I wrote these entries I read this paper:
The timing of action potentials in sensory neurons contains substantial information about the eliciting stimuli. Although the computational advantages of spike timing–based neuronal codes have long been recognized, it is unclear whether, and if so how, neurons can learn to read out such representations. We propose a new, biologically plausible supervised synaptic learning rule that enables neurons to efficiently learn a broad range of decision rules, even when information is embedded in the spatiotemporal structure of spike patterns rather than in mean firing rates. The number of categorizations of random spatiotemporal patterns that a neuron can implement is several times larger than the number of its synapses. The underlying nonlinear temporal computation allows neurons to access information beyond single-neuron statistics and to discriminate between inputs on the basis of multineuronal spike statistics. Our work demonstrates the high capacity of neural systems to learn to decode information embedded in distributed patterns of spike synchrony.
Filed under: Neuroscience
Not many blog entries recently because I’ve been (dare I say it) working… and ill too. Anyway, I found some rather cool visual illusions (not entirely unconnected with work), and I thought I’d share some of the best. Lots more on this page.
None of these pictures are animated. The last one is a little subtle compared to the first two.