The Samovar


The science-technology fallacy
August 31, 2008, 12:47 am
Filed under: Academia, Epistemology, Neuroscience, Philosophy

On the comp-neuro mailing list, James Schwaber writes about the science-technology fallacy:

A lot of the discussion about the ‘right way to model’ or what to model may be a version of what my friend Mike Gruber has termed a version of the science-technology fallacy, the idea that you have to understand a process analytically before you can use it, and he always quotes Carnot here–thermodynamics owes more to the steam engine than ever the steam engine owes to thermodynamics. Obviously, humans used and controlled fire for 100,000 years before Lavoisier explained what fire was, and planes flew for decades before there was a theory to explain how they did it. In fact theory ‘demonstrated’ that heavier-than-air flight was impossible.

This is an interesting point. Is it true? If so, what is the point of science?

I have some ideas, but I’d like to see what others have to say.



Belief and Pragmatism: God, ideals and addiction

Recently I wrote a post that provoked much disagreement, claiming that almost nobody believes in God. On further reflection, I think I can make a clearer statement (although sadly with a less punchy headline). A less divisive way of putting it might be: what should we make of statements like

He believes X but acts as if he doesn’t,

and

He believes X but doesn’t act consistently with those beliefs.

These are quite curious statements, especially the first. I find these statements interesting because I want to try to interpret them as a Pragmatist. Briefly, a Pragmatist tries to only talk about things to the extent that they have real world consequences, and tries to give things meaning relative to those consequences. From this point of view, if we take the first statement as accurate, then we have a real problem. The first clause says that he believes X, but the second denies any connection between X and his actions. To define a pragmatic (I’ll use small p from here on) conception of ‘belief’ would mean defining it in terms of its consequences, but the statement says there are none.

This might seem like a fairly irrelevant problem, but I think it’s more significant than it seems because people make statements like this all the time. I’ll illustrate with three examples:

  1. The “champagne socialist” claims to believe in an ideal of equality, but acts to get as much as he can for himself.
  2. The theist who regularly sins and just generally doesn’t seem to pay anything more than lip service to his faith.
  3. The drug addict who can see clearly that his habit will kill him but keeps on doing it anyway.

The first two seem problematic to me, and the third less so although it shares many similar features.

Now at this point I should say that you could easily disagree that these are problematic. For example, most people aren’t pragmatists and so wouldn’t see the difficulty in finding a pragmatic definition of belief to be a problem. For my part, I feel that something like pragmatism is inescapable – if we’re to make sense then what we talk about has to be grounded in experience. William James sometimes referred to pragmatism as “radical empiricism”, which seems appropriate.

In the entry I wrote about belief in God, I took a point of view which I think is roughly equivalent to what Dennett calls the Intentional Stance (although I must say I haven’t read his book, so I might be mistaken). As I understand it, the intentional stance says: hypothesise that an individual is acting rationally with respect to his beliefs and desires. We can’t know what the beliefs and desires are (they’re internal), but we can see the actions and from them attempt to infer what the beliefs and desires are. Even though we know that individuals are not rational, thinking about them in this way might tell us something useful.

Because I’ve been making PowerPoint slides for presentations recently, I couldn’t help but turn this idea into one of those silly diagrams with boxes and arrows. In the diagram, ‘internal model’ refers to the model of the world that an individual has, which is continually updated according to their experiences. The ‘plan’ box gets arrows from ‘goals’ and ‘internal model’ because the planner is a rational maximiser, attempting to find actions which, at least in the individual’s internal model of the world, would best fit its goals.

[Figure: Model 1 – The intentional stance?]
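
Since the diagram is halfway to pseudocode already, here is a minimal Python sketch of the same loop. Everything in it (the class name, the fixed action set, the scoring function) is my own illustrative assumption, not a serious model:

    # Toy rendering of Model 1: an agent updates an internal model from
    # experience and picks the action whose predicted outcome best fits
    # its goals. All names here are illustrative assumptions.

    class IntentionalAgent:
        def __init__(self, actions, goal_value):
            self.actions = actions        # available actions
            self.goal_value = goal_value  # scores a predicted outcome against the goals
            self.model = {a: None for a in actions}  # internal model: action -> predicted outcome

        def update_model(self, action, outcome):
            # The results of actions feed back into the internal model of the world.
            self.model[action] = outcome

        def plan(self):
            # Rational maximiser: choose the action whose predicted outcome,
            # according to the internal model, best fits the goals.
            return max(self.actions, key=lambda a: self.goal_value(self.model[a]))

    agent = IntentionalAgent(["work", "rest"],
                             goal_value=lambda o: 0.0 if o is None else o)
    agent.update_model("work", 1.0)   # experience: working paid off
    agent.update_model("rest", 0.2)
    print(agent.plan())               # -> "work"

The intentional stance, on this rendering, is the observer’s problem: watching only the sequence of chosen actions and trying to infer goal_value and model from them.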

I think this way of looking at things can tell us a lot, but it has some difficulties with the example of the drug addict. The drug addict doesn’t appear to act as if he believes his habit will damage him, although he says he knows it will, wishes to stop, etc. His claims about what he believes also seem fairly uncontroversial in some sense, and are entirely about real world things with real world consequences (unlike the case of belief in God). One response would be to say that his goals are different to what you might think: an addict has the overriding goal of getting high. The trouble with this is that for this to be right, the goals must have been changed by his previous actions. If the individual’s goals as well as his internal model can be changed, then it becomes a lot more difficult to infer anything about the goals or beliefs from the actions. In particular, what is to stop us saying that everyone’s ‘goal’ is to do the actions that they actually do? This fits with the intentional stance but tells us next to nothing. I think I know what the response to this criticism would be: that this would be to multiply the number of ‘goals’ unnecessarily (one for each action). Fair enough, but I think it does point to a genuine problem with the theory.

In the case of addiction, we actually have quite a good scientific theory as to what happens. Taking addictive drugs releases chemicals into the brain which mess up the reward signals that our brain uses as part of its decision making process. It shouldn’t be a surprise that injecting chemicals into the part of us that makes decisions might mess up that process. Now should this be described as messing with our goals and desires, or as messing with our rationality?
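
To make the ambiguity concrete, here is a toy value learner in Python. This is entirely my own illustration, not a claim about real neurochemistry: the ‘drug’ does nothing except inflate the reward signal for one action, yet from the outside the agent looks exactly as if its goals had changed.

    import random

    # Toy illustration: an agent learns the value of each action from a
    # reward signal, and the "drug" action chemically inflates that signal.
    true_reward = {"eat": 1.0, "socialise": 1.2, "drug": 0.1}

    def felt_reward(action):
        # The corrupted signal: the drug adds a large bonus to what is felt.
        return true_reward[action] + (5.0 if action == "drug" else 0.0)

    value = {a: 0.0 for a in true_reward}
    for step in range(1000):
        # Occasionally explore; otherwise pick the highest-valued action.
        if random.random() < 0.1:
            action = random.choice(list(value))
        else:
            action = max(value, key=value.get)
        # Nudge the learned value towards the (corrupted) reward signal.
        value[action] += 0.1 * (felt_reward(action) - value[action])

    print(value)  # "drug" ends up valued far above everything else

Nothing in the code labelled a goal was edited, and nothing labelled rationality either; only the reward signal was touched, which is exactly why the question is hard to answer.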

Returning to the first two examples of the champagne socialist and the sinning theist, I think most people would say that they really do believe what they say they believe, but that those beliefs don’t directly determine the individual’s actions. This seems a reasonable point of view, but it needs some work to make it more precise. My first attempt is this boxes and arrows diagram:

[Figure: Model 2 – The adviser to the king]

In the top left we have a box with little boxes inside it. The large box represents in some sense the conscious part of this individual’s mind. He has a conscious model of the world, conscious goals, and from this he can formulate a conscious plan. However, in the larger scheme of things, these consciously determined plans don’t have the final say. There is also an unconscious scheme: an unconscious internal model of the world, unconscious goals, and a plan based on these unconscious elements in addition to the conscious ones. It is this unconscious bit that has the final say, and the results of actions taken feed back into both the conscious and unconscious internal models of the world. In this model, the conscious part is in some sense acting as an adviser to the real decision maker, the unconscious part. Relative to this scheme, we can say that the beliefs of the conscious part (the internal model inside the top left box) are that individual’s ‘beliefs’. This could explain why an individual could be capable of ‘believing’ one thing but acting in contradiction to that belief: the unconscious planner is just overriding the suggestions of the conscious adviser.

This model can also explain a lot. You could say that the conscious module at the top left is a sophisticated, rational reasoner, capable of using logic, deduction, etc., whereas the unconscious decision maker uses a much cruder rule, something along the lines of: do what’s worked well in the past according to how much dopamine is sloshing around my brain afterwards. This would obviously explain the drug addict example, where the unconscious decision maker is being directly messed with chemically. It would also explain the sinning theist and champagne socialist: the unconscious decision maker with the real power has just realised that the high and mighty ideals of the conscious module don’t make it happy, whereas sex and champagne do.
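
Purely as an illustrative sketch (every detail assumed, nothing empirical), the two-module structure looks something like this in code: the conscious module derives advice from explicitly held beliefs, but the decision is scored by a cruder habit system trained only on past payoff.

    # Toy rendering of Model 2 (all details are illustrative assumptions):
    # a conscious adviser proposes an action from explicit beliefs, but a
    # cruder unconscious system, trained on past reward, has the final say.

    def conscious_advice(beliefs):
        # The sophisticated reasoner: derives a plan from explicit beliefs.
        return "give_to_charity" if beliefs["equality_matters"] else "buy_champagne"

    # Values learned from past reward (dopamine, in the story above).
    habit_value = {"give_to_charity": 0.1, "buy_champagne": 0.9}

    def decide(beliefs, advice_weight=0.2):
        advice = conscious_advice(beliefs)
        # The adviser's suggestion counts for something, but only as one
        # input among others; past payoff usually wins.
        score = {a: v + (advice_weight if a == advice else 0.0)
                 for a, v in habit_value.items()}
        return max(score, key=score.get)

    beliefs = {"equality_matters": True}
    print(conscious_advice(beliefs))  # what he "believes": give_to_charity
    print(decide(beliefs))            # what he does: buy_champagne

The champagne socialist falls out immediately: the conscious module’s output is perfectly sincere, it just isn’t the thing that determines the action.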

So this scheme is nice, but it has a different set of problems. The first is that it makes much stronger empirical commitments than the intentional stance scheme. Who knows if this is how the brain really makes its decisions or not? Further neuroscientific research may tell us how we really make decisions, but wouldn’t we like to be able to say something meaningful without waiting for that (which may easily be many decades coming)? In general, ‘belief’ clearly relates to an internal state, and therefore a definition of it would seem to have to relate to a model of human thought and behaviour. Or is there another, neutral way of defining it? I haven’t got one. Any suggestions?

The second problem is more philosophical, and relates to how we use a term like belief. Suppose the model of decision making in the adviser scheme were accurate: does it make sense to say that the individual as a whole ‘believes’ what are really just the beliefs of one part of it? Or is this just overemphasising the conscious part of belief? Perhaps we need a new vocabulary of belief that makes this distinction clearer? Or perhaps we should just abandon the word entirely?

I propose that instead we always bear in mind the limit of applicability of a concept. Most concepts are useful in certain contexts but break down at certain edge cases (for example, the concept of ‘inside’ breaks down at the quantum level, where objects can jump between positions without passing through intermediate ones). From this point of view, we could say that the concept ‘belief’ has some everyday uses, but that we should always have in mind its pragmatic limit of applicability. Things like belief in gravity, or beliefs about observable facts, which people act consistently with almost all of the time, can still be used unproblematically, but talk of belief in ideals or belief in God should raise alarm bells, because we know that these beliefs will not inform us as to individuals’ actions. An alternative conclusion to my previous controversial entry would then be: belief in God is beyond the pragmatic limit of applicability of the concept ‘belief’.



Your chance to become part of scientific history!
December 1, 2007, 10:20 pm
Filed under: Academia, Neuroscience

I’m part of a small group working on a new scientific endeavour, and we need your help!

We’re writing a new piece of software to simulate the behaviour of networks of neurons (nerve cells in the brain). Well, we’ve solved the differential equations, worked out the technical problems and written the code, but now we’re stuck. We’ve hit a barrier: our own CDD (creativity deficit disorder).

We need a name.

And it needs to be so damned snappy that as soon as someone hears it they’ll want to download our software and stop using the competition (with names like Neuron, XPP and Nest).

So can you help? Your scientific community needs you!

Boring technical details follow for those who are interested (not necessary for thinking up a cool name): The software is going to be written in Python, with some bits possibly in C++, using SciPy and vectorised code for efficient computations. The emphasis is on the code being easy to use for people without much experience in programming, and easy to extend. Initially, the software will focus on networks of simple model neurons rather than detailed anatomical models (although we might get round to adding that later).
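
For the curious, here is roughly the flavour of vectorised simulation: one NumPy array operation updates every neuron in the network at once, instead of looping over them. This is my own minimal sketch of a leaky integrate-and-fire population, not the actual code or API of the software:

    import numpy as np

    # Toy sketch: N leaky integrate-and-fire neurons, dV/dt = -V/tau + I,
    # integrated with a single vectorised Euler step per timestep.
    N = 1000                  # number of neurons
    dt, tau = 0.1e-3, 20e-3   # timestep and membrane time constant (seconds)
    V_threshold, V_reset = 1.0, 0.0

    V = np.zeros(N)                       # all membrane potentials at once
    I = np.random.uniform(0.0, 60.0, N)   # constant input current per neuron

    for step in range(10000):             # simulate one second
        V += dt * (-V / tau + I)          # one update for all N neurons
        spiking = V > V_threshold         # boolean mask of neurons that fired
        V[spiking] = V_reset              # reset the ones that spiked

    print(f"{spiking.sum()} neurons spiked on the final step")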



From playing games to the nature of knowledge
August 1, 2007, 1:36 pm
Filed under: Academia, Epistemology, Games, Mathematics, Neuroscience, Philosophy

I’ve been reading some interesting things about games, computers and mathematical proof recently. A couple of months ago, it was announced that the game of checkers (or draughts) had been “solved”: if both players play perfectly then the game ends in a draw. That’s sort of what you’d expect, but it’s not entirely obvious. It might have been the case that getting to go either first or second was a big enough advantage that with perfect play either the first or second player would win. So for example in Connect Four, if both players play perfectly then the first player will always win.

Checkers is the most complicated game to have been solved to date. The number of possible legal positions in checkers is 10^20 (that is, a one followed by twenty zeroes). By comparison, tic-tac-toe has 765 different positions, Connect Four has about 10^14, chess about 10^40 and Go about 10^170 (some of these are only estimates).

There’s a strange thing about the terminology used. A game being “solved” doesn’t mean that there’s a computer program that can play the game perfectly. All it means is that we know that if the players did play perfectly, then the game would end in a certain way. So for example with checkers, it might be the case that you could beat the computer program Chinook (which was used to prove that perfect play ends in a draw). Counterintuitively, the way to do this would be to play a move that wasn’t perfect. The number of possible positions in checkers is too large for the team to have computed the best move in every single one; they limited the number of computations they had to perform by using mathematical arguments to show that certain moves weren’t perfect without actually having to play through them. So, by playing a move that you knew wasn’t perfect (which means that if you played it against a perfect opponent you would certainly lose), you would force the computer into a position it hadn’t analysed completely, and then you might be able to beat it.
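
“Solving” in this sense just means computing the value of the game under perfect play, and for a game small enough to search exhaustively it takes only a few lines. Here is my own toy illustration using a trivial stand-in game (players alternately take 1 or 2 stones from a pile; whoever takes the last stone wins):

    from functools import lru_cache

    # What "solving" means: compute the value of a position under perfect
    # play. Value is +1 if the player to move wins with best play, -1 if
    # they lose. (A toy illustration of mine, not how Chinook works.)

    @lru_cache(maxsize=None)
    def solve(stones):
        if stones == 0:
            return -1  # the previous player took the last stone: we have lost
        # We win if any move leaves the opponent in a losing position.
        wins = any(solve(stones - take) == -1
                   for take in (1, 2) if take <= stones)
        return +1 if wins else -1

    for n in range(1, 10):
        print(n, "first player wins" if solve(n) == +1 else "second player wins")
    # The first player loses exactly when the pile is a multiple of 3.

Checkers is the same computation in principle; the difficulty is purely that its tree has around 10^20 positions, far too many to search exhaustively.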

This is a bit like in chess, where a very good player can beat a good computer program by playing a strategy that exploits the way the computer program works. Chess programs work by looking as many moves ahead as possible and considering what the player might do and what the computer could do in response, etc. However, the combinatorial complexity of the game means that even the fastest computers can only look so many moves ahead. By using a strategy which is very conservative and gives you an advantage only after a large number of moves, you can conceal what you’re doing from the computer which has no intuitive understanding of the game: it doesn’t see the advantage you’re working towards because it comes so many moves in the future.

So at the moment, there is no perfect player for either chess or checkers, but the top computer programs can beat essentially any opponent (in chess this is true most of the time but possibly not every time). This raises the question: how would you know if you had a computer program that played perfectly? For games like chess and checkers, the number of possible positions and games is so enormous that even storing them all in a database might take more space than any possible computer could have (the number of possible games of chess, for instance, is far larger than the number of atoms in the observable universe). Quantum computation might be one answer to this if it ever becomes a reality, but an interesting suggestion was recently put forward in a discussion on the foundations of mathematics mailing list.

The idea is to test the strength of an opponent by allowing a strong human player to backtrack. The human player can take back any number of moves he likes. So for example, you might play a move thinking it was very clever and forced your opponent to play a losing game, but then your opponent plays something you didn’t think of and you can see that actually your move wasn’t so great after all. You take back your move and try something different. It has been suggested that a good chess player can easily beat most grandmaster-level chess programs if allowed to use backtracking. The suggestion is that if a grandmaster chess or checkers player was unable to beat the computer even using backtracking, then the computer is very likely perfect. It’s not a mathematical proof by any means, but the claim is that it would be very convincing evidence, because the nature of backtracking means that any weaknesses in the computer program become greatly amplified, and any strengths it has become much weakened. If it could still beat a top player every time, then with very high probability there are no weaknesses to exploit and consequently it plays perfectly. (NB: my discussion in this paragraph oversimplifies the discussion on the FOM list for the sake of brevity, but take a look at the original thread if you’re interested in more.)
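
The protocol itself is easy to sketch: keep a history of positions and let the human rewind as far as they like before committing to a different line. Here is the shape of the loop in Python; game, engine and ask_human are hypothetical interfaces of my own, not any real library:

    # Sketch of the backtracking test loop. Undoing restores the position
    # before the human's last move, so any losing line the human finds in
    # the engine can be revisited and probed at will; this is what
    # amplifies the engine's weaknesses.

    def play_with_backtracking(game, engine, ask_human):
        history = [game.initial_position()]
        while not game.is_over(history[-1]):
            command = ask_human(history[-1])   # a move, or "undo"
            if command == "undo":
                if len(history) >= 3:
                    del history[-2:]           # drop engine reply + human move
                continue
            history.append(game.apply(history[-1], command))  # human moves
            if not game.is_over(history[-1]):
                reply = engine.best_move(history[-1])
                history.append(game.apply(history[-1], reply))
        return game.result(history[-1])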

This leads to all sorts of interesting questions about the nature of knowledge. At one end, we have human knowledge based on our perceptions of the world and our intuitive understanding of things. Such knowledge is obviously very fallible. At the other end, we have fully rigorous mathematical proof (which is not what mathematicians actually do yet, but that’s another story). In between, there is scientific knowledge, which is inherently fallible but forms part of a self-correcting process. Scientific knowledge always gets better, but is always potentially wrong. More recently, we have probabilistic knowledge, where we know that something is mathematically true with very high probability. Interactive proof systems are an example of this.

The argument above about backtracking in chess suggests a new sort of knowledge based on the interaction between human intuition and computational and mathematical knowledge. These newer forms of knowledge and the arguments based on them are very much in accord with my view of the pragmatic nature of knowledge. My feeling is that interactions such as this between computational, non-intuitive knowledge, and human intuitive understanding will be very important in the medium term, between now and when artificial intelligence that is superior to our own both computationally and intuitively becomes a reality (which is I feel only a matter of time, but might not be in my lifetime). At the moment, these new forms of knowledge are really only being explored by mathematicians and computer scientists because the human side is not yet well enough understood. It will be interesting to see how this interaction between human and computational/mathematical understanding will develop as our understanding of how the human brain works improves.

If you found this entry interesting, you might also be interested in two unfinished essays I have on epistemology without truth and the philosophy of mathematics.



… and just so you know I’m serious
April 18, 2007, 12:24 am
Filed under: Books, Neuroscience

Hmm, it strikes me that three posts in one night (and at least four in a row) about food suggest I’m a rather frivolous person. Well, hey! I’ve been on holiday for a week. What can I say? Anyway, in partial mitigation of my consumption-related posting:

1.

I have a plan, and that plan is to read a book by every winner of the Nobel Prize in Literature. (Well, not the poetry. I can’t hack poetry for some reason.) So… opinions? Pointless exercise or not? Any of them I should particularly prioritise or avoid? Will I even be able to get copies of books by all of them? Some of them seem pretty obscure.

So far, I’ve only read 16 of them. How many have you read? (This could be a meme, but this is a very serious post, so let’s not make it one.)

2.

Before I wrote these entries I read this paper:

The timing of action potentials in sensory neurons contains substantial information about the eliciting stimuli. Although the computational advantages of spike timing–based neuronal codes have long been recognized, it is unclear whether, and if so how, neurons can learn to read out such representations. We propose a new, biologically plausible supervised synaptic learning rule that enables neurons to efficiently learn a broad range of decision rules, even when information is embedded in the spatiotemporal structure of spike patterns rather than in mean firing rates. The number of categorizations of random spatiotemporal patterns that a neuron can implement is several times larger than the number of its synapses. The underlying nonlinear temporal computation allows neurons to access information beyond single-neuron statistics and to discriminate between inputs on the basis of multineuronal spike statistics. Our work demonstrates the high capacity of neural systems to learn to decode information embedded in distributed patterns of spike synchrony.

Good stuff.
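
The learning rule in the abstract can be caricatured in a few lines. What follows is a heavy simplification of my own, not the authors’ actual algorithm: sum kernel-filtered input spikes into a voltage trace, classify a pattern by whether the trace crosses threshold, and on errors nudge each synapse by its contribution at the moment of peak voltage.

    import numpy as np

    # Caricature of supervised spike-timing learning (mine, not the
    # paper's algorithm). Potential = weighted sum of decaying kernels,
    # one per input spike; classify "fire" if it crosses threshold; on
    # errors, adjust each weight by its drive at the time of peak potential.

    rng = np.random.default_rng(0)
    n_syn, T, tau, theta, lr = 50, 100, 10.0, 1.0, 0.01
    t = np.arange(T)

    def drive(spike_times):
        """Kernel-filtered input per synapse, shape (n_syn, T)."""
        d = np.zeros((n_syn, T))
        for i, s in enumerate(spike_times):
            d[i] = np.exp(-(t - s) / tau) * (t >= s)
        return d

    patterns = [rng.integers(0, T, n_syn) for _ in range(20)]  # one spike per synapse
    labels = rng.integers(0, 2, 20)                            # desired fire / no-fire
    w = rng.normal(0.0, 0.1, n_syn)

    for epoch in range(200):
        for spikes, label in zip(patterns, labels):
            d = drive(spikes)
            V = w @ d                        # potential over time
            t_peak = int(np.argmax(V))
            if (V[t_peak] > theta) != bool(label):
                # Error: push the peak up (missed fire) or down (false fire).
                w += lr * (1.0 if label else -1.0) * d[:, t_peak]

    correct = sum(((w @ drive(s)).max() > theta) == bool(l)
                  for s, l in zip(patterns, labels))
    print(f"{correct}/20 training patterns classified correctly")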



Illusions
December 10, 2006, 7:11 pm
Filed under: Neuroscience

Not many blog entries recently because I’ve been (dare I say it) working… and ill too. Anyway, I found some rather cool visual illusions (not entirely unconnected with work), and I thought I’d share some of the best. Lots more on this page.

None of these pictures are animated. The last one is a little subtle compared to the first two.

[Image: snake.jpg]

[Image: rollers.jpg]

[Image: rotrays.jpg]