The Samovar


What makes science work?
November 5, 2010, 3:30 am
Filed under: Academia, Epistemology, Manifesto, Philosophy, Science

You often hear people responding to the claims of homeopaths and the like with a request to see the peer-reviewed literature supporting their claims. The idea is that peer review is a unique characteristic of proper science, essential to making science work. I recently read a history of peer review in science (subscription to Trends in Biotechnology unfortunately required) that throws some doubt on this. Fascinatingly, peer review didn’t really take off until 1959, with the invention of the photocopier: before then, it was simply too expensive to make copies of papers to send out for peer review. There was something like peer review before that, in that the editor would ask the opinions of friends and colleagues, but this was far from the standard we have for peer review today. A few journals would make the effort to copy papers and send them out for review, but it was rare.

Since science before 1959 was, in fact, extremely successful and developed perfectly well, it cannot be the case that peer review is an essential component of science. That’s not to say we should do away with it: it serves an important function today, essentially that of reducing the workload of the editor. So many papers are submitted for publication that some method is needed for choosing which papers should and shouldn’t be published, and the reviews help the editor make that decision. What the exact function of peer review ought to be today, and how it should evolve, was the subject of an interesting debate in Nature in 2006 (no subscription needed for that one, I think).

So the people demanding peer-reviewed papers from the homeopaths should perhaps question what exactly they’re doing. My guess is that they use it as an easy, but wrong, way to distinguish science from pseudoscience. It makes for a good quip. The danger, though, as argued in this article, is that it’s in fact easy to cherry-pick bad ‘scientific’ papers that have passed the peer-review process. By treating peer review as the ‘gold standard’ of scientific enquiry, they allow the justification of all sorts of nonsense.

This poses a problem though: if it’s not peer-review, what is it that makes science work? There have been many theories about this, most focussing on methodological aspects of scientific enquiry, for example Popper’s falsifiability criterion. I want to suggest an alternative point of view that doesn’t focus on methodology:

Science is the study of problems that can be addressed with the techniques available to us.

Techniques here should be understood to include both technology and, for example, mathematical or statistical tools. In this view then, physics is the ‘hardest’ science because the problems it looks at are the easiest. The variables in physical problems are much less interdependent, there are many fewer of them, and their interactions are much simpler than those in, for example, the study of social systems or economics. Our simple, mostly linear mathematical techniques and statistical methods based on independent variables then apply very nicely to physical problems.

Why does science work in this view then? The answer is that science is part of a social process: scientists do science, they are genuinely interested in finding out what happens, and even though individual scientists will sometimes be very wrong, will try to promote a bad theory that doesn’t have much support, or will attack a rival theory, eventually the egotistical motives involved will fade whereas the usefulness of good theories will last. This process may take a generation before change happens, but it eventually works. This process relies on there being at least some bias in favour of good theories over bad ones, even if the bias is small in comparison to the egotistic biases in favour of established theories. However, if available techniques are not good enough to introduce this bias, the good theories won’t win out over the bad ones, even in the long run. Thus, in unscientific subjects we shouldn’t be surprised to see cyclical variations in theories, where the period of the cycle is related to the duration of a scientific career. Young researchers will look at the work of the old researchers with disdain and propose radical alternatives, to which they will become attached. In turn, their theories will be rejected by those that come after them, and so on without ever stabilising.

In a way this is obvious: if there is no evidence to support or reject theories, then we cannot improve them. But it’s important to note that whether there is any evidence to support or reject theories is to a large extent a function of how easy the problem being studied is. The fact that evidence in physics is so much stronger than it is in economics is hardly a reason to scorn economists.

This point of view on why science works may help scientists look at researchers in other fields both more humbly and more reasonably. Humbly, in that they should realise that the reason their field of study is so advanced is that the problems are so easy, and reasonably, in that it should help them to see more accurately why it is that some fields make better progress than others, without getting distracted by methodology.



Arationality and honesty

Perfect rationality is impossible, and the limits of the scope of the concept of rationality are important. I start from an observation that many would not agree with: that there is no such thing as truth (which I’ve argued elsewhere to some extent). Truth is just a heuristic concept that helps us to function in everyday situations. Dropping the notion of truth involves us in some considerable difficulties, which I discuss in the previous link, but these difficulties are not insurmountable. It is possible to have a useful conception of epistemology free from the notion of truth. In this entry, I criticise the idea that we can be ultimately rational, and look at the consequences of taking this seriously for ethics and morality.

Epistemology is in some sense a specific form of rationality: it concerns only thoughts and ideas, whereas rationality is supposed also to encompass actions. An action can certainly be instrumentally rational. Someone is thirsty, so they pick up a glass, turn on the tap and drink – this is instrumentally rational, rational with respect to a given set of goals which are not themselves analysed. But actions cannot be rational in and of themselves; they must be rational relative to a set of ends, and ultimately those ends cannot be described in terms of rationality. In consequence, people cannot be (ultimately) rational. One consequence is that Vulcans couldn’t exist – you cannot act by logic alone. I call the aspects of our behaviour that cannot be analysed in terms of rationality arational, in distinction to irrationality, which is doing the opposite of what rationality dictates. Examples of arationality abound: emotions, tastes, and so on. But also, at the boundary, things like the fact that we keep breathing rather than just stopping.

So can we analyse the arational? To a certain extent, yes: we can say more about it than nothing, but there are no complete answers. Later, this leads on to the ethical concepts of honesty and responsibility, which I believe are related to arationality.

To start with, let’s take the trivial observation that we humans are nothing so special. We’re essentially “meat machines”, machines built by our genes to replicate themselves (this too is a simplification, but bear with me). We’re built on a physical substrate subject to physical laws. It’s surprising that we can do anything like thinking at all. It’s instructive to think about the extent to which we could call the behaviour of other animals rational. Is a dog rational? What about an amoeba?

So, given that, rather than talking in high-level concepts like “reasons” (X believed Y and so took action Z), which are presumably supposed to be understood in some undefined way as relating to the internal state of the central nervous system, I’m just going to talk about decisions, which can be analysed externally. We can say in some sense unambiguously that individual X took action Z: they made a decision to take that action. The decision need not be conscious – remember, we’re not talking about internal states here. So we repeatedly make the “decision” to breathe, just as the amoeba makes “decisions” to extend its pseudopodia, or what have you.

Now this way of looking at things helps us to see what we can and can’t say about arationality. To some extent it can be analysed. Obviously we mostly keep choosing to breathe because we would be unsuccessful meat machines if we didn’t, and so our genes wouldn’t be replicated. This isn’t to say that we must do these things, just that you would expect most individuals to make these sorts of decisions most of the time, because they’re meat machines formed from recombinations of genetic material that tended to act in this way (there are some assumptions there, but that’s another story). Evolution also gives us a point of view on when we can’t analyse arationality. A new individual, either because of a particular recombination of genetic material or because of a mutation, exhibits a new type of behaviour. This happened for reasons we can understand (maybe just chance), but until the individual’s interactions with the environment determine the success or otherwise of the individual in reproducing, we can’t say whether it was a good behaviour or not (from the point of view of the genes). Until that point, the behaviour just is a behaviour, and the individual is just an individual that exhibits that behaviour. What more can be said before the success of the behaviour is tested in the world? In conclusion, looking at arationality in terms of behaviours, we can analyse much of it in a scientific way, but ultimately, in certain cases, all we can say is that such-and-such is the behaviour exhibited by such-and-such an individual.

At this point I want to bring in the moral and ethical aspects. We like to think of morality and ethics as being about right and wrong, but just as there is no truth, and just as rationality is not entirely straightforward, there is no such thing as right and wrong. There are only decisions. There are decisions individuals make for themselves, and decisions that a political entity makes for others (social mores, codes of conduct, rules, punishment, etc.). A poor person steals from a rich person, the rich person is so rich they never notice they’ve had something stolen. Has a wrong been done?

Rather than talk about this from the point of view of whether or not the poor person made the right decision, I want to just talk about the types of decisions that have been made here, by whom, and what considerations bear on them. First of all, the thief has made a decision to break the rules. Secondly, the society has made a decision to punish people who are caught breaking the rules. We wouldn’t like to say the thief did wrong, because nobody was hurt by the action and the thief’s life was made better as a consequence. On the other hand, that doesn’t mean the decision was necessarily right, because if it were right then surely the society would be wrong to make the decision to punish people who are caught. It’s clear that talking about this case in terms of right and wrong is a surefire way to end in confusion. Instead it’s a calculus. The society makes its choice to punish thieves because if it didn’t – it believes – there would be a breakdown of order. The thief makes the decision to break the rules knowing the decision of society, and must take responsibility for this action. If they are caught, they will face punishment; if not, then they won’t.

Suppose now that the thief was a rich person stealing from a poor person. The analysis above seems unchanged, and indeed it is. One thing that may change is that the society may choose to allocate its resources differently towards catching the one or the other sort of thief. For example, a society may decide to put more resources into catching poor thieves stealing from the rich than rich thieves stealing from the poor, or it may do it the other way round. That’s politics. In my view, society today tends more towards the former whereas it ought to tend more towards the latter, and I make political decisions based on that. These are my decisions, which are ultimately arational. I could put forward reasons for this view, but those are ultimately judged on arational criteria. Others may differ.

Equating the identity of an individual with the actions they decide to take in the circumstances in which they find themselves gives us a useful way of looking at two problems: free will, and morality. There is a classical problem that free will cannot be consistent with determinism (if an action was determined by physical laws, it cannot have been freely chosen, because it couldn’t have been otherwise). There is an extension that says that all decisions must be either determined or random. It goes like this. If an action wasn’t determined by physical laws, then it would effectively meet the physical definition of randomness: in exactly the same circumstances (including the experiences, desires, preferences, state of mind, etc. of the individual concerned at the time of making the decision) you could have different outcomes, making the decision effectively meaningless (not dependent on anything at all) – random, in short. However, if we identify an individual with the decisions they make, it doesn’t matter whether those decisions are determined (or random); they are still the decisions of that individual (it is just that the identity of the individual is also determined). A forced (unfree) choice would be one that no individual in the same circumstances could have made differently (e.g. you cannot choose to ignore the force of gravity).

This last point has a moral and ethical component. If we accept all the choices we actually make are not forced in this sense, then we have to take greater responsibility for them. An ugly choice that we were put under great psychological pressure to make is still our own choice because it is we ourselves who are choosing to respond to that psychological pressure. It is not an external thing acting on us in the same way that gravity is. Even if we were offered the choice between one option which would lead to our death, and another option, it’s still a free choice because we are free to choose how to weight the significance of our own death. Evolutionary processes explain why so many people will weight the significance of their own death so highly, but since the identity of the individuals themselves is the output of that process, we cannot consider that process as an external force acting on us.

The general point here is one of taking responsibility for one’s own actions, and being honest about their status as one’s own actions. Often, we attempt to excuse our actions by giving the reasons why we took them, as if these reasons were themselves external forces acting on us which we couldn’t ignore. However, as we have seen, an action itself cannot be rational; it can only be instrumentally rational with respect to an arational core. We rarely think to analyse our own arational cores deeply, but the considerations given here suggest we ought to be more aware of them and identify with them more explicitly. It may be that to do so, we must become more aware of our own logical inconsistency (even incoherency). We are often in the situation, for example, of wanting a thing and also wanting not to want it, or even believing that we don’t want it. If we make the mistake of thinking of ourselves as having a core identity that is coherent and consistent in some sense, then we are inevitably led into confusion. It may be this that underlies the phenomenon of cognitive dissonance.

Following this logic through and acting on it is actually a very difficult thing to do. It means really coming to terms with the inconsistency of our very identities (the word ‘identity’ itself suggests the difficulty). It means realising that much of what we do, we do without reason (our arational cores), and that we have made choices that we both like and dislike not by mistake or because external forces acted on us, but because that is our nature. It means taking responsibility for every choice we have made, being honest about them, and analysing ourselves. Self-analysis is unavoidable if we do not have a consistent and coherent unitary core (and can be done by introspection, and by looking at our choices and identifying them with ourselves). Finally, it means living with all that inconsistency.

In particular, it can be very difficult to honestly appraise the inequality of society, and our own place in it, and live with that. Most people reading this will have received more than their fair share of luck and will have benefited disproportionately from the work done by everyone: born into (relatively) wealthy families, receiving (relatively) good educations, and so on. It would be easy to fall into the habit of thinking, as many do, that we deserve what we have, because that is an easier idea to live with. Choosing not to engage in this sort of self-deception requires us to honestly face up to our arational cores, and the experience may not be pleasant. Why do we not give away all our wealth to those who are in need? If we even asked ourselves the question, we would probably find some reason that explained how it couldn’t be otherwise. Perhaps the prospective recipients of our wealth wouldn’t be able to make correct use of it, perhaps charities are essentially corrupt and wasteful, etc. I don’t want to say that we don’t do this because we are selfish. It is more like the choice to keep breathing: there is no need to find a reason for it, it is just a decision we keep making. The danger in finding a reason why we don’t give all our wealth away to others who need it is that it may stop us from giving any away. If there were a good reason why we shouldn’t give our money away, then presumably we shouldn’t give any away. Similarly, if there were a good reason why we should give our money away, we probably ought to give almost all of it away. If we must act by the one sort of reason or the other, then we’re faced with the choice of giving away all or nothing, and most would give nothing in that situation.

This may explain why the poor tend to give away more money than the rich. Suppose the choice were not between all or nothing, but between nothing and everything except the minimum necessary for my own survival and that of my family, dependents, etc. For the rich, this choice would be between giving nothing and maintaining their lifestyle, or changing their lifestyle to one of poverty. For someone who is already poor, giving away enough money to leave them poor wouldn’t involve any change of lifestyle, since they are poor already. By choosing to live by the idea that we do what we do because we have reasons for doing so, we put ourselves in the absurd situation that those for whom it would be easiest to give are least likely to do so. The alternative is to say that the amount we choose to give away is our own choice and is not dictated to us by our reason: to take responsibility for the choice, and not to try to pretend that we are acting by a coherent code that dictates our behaviour.

Knowledge, here particularly self-knowledge, is always better than delusion, even when it hurts. When we allow ourselves to be deluded, things always end worse than when we are clear and honest about what is going on. There are many applications of these ideas. Religion, in this view, is obviously problematic because it attempts to externalise our moral choices (indeed, our wish to externalise them may explain why religions are so prevalent); much ethical and moral philosophy, secular or otherwise, is problematic for the same reason: it supports the pretence that there is, or could be, such a coherency, which stops people from coming to terms with the lack of it. Finally, I want to focus on just one more example: propaganda.

In a previous entry I talked about Jacques Ellul’s book “Propaganda” and the idea that intellectuals are most subject to propaganda because they want to believe that they understand the world, but, lacking sufficient time to really do so, they rely on answers provided to them by others (putting them in the power of those others). The other aspect of propaganda is what Ellul calls “integration propaganda”. The idea is that once you have participated in an action, you will rationalise that action and create a justification for why it was the right thing to do. The propagandist only needs to get you to participate in an action, and you will do the intellectual reorganisation yourself. This is an aspect of propaganda that most people don’t understand (believing that propaganda is just a way of getting people to believe something by repeatedly saying it, or some other such simplification). It is essentially a form of cognitive dissonance: nobody wants to consider themselves the bad guy. Or, in the framework of this essay, people want to think of themselves as coherent and consistent, so if they took an action they must have had a reason for doing so. Recognising that we are not consistent, rational beings working to some perhaps unknown moral code then has the potential to free us from integration propaganda. Taking our own arationality and inconsistency as a given, we would no longer feel the same requirement to create a self-justifying rationalisation, and so the propaganda would not have its desired effect.



The science-technology fallacy
August 31, 2008, 12:47 am
Filed under: Academia, Epistemology, Neuroscience, Philosophy

On the comp-neuro mailing list, James Schwaber writes about the science-technology fallacy:

A lot of the discussion about the ‘right way to model’ or what to model may be a version of what my friend Mike Gruber has termed the science-technology fallacy, the idea that you have to understand a process analytically before you can use it, and he always quotes Carnot here – thermodynamics owes more to the steam engine than ever the steam engine owes to thermodynamics. Obviously, humans used and controlled fire for 100,000 years before Lavoisier explained what fire was, and planes flew for decades before there was a theory to explain how they did it. In fact theory ‘demonstrated’ that heavier-than-air flight was impossible.

This is an interesting point. Is it true? If so, what is the point of science?

I have some ideas, but I’d like to see what others have to say.



Belief and Pragmatism: God, ideals and addiction

Recently I wrote a post that provoked much disagreement, claiming that almost nobody believes in God. On further reflection, I think I can make a clearer statement (although with a less punchy headline, sadly). A less divisive way of putting it might be: what should we make of statements like

He believes X but acts as if he doesn’t,

and

He believes X but doesn’t act consistently with those beliefs.

These are quite curious statements, especially the first. I find them interesting because I want to try to interpret them as a Pragmatist. Briefly, a Pragmatist tries to talk about things only to the extent that they have real-world consequences, and tries to give things meaning relative to those consequences. From this point of view, if we take the first statement as accurate, we have a real problem: the first clause says that he believes X, but the second denies any connection between X and his actions. To define a pragmatic (I’ll use a small p from here on) conception of ‘belief’ would mean defining it in terms of its consequences, but the statement says there are none.

This might seem like a fairly irrelevant problem, but I think it’s more significant than it seems because people make statements like this all the time. I’ll illustrate with three examples:

  1. The “champagne socialist” claims to believe in an ideal of equality, but acts to get as much as he can for himself.
  2. The theist who regularly sins and just generally doesn’t seem to pay anything more than lip service to his faith.
  3. The drug addict, who can see clearly that his habit will kill him but keeps on doing it anyway.

The first two seem problematic to me, and the third less so although it shares many similar features.

Now at this point I should say that you could easily disagree that these are problematic. For example, most people aren’t pragmatists and so wouldn’t see the difficulty in finding a pragmatic definition of belief to be a problem. For my part, I feel that something like pragmatism is inescapable – if we’re to make sense then what we talk about has to be grounded in experience. William James sometimes referred to pragmatism as “radical empiricism”, which seems appropriate.

In the entry I wrote about belief in God, I took a point of view which I think is roughly equivalent to what Dennett calls the Intentional Stance – though I must say I haven’t read his book, so I might be mistaken. As I understand it, the intentional stance says: hypothesise that an individual is acting rationally with respect to his beliefs and desires. We can’t know what the beliefs and desires are (they’re internal), but we can see the actions and from them attempt to infer what the beliefs and desires are. Even though we know that individuals are not rational, thinking about them in this way might tell us something useful.

Because I’ve been making PowerPoint slides for presentations recently, I couldn’t help but turn this idea into one of those silly diagrams with boxes and arrows. In the diagram, ‘internal model’ refers to the model of the world that an individual has, which is continually updated according to their experiences. The ‘plan’ box gets arrows from ‘goals’ and ‘internal model’ because the plan is produced by a rational maximiser attempting to find actions which, at least in the individual’s internal model of the world, would best fit its goals.

[Image: Model 1 – The intentional stance?]
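Since a static diagram can only say so much, here is the same loop as a minimal Python sketch. To be clear, everything in it (the function names, the thirst example) is my own illustration of the diagram, not anything from Dennett: an internal model updated by experience, and a plan chosen by a rational maximiser relative to that model.

    # A minimal sketch of Model 1. All names and the thirst example are
    # illustrative assumptions, not from Dennett.

    def update_model(model, observation):
        # experience continually feeds back into the internal model
        model.update(observation)
        return model

    def plan(model, goals, actions, predict):
        # rational maximiser: choose the action whose predicted outcome,
        # *according to the internal model*, best fits the goals
        def score(action):
            outcome = predict(model, action)
            return sum(goals.get(k, 0.0) * v for k, v in outcome.items())
        return max(actions, key=score)

    # Toy usage: a thirsty agent whose internal model says the tap gives water.
    model = {'tap_gives_water': True}
    goals = {'thirst_quenched': 1.0}

    def predict(m, action):
        # the agent's own prediction of outcomes, based on its model
        quenched = action == 'drink_from_tap' and m['tap_gives_water']
        return {'thirst_quenched': 1.0 if quenched else 0.0}

    print(plan(model, goals, ['drink_from_tap', 'do_nothing'], predict))
    # -> drink_from_tap

Given the model and goals above, plan picks ‘drink_from_tap’; the intentional stance runs this inference in the other direction, observing the action and trying to recover the model and goals.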

I think this way of looking at things can tell us a lot, but it has some difficulties with the example of the drug addict. The drug addict doesn’t appear to act as if he believes his habit will damage him, although he says he knows it will, wishes to stop, and so on. What he claims to believe also seems fairly uncontroversial in some sense, and is entirely about real-world things with real-world consequences (unlike the case of belief in God). One response would be to say that his goals are different from what you might think: an addict has the overriding goal of getting high. The trouble with this is that, for it to be right, the goals must have been changed by his previous actions. If the individual’s goals as well as his internal model can be changed, then it becomes a lot more difficult to infer anything about the goals or beliefs from the actions. In particular, what is to stop us saying that everyone’s ‘goal’ is to do the actions that they actually do? This fits the intentional stance but tells us next to nothing. I think I know what the response to this criticism would be: that that would be to multiply the number of ‘goals’ unnecessarily (one for each action). Fair enough, but I think it does point to a genuine problem with the theory.

In the case of addiction, we actually have quite a good scientific theory as to what happens. Taking addictive drugs releases chemicals into the brain which mess up the reward signals that our brain uses as part of its decision making process. It shouldn’t be a surprise that injecting chemicals into the part of us that makes decisions might mess up that process. Now should this be described as messing with our goals and desires, or as messing with our rationality?
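Here is a toy version of that question in code, borrowing a textbook reinforcement-learning value update (my choice of formalism; the real neuroscience is far messier, and the numbers are made up):

    # The drug doesn't rewrite the agent's goals directly: it inflates the
    # felt reward signal, and the learned values that drive future
    # decisions get warped as a result. All numbers are illustrative.

    alpha = 0.1                                      # learning rate
    values = {'eat': 0.0, 'take_drug': 0.0}          # learned action values
    true_reward = {'eat': 1.0, 'take_drug': -5.0}    # long-run consequences
    felt_reward = {'eat': 1.0, 'take_drug': 10.0}    # chemically inflated signal

    for _ in range(100):
        for action in values:
            # learning tracks the felt signal, not the true consequences
            values[action] += alpha * (felt_reward[action] - values[action])

    print(values)  # 'take_drug' ends up valued far above 'eat',
                   # despite being far worse in true consequences

Whether you read the corrupted values as corrupted desires or corrupted rationality is exactly the ambiguity in the question above.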

Returning to the first two examples, the champagne socialist and the sinning theist, I think most people would say that they really do believe what they say they believe, but that those beliefs don’t directly determine their actions. This seems a reasonable point of view, but it needs some work to make it more precise. My first attempt is this boxes-and-arrows diagram:

[Image: Model 2 – The adviser to the king]

In the top left we have a box with little boxes inside it. The large box represents in some sense the conscious part of this individual’s mind. He has a conscious model of the world, conscious goals, and from this he can formulate a conscious plan. However, in the larger scheme of things, these consciously determined plans don’t have the final say. There is also an unconscious scheme: an unconscious internal model of the world, unconscious goals, and a plan based on these unconscious elements in addition to the conscious ones. It is this unconscious bit that has the final say, and the results of actions taken feed back into both the conscious and unconscious internal models of the world. In this model, the conscious part is in some sense acting as an adviser to the real decision maker, the unconscious part. Relative to this scheme, we can say that the beliefs of the conscious part (the internal model inside the top left box) are that individual’s ‘beliefs’. This could explain why an individual could be capable of ‘believing’ one thing but acting in contradiction to that belief: the unconscious planner is just overriding the suggestions of the conscious adviser.

This model can also explain a lot. You could say that the conscious module at the top left is a sophisticated, rational reasoner, capable of using logic, deduction, etc., whereas the unconscious decision maker uses a much cruder rule, something along the lines of: do what’s worked well in the past according to how much dopamine is sloshing around my brain afterwards. This would obviously explain the drug addict example, where the unconscious decision maker is getting directly messed with chemically. It would also explain the sinning theist and the champagne socialist: the unconscious decision maker with the real power has just realised that the high and mighty ideals of the conscious module don’t make it happy, whereas sex and champagne do.
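Under the same illustrative conventions as before (all names and numbers are my own assumptions), Model 2 might be sketched like this:

    # A minimal sketch of Model 2: a sophisticated conscious module
    # proposes a plan from professed ideals, but a cruder unconscious
    # module, working from what felt good in the past, has the final say.

    def conscious_advice(actions, professed_values):
        # the rational adviser: ranks actions by consciously held ideals
        return max(actions, key=lambda a: professed_values.get(a, 0.0))

    def unconscious_decision(actions, advice, felt_reward, advice_weight=0.2):
        # the real decision maker: mostly 'do what felt good before';
        # conscious advice counts, but only with a small weight
        def score(a):
            return felt_reward.get(a, 0.0) + (advice_weight if a == advice else 0.0)
        return max(actions, key=score)

    # Toy usage: the champagne socialist.
    actions = ['give_to_charity', 'drink_champagne']
    professed = {'give_to_charity': 1.0, 'drink_champagne': 0.0}
    felt = {'give_to_charity': 0.1, 'drink_champagne': 0.9}
    advice = conscious_advice(actions, professed)       # 'give_to_charity'
    print(unconscious_decision(actions, advice, felt))  # -> drink_champagne

The advice_weight parameter is doing the philosophical work here: the conscious module is only an adviser, and a fairly junior one.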

So this scheme is nice, but it has a different set of problems. The first is that it is much more empirical than the intentional-stance scheme. Who knows whether this is how the brain really makes its decisions? Further neuroscientific research may tell us how we really make decisions, but wouldn’t we like to be able to say something meaningful without waiting for that (which may easily be many decades coming)? In general, ‘belief’ clearly relates to an internal state, and therefore a definition of it would seem to have to relate to a model of human thought and behaviour. Or is there another, neutral way of defining it? I haven’t got one. Any suggestions?

The second problem is more philosophical, and relates to how we use a term like belief. Supposing the decision-making scheme of the adviser model were accurate, does it make sense to say that the individual as a whole ‘believes’ what are really just the beliefs of one part of it? Or is this just overemphasising the conscious part of belief? Perhaps we need a new vocabulary of belief that makes this distinction clearer? Or perhaps we should abandon the word entirely?

I propose that instead we always bear in mind the limit of applicability of a concept. Most concepts are useful in certain contexts but break down at certain edge cases (for example, the concept of ‘inside’ breaks down at the quantum level, where objects can jump between positions without passing through intermediate ones). From this point of view, we could say that the concept ‘belief’ has some everyday uses, but that we should always have in mind its pragmatic limit of applicability. Things like belief in gravity, or beliefs about observable facts, which people act consistently with almost all of the time, can still be used unproblematically; but talk of belief in ideals or belief in God should raise alarm bells, because we know that these beliefs will not inform us as to individuals’ actions. An alternative conclusion to my previous controversial entry would then be: belief in God is beyond the pragmatic limit of applicability of the concept ‘belief’.



Nobody believes in God

OK, not nobody, but almost nobody.

To believe something, you have to act in a way that is consistent with the belief being true. Otherwise, you’re just saying that you believe it. If someone tells you that twiglets are highly toxic and will kill you instantly, at the same time as munching their way through a bag of them, you’re likely to doubt they really believe it. Same thing if they told you that eating twiglets would lead to an eternity of damnation. You wouldn’t trade the brief pleasure of eating a bag of twiglets for an eternity of damnation if you really believed in it. But this is exactly the situation of people claiming to believe in God while simultaneously doing things all the time that are inconsistent with that belief being true. Anyone who believes in hell but sins anyway – they don’t really believe in hell. Someone who believes in the teachings of Jesus, but also thinks that capitalism is a great idea – doesn’t really believe in Jesus’ teachings at all. And so on.

Now at this point, a Catholic will come along and say: you don’t necessarily go to hell if you sin, as long as you repent afterwards. But… if you sin planning to ‘repent’ afterwards, that doesn’t count (so I’m told). Well, I bet quite a lot of that goes on, if people are honest with themselves. It seems to me that if you really believed in God, you wouldn’t try to sneak stuff by on a technicality. If you have any respect for the concept at all, you’ve surely gotta believe that He is wise to that.

In fact, when a religious rule is inconvenient, it tends to be ignored, or the meaning of it changed. In a capitalist society, the stuff that is antithetical to the pursuit of wealth is ignored. In a liberal society, the stuff about stoning adulterers and homosexuals is ignored. Conversely, in an illiberal one the stuff about loving your neighbour and turning the other cheek is ignored.

When it comes to a clash between what religion says you should do, and what is convenient to do in real life, convenience wins out over religion almost every time. Or in other words, the reason that there are so many adulterous affairs is that people don’t give any credence to the idea that they will be eternally punished for it in the afterlife (no shag is good enough to warrant infinite and everlasting pain as a consequence, surely?). In practice they behave, quite sensibly, as if the notions of religion were false. And for these reasons, I think it’s fair to say that most people don’t believe in God.

The meaning of ‘belief’

I suppose that to make my case more convincing I need to say something about the meaning of the word ‘belief’. Three obvious possibilities come to mind when trying to define what belief might mean; someone believes something if:

  1. They say they believe it.
  2. They act in a way that is consistent with it being true.
  3. They are in some internal state correlative with the concept ‘belief’.

The twiglet example shows that (1) isn’t good enough, and it’s not clear that (3) has any meaning, although it’s obviously compelling in some way. So I have to go with (2), although I’d modify it slightly: I would say that to believe something is, roughly speaking, to act in accordance with a mental model of the world in which the proposition is true. I prefer this way of talking about it because it deals with the difficulty of defining what is or isn’t true (you can define the truth or falsity of a proposition relative to a model without having to define it for the real world), and it gives a slightly more precise idea of what sorts of actions count as consistent (namely, those produced by some decision-making procedure based on a mental model relative to which the proposition is true). This definition has its difficult points too, but I think it’s a helpful starting point at least.

In my experience of explaining this idea to people, there are various sticking points that stop them from agreeing that nobody believes in God. For starters, it seems kind of rude to suggest that all these people say they believe in God but don’t really. Well, maybe that is rude, but is it any ruder than saying that one of their fundamental beliefs is wrong and that their view of the world is completely warped? I don’t think so, and even if it is, that’s no reason not to say it. I think a more fundamental sticking point is that most people have some mixture of definitions (1) and (3) in mind when asked what belief means. If there is a mental state correlative to ‘belief’ – and introspection and intuition say there is – then surely the best person to report the status of that mental state is the person concerned. All very democratic, but people are often very bad at introspection, and may think that the mere fact that they are saying something without attempting to deceive means they believe it. The problem with that is: what about the unconscious?

The last sticking point is perhaps the most interesting of all: in many ways it seems as though people do act in a way that is consistent with their belief being true. They go to church (some of them), they try to avoid sinning too much, they pray, etc. My response is that all of these actions are consequences of their believing that they believe, not of their actual believing. And I think that’s not a contradiction. The thing is, our mental models are disjointed, fragmentary ones, not grand theories of everything. To get by in the world, we only need incomplete, heuristic models of situations that tend to recur. Having a mental model of the world in which we act as if we had a mental model of the world in which God exists doesn’t necessarily mean that we do indeed have a mental model of the world in which God exists. Mental models, and decision-making procedures based on them, don’t have to be complete or accurate. They don’t need to be deductively complete or consistent, because most of the time we’re neither capable of nor interested in drawing all the deductive conclusions possible from our different fragmentary mental models. In particular, our mental models of ourselves are often quite incredibly wrong. We think “In situation X I would do Y”, but then situation X happens and we do Z, the exact opposite of Y. It happens all the time. So it’s perfectly possible that we believe that we believe in God, and consequently do all of the things we associate with a person who believes in God, but don’t actually believe in God (which would, if we thought about it deeply enough, entail doing all sorts of things we wouldn’t actually do).

Dennett

With most ideas, someone has already had them before you (often Hume in my experience, the clever bugger), and this is no exception. I haven’t read much Dennett, but it appears he has covered some of the same ground. I’m told that he makes a distinction between belief and opinion that is somewhat akin to what I’m talking about here. I didn’t find anything directly about this (please post a link in the comments if you have a good one), but his article Do Animals Have Beliefs? has this interesting nugget which might have some relevance to the discussion of the three definitions of belief above:

There are independent, salient states which belief-talk ‘measures’ to a first approximation.

I also found this YouTube video of him saying that he doesn’t believe that believers really believe. It’s my first embedded video on this blog, too.



From playing games to the nature of knowledge
August 1, 2007, 1:36 pm
Filed under: Academia, Epistemology, Games, Mathematics, Neuroscience, Philosophy

I’ve been reading some interesting things about games, computers and mathematical proof recently. A couple of months ago, it was announced that the game of checkers (or draughts) had been “solved”: if both players play perfectly then the game ends in a draw. That’s sort of what you’d expect, but it’s not entirely obvious. It might have been the case that getting to go either first or second was a big enough advantage that with perfect play either the first or second player would win. So for example in Connect Four, if both players play perfectly then the first player will always win.

Checkers is the most complicated game to have been solved to date. The number of possible legal positions in checkers is 10^20 (that is, a one followed by twenty zeroes). By comparison, tic-tac-toe has 765 different positions, Connect Four about 10^14, chess about 10^40 and Go about 10^170 (some of these are only estimates).

There’s a strange thing about the terminology used. A game being “solved” doesn’t mean that there’s a computer program that can play the game perfectly. All it means is that we know that if the players did play perfectly, the game would end in a certain way. So, for example, with checkers it might be the case that you could beat the computer program Chinook (which was used to prove that perfect play ends in a draw). Counterintuitively, the way to do this would be to play a move that wasn’t perfect. The number of possible positions in checkers is too large for the best move to have been computed in every single one; the team limited the number of computations they had to perform by using mathematical arguments to show that certain moves weren’t perfect without actually having to play through them. So, by playing a move that you knew wasn’t perfect (which means that against a perfect opponent you would certainly lose), you would force the computer into a position it hadn’t analysed completely, and then you might be able to beat it.
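To make “solved” concrete, here is a minimal solver of my own for tic-tac-toe, which is small enough to analyse exhaustively. The Chinook proof needed far more machinery than this (forward search combined with endgame databases), precisely because exhaustive analysis is impossible at checkers’ scale:

    # Computing the perfect-play value of every tic-tac-toe position.
    # This is an illustrative sketch, not anything from the Chinook work.
    from functools import lru_cache

    # The eight winning lines of the 3x3 board, as index triples.
    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != '.' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def value(board, player):
        # Value of 'board' with 'player' to move, under perfect play:
        # +1 win, 0 draw, -1 loss. This is exactly what 'solved' means.
        w = winner(board)
        if w is not None:
            return 1 if w == player else -1
        if '.' not in board:
            return 0
        opponent = 'O' if player == 'X' else 'X'
        return max(-value(board[:i] + player + board[i+1:], opponent)
                   for i, c in enumerate(board) if c == '.')

    print(value('.' * 9, 'X'))  # 0: like checkers, perfect play is a draw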

This is a bit like in chess, where a very good player can beat a good computer program by playing a strategy that exploits the way the computer program works. Chess programs work by looking as many moves ahead as possible and considering what the player might do and what the computer could do in response, etc. However, the combinatorial complexity of the game means that even the fastest computers can only look so many moves ahead. By using a strategy which is very conservative and gives you an advantage only after a large number of moves, you can conceal what you’re doing from the computer which has no intuitive understanding of the game: it doesn’t see the advantage you’re working towards because it comes so many moves in the future.
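A sketch of the mechanism being exploited, in the standard negamax formulation (the function names are illustrative placeholders, not from any particular engine): past a fixed depth the program stops searching and trusts a heuristic evaluation, so an advantage that only pays off beyond that horizon never registers.

    def search(position, depth, moves, apply_move, evaluate):
        # Depth-limited negamax: beyond 'depth' plies the program stops
        # looking ahead and falls back on its heuristic evaluation, so a
        # payoff that lies past this horizon simply never registers.
        legal = moves(position)
        if depth == 0 or not legal:
            return evaluate(position)   # the 'horizon': no lookahead past here
        return max(-search(apply_move(position, m), depth - 1,
                           moves, apply_move, evaluate)
                   for m in legal)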

So at the moment there is no perfect player for either chess or checkers, but the top computer programs can beat essentially any opponent (in chess this is true most of the time, but possibly not every time). This raises the question: how would you know if you had a computer program that played perfectly? For games like chess and checkers, the number of possible positions and games is so enormous that even storing them all in a database might take more space than any possible computer could have (the number of possible positions might be more than the number of atoms in the universe, for instance). Quantum computation might be one answer to this, if it ever becomes a reality, but an interesting suggestion was recently put forward in a discussion on the foundations of mathematics (FOM) mailing list.

The idea is to test the strength of an opponent by allowing a strong human player to backtrack: the human player can take back any number of moves he likes. So, for example, you might play a move thinking it was very clever and forced your opponent into a losing game, but then your opponent plays something you didn’t think of and you can see that your move wasn’t so great after all. You take back your move and try something different. It has been suggested that a good chess player can easily beat most grandmaster-level chess programs if they are allowed to use backtracking. The suggestion is that if a grandmaster chess or checkers player was unable to beat the computer even using backtracking, then the computer is very likely perfect. It’s not a mathematical proof by any means, but the claim is that it would be very convincing evidence, because the nature of backtracking means that any weaknesses in the computer program become very highly amplified, and any strengths it has become much weakened. If it could still beat a top player every time, then with very high probability there are no weaknesses to exploit in it, and consequently it plays perfectly. (NB: my discussion in this paragraph oversimplifies the discussion on the FOM list for brevity; take a look at the original thread if you’re interested in more.)
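As a sketch of the protocol (every name here is a placeholder to be filled in with a real game implementation; there is no actual chess API behind it), the test is just an ordinary game loop with an undo stack on the human’s side:

    def backtracking_match(initial, engine, get_human_move, apply_move, is_over):
        # The test protocol: an ordinary game loop, plus an undo stack
        # that lets the human rewind to any earlier position and try a
        # different line against the engine.
        history = [initial]
        while not is_over(history[-1]):
            move = get_human_move(history[-1])
            if move == 'undo':
                if len(history) > 1:
                    history.pop()                  # take back a full turn
                continue
            position = apply_move(history[-1], move)  # human plays...
            if not is_over(position):                 # ...and if the game goes on,
                position = apply_move(position, engine(position))  # engine replies
            history.append(position)
        return history[-1]   # the final position decides this line of play

Each line the human explores and still loses is another failed attempt to find a weakness, which is where the amplification comes from.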

This leads to all sorts of interesting questions about the nature of knowledge. At one end, we have human knowledge based on our perceptions of the world and our intuitive understanding of things; such knowledge is obviously very fallible. At the other end, we have fully rigorous mathematical proof (which is not what mathematicians actually produce yet, but that’s another story). In between, there is scientific knowledge, which is inherently fallible but forms part of a self-correcting process: scientific knowledge always gets better, but is always potentially wrong. More recently, we have probabilistic knowledge, where we know that something is mathematically true with very high probability. Interactive proof systems are an example of this.
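A concrete instance of this last kind of knowledge is randomised primality testing. The sketch below uses the simple Fermat test; Miller–Rabin is the refinement used in practice:

    # 'Mathematically true with very high probability': each passed round
    # of the Fermat test makes compositeness less likely (Carmichael
    # numbers aside - the Miller-Rabin refinement closes that gap), yet
    # no number of rounds amounts to a classical proof.
    import random

    def probably_prime(n, rounds=20):
        if n < 4:
            return n in (2, 3)
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:   # a Fermat witness: n is composite
                return False
        return True                      # no witness found: very likely prime

    print(probably_prime(2**61 - 1))     # True, and 2**61 - 1 is indeed prime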

The argument above about backtracking in chess suggests a new sort of knowledge, based on the interaction between human intuition and computational and mathematical knowledge. These newer forms of knowledge, and the arguments based on them, are very much in accord with my view of the pragmatic nature of knowledge. My feeling is that interactions like this between computational, non-intuitive knowledge and human intuitive understanding will be very important in the medium term – between now and when artificial intelligence superior to our own, both computationally and intuitively, becomes a reality (which I feel is only a matter of time, though it might not be in my lifetime). At the moment, these new forms of knowledge are really only being explored by mathematicians and computer scientists, because the human side is not yet well enough understood. It will be interesting to see how this interaction between human and computational/mathematical understanding develops as our understanding of how the human brain works improves.

If you found this entry interesting, you might also be interested in two unfinished essays I have on epistemology without truth and the philosophy of mathematics.