Filed under: Politics | Tags: assange, conservatism, denialism, global warming, intellectuals, propaganda, public relations, rape, smear campaigns, smoking and lung cancer, tobacco industry
A problem for progressives is that intellectuals are more (small-c) conservative than you might expect. The reason is that intellectuals are frightened of making mistakes, and of being caught making them, because of their engagement in intellectual arguments and their investment in the outcomes. Combine this fear of error with the recognition that in most cases they lack sufficient facts to determine the truth accurately, and the effect is that intellectuals will often not take a stand on important issues. The problem is that this leaves the field clear for those who will take a stand, or leaves the decision to those who make decisions by default, i.e. the powerful. In other words, the intellectual refusal to take a stand on issues where they are not certain amounts to a de facto stand in favour of established authorities and ways of doing things.
The danger of this passivity in response to issues that are too difficult to be sure of is that it leaves you open to a form of propaganda, or public relations strategy, that has been well known and exploited for a long time.
“Doubt is our product,” proclaimed an internal tobacco industry document in 1969. “Spread doubt over strong scientific evidence and the public won’t know what to believe.”
It is very easy to spread doubt, and often extremely difficult to prove a positive claim beyond all possible doubt.
The counter-argument says that not taking a stand is reasonable: if we don’t know, we shouldn’t take a stand. This seems sensible, but on issues where a political decision needs to be taken, we have to think about the effects of both possible positions. On the one hand, if we refuse to take a stand we’ll never be subject to the criticism that we expressed a wrong opinion, but we’ll be letting the political decision be taken by those with established authority or power. On the other hand, if we take a stand we might have some input into the political decision, but we run the risk of being proven wrong at some point in the future.
Being proven wrong, though, is not in itself a political problem; it’s a personal one. So, thinking of our actions as political rather than personal, we shouldn’t worry about the upset caused to us if we turn out to be wrong. Rather, we should only refuse to take a stand if, by recklessly taking one without sufficient information, we might increase the chance of a harmful political decision. Going further, in most cases there is no neutral position: the lack of a position is equivalent to a position in favour of the status quo. Let’s take a look at some examples.
As quoted above, the tobacco industry’s tactic was to create doubt about the link between smoking and cancer. Although the overwhelming majority of the evidence supported this link, some evidence was found that did not. What would be the effects of taking a stand for or against the link? On the one hand, taking no stand means not pushing for regulation of the tobacco industry, allowing them to do whatever they want, at the risk of many more cases of lung cancer. On the other hand, taking a stand means pushing for such regulation and reducing the number of people who smoke, at the cost, if the link turned out to be spurious, of cutting the tobacco industry’s profits for no reason. Given that the evidence was very strongly supportive of the link, the cost-benefit analysis is clear: one should take a stand in support of it.
A similar tactic is being used today by those who want to deny a link between human industrial activity and global warming. Again, the overwhelming majority of evidence is in favour of the link, but some evidence goes against it. Not taking a stand means letting things go on as they are at the moment, and putting the entire world at risk. Taking a stand, however, is also costly: it means potentially cutting back on emissions and consequently on economic growth, which also has an effect on millions of lives. In both the cases above, there is no neutral position: you have to take a position, and either position you take has potentially dire consequences if you’re wrong. In this case, not taking a position means taking a position in favour of the status quo, i.e. gambling that there is no link and that there will be no effect.
There are many more examples of this situation that crop up all the time. I have come across it in debates with people about industrial action, about alternatives to capitalism, and many more. I want to finish though with an example that is playing out right now and is causing a great deal of tension within the left: the case of the rape allegations against Julian Assange.
History, including quite recent history, has shown that governments are willing to smear their opponents with allegations of sexual misconduct. Given this, it was no huge surprise when allegations of rape against Julian Assange surfaced after the release of documents embarrassing to several governments. On the face of it, this would seem a prime case for ignoring the allegations. However, there is a problem: the downplaying or denial of accusations of rape is itself a serious problem. And indeed, many articles arguing that we should ignore these rape allegations have been of a decidedly misogynist character. The response of some (not all) feminists against such articles is then quite reasonable, but I feel it unfortunately misses an important point about the way smears work: mud sticks.

Consider, as above, our options on taking a stand for or against Assange. If we take a stand in favour of him, we support those who would release important documents revealing the way governments behave in secret, but we risk supporting someone who may be a rapist. If we take a stand against him, we show that the left will not stand by those who attempt to take on those in power; we let those in power disarm us with accusations of rape. Yes, of course it’s possible that he did it, but there is no neutral stance here: not supporting him is equivalent to saying that you will not stand by anyone who is attacked by governments in this way. And that’s true even if it turns out he did it. Let’s take the logic a step further: what happens if we support him, and it turns out he did it? Our supporting him is a form of political support. The case against him is a legal one, and will hopefully proceed based on the quality of the evidence, regardless of our political support. If he did it, and the evidence is sufficient, he’ll be found guilty. We’ll feel bad for supporting him, but in terms of justice no harm will have been done.
On the other hand, if we don’t support him, even if he is subsequently shown to be innocent, the damage will have been done.
Given the history of sexual smears, then, we surely must support Assange in this case. And it’s important to say that this attitude is not based on poring over the smattering of details of the case that have been leaked; it is not to accuse the women involved of being CIA agents (it’s possible that they were, but even if the allegations are part of a smear campaign it doesn’t follow that the women have acted in bad faith, and we don’t need to take a position on that one way or the other); and it is not to have an opinion on what Sweden’s sex crime laws should be.
Finally, the argument of this post, that intellectuals are more prone to propaganda because they are afraid to hold a position that might be wrong, should be compared with an earlier article in which I argued that intellectuals are prone to propaganda because they think they understand things better than they do, and oversimplify them. On the face of it that looks contradictory, but I think it’s not: they are two types of error that are not in conflict. On the one hand, we can make the mistake of overconfidence in what we think we know and our understanding of it; on the other hand, we can refuse to take a political stand in cases where we could be proved wrong. These can coexist in that different people can make the two different types of mistake, and a single person can make them in different situations. It’s even possible to make both mistakes simultaneously: for example, someone who reads and understands a climate change denial article that makes a valid point, and then declares that there is uncertainty about global warming. They are making both the mistake of overconfidence in their understanding of global warming (they’ve only read the one article, and lack the expertise or breadth to weigh it against all the other evidence) and the mistake of refusing to take a political position one way or the other, thereby implicitly supporting the status quo.
Some pictures from my lunch at Pierre Gagnaire with Alastair.
We start with our menu and aperitifs, a selection of fruit juices: grape and ginger, lemon and apple.
Our first selection of amuse-bouches:
- Potato and seaweed
- Almond and ginger biscuit
- Anchovy ‘salad’
- Tuna and blackberry macaron
- Black bread stick with olive oil and thyme
Our second selection of amuse-bouches:
- Beef carpaccio with salmon eggs
- Cheese stick with wasabi
- Bacon, cheese and hazelnut
- “Palourde Belle-Ile” – clam on a bed of octopus salsa
- “Merlan brillant recouvert d’un crumble vert; moelle de boeuf, pop corn” – whiting with a green herb crumble, beef bone marrow and popcorn
- “Royale iodée au jabugo” – mussels wrapped in jabugo ham with iodized custard
- “Soupe de pied et oreilles de cochon, roquette et lentilles” – soup of foot and ear of pig with rocket and lentils
- “Sablé de figue sèche au Moscato d’Asti, raisin liqueur” – dried fig biscuit with booze
Our fish course:
- “Effeuillée de lieu jaune de ligne déposée sur une marinière de coques agrémentée de poireaux, d’énokis et de haddock” – line caught pollack on chlorophyll jelly on clams, leeks, enoki and haddock
- “Grenouilles et quasi de veau façon Poulette, gâteau de foie blond; laitue farcie d’ail noir. Chouquette de parmesan, salade d’automne” – Frog and veal fillet cooked “like chicken”, yellow liver cake, lettuce stuffed with black garlic, puff pastry of parmesan, autumnal salad
First set of desserts:
- Fig compote with caramel ice cream and sugar tuile
Second set of desserts (not shown):
- Ginger biscuit with marinated orange pieces
- Poached pear with pear ice cream and hazelnuts
Third set of desserts:
- Something with apple, spiced tuile and bits
- Chocolate tower filled with tiramisu, pistachio cream, vanilla cream, and spoon of posh nutella
- White truffle bon bon
- White chocolate and orange
- Fruity sort of thing
- Spongey thing
- Chocolate and caramel
- Red fruit pastille
And we finished with chocolate, coffee and a tisane for me.
Total price: 130 euros each. Worth every penny.
Alastair in the restaurant:
Filed under: Academia, Epistemology, Manifesto, Philosophy, Science | Tags: peer review, scientific method
You often hear people responding to the claims of homeopathists and the like with a request to see the peer-reviewed literature supporting their claims. The idea is that peer review is a unique characteristic of proper science that is essential to make science work. I recently read a history of peer review in science (subscription to Trends in Biotechnology is unfortunately required) that throws some doubt on this. Fascinatingly, peer review didn’t really take off until 1959, with the invention of the photocopier: before then, it was simply too expensive to make copies of papers to send out for review. There was something like peer review before then, in that the editor would ask the opinions of friends and colleagues, but this was far from the standard we have for peer review today. A few journals would make the effort to copy papers and send them out for review, but it was rare.
Since science before 1959 was, in fact, extremely successful, and able to develop very well, it cannot be the case that peer review is an essential component of science. That’s not to say we should do away with it: it serves an important function today, essentially that of reducing the workload of the editor. So many papers are submitted for publication that some method for choosing which papers should and shouldn’t be published is necessary, and the reviews help the editor make that decision. What the exact function of peer review today ought to be, and how it should evolve, was the subject of an interesting debate in Nature in 2006 (no subscription needed for this one, I think).
So the people demanding peer-reviewed papers from the homeopathists should perhaps question what it is exactly that they’re doing. My guess is that they use it as an easy, but wrong, way to distinguish science from pseudoscience. It makes for a good quip. The danger, though, as argued in this article, is that it’s in fact easy to cherry-pick bad ‘scientific’ papers that have passed the peer-review process. By using peer review as the ‘gold standard’ of scientific enquiry, they allow the justification of all sorts of nonsense.
This poses a problem though: if it’s not peer-review, what is it that makes science work? There have been many theories about this, most focussing on methodological aspects of scientific enquiry, for example Popper’s falsifiability criterion. I want to suggest an alternative point of view that doesn’t focus on methodology:
Science is the study of problems that can be addressed with the techniques available to us.
Techniques here should be understood to include both technology and, for example, mathematical or statistical tools. In this view then, physics is the ‘hardest’ science because the problems it looks at are the easiest. The variables in physical problems are much less interdependent, there are many fewer of them, and their interactions are much simpler than those in, for example, the study of social systems or economics. Our simple, mostly linear mathematical techniques and statistical methods based on independent variables then apply very nicely to physical problems.
Why does science work in this view then? The answer is that science is part of a social process: scientists do science, they are genuinely interested in finding out what happens, and even though individual scientists will sometimes be very wrong, will try to promote a bad theory that doesn’t have much support, or will attack a rival theory, eventually the egotistical motives involved will fade whereas the usefulness of good theories will last. This process may take a generation before change happens, but it eventually works. This process relies on there being at least some bias in favour of good theories over bad ones, even if the bias is small in comparison to the egotistic biases in favour of established theories. However, if available techniques are not good enough to introduce this bias, the good theories won’t win out over the bad ones, even in the long run. Thus, in unscientific subjects we shouldn’t be surprised to see cyclical variations in theories, where the period of the cycle is related to the duration of a scientific career. Young researchers will look at the work of the old researchers with disdain and propose radical alternatives, to which they will become attached. In turn, their theories will be rejected by those that come after them, and so on without ever stabilising.
In a way this is obvious: if there is no evidence to support or reject theories, then we cannot improve them. But it’s important to note that whether there is or is not any evidence to support or reject theories is to a large extent a function of how easy the problem being studied is. The fact that evidence in physics is so much stronger than it is in economics is hardly a reason for scorn of economists.
This point of view on why science works may help scientists look at researchers in other fields both more humbly and more reasonably. Humbly, in that they should realise that the reason their field of study is so advanced is that the problems are so easy, and reasonably, in that it should help them to see more accurately why it is that some fields make better progress than others, without getting distracted by methodology.
Filed under: Manifesto, Philosophy, Politics, Religion | Tags: belief, cognitive dissonance, confabulation, god, integration propaganda, intentional stance
A couple of years ago I wrote a post cheekily entitled Nobody believes in God which, following some recent conversations with friends, I want to revisit. This will actually make the third time I’ve gone back to this idea (see the first and second). I think this is because there’s an important kernel of a good idea there, but also much that is unclear.
Unfortunately, and perhaps to some tediously, the biggest problem is that the term “belief” is not well defined. In my original article I wrote that “To believe something, you have to act in a way that is consistent with the belief being true. Otherwise, you’re just saying that you believe it.” This is an important sense of the term “belief”, somewhat akin to Daniel Dennett’s notion of the “intentional stance”, but it became clear that this isn’t a definition most people would agree with, and it is problematic. That most people wouldn’t agree with it is not necessarily fatal, but there are other problems. Notably, it assumes a consistency and unitary quality that people don’t have. All of us hold beliefs that are inconsistent, and we wouldn’t want to say we have no beliefs just because of that. Similarly, all of us take actions that are inconsistent with our beliefs. Imagine that someone tells you that if you move your leg they will kill you, and then taps you just below the knee with a hammer: you will, despite your sincere belief in what they say, move your leg. It’s not possible for you not to; the signal never even reaches your brain but is instead turned into a motor command by your spinal cord. This is a trivial example, but it suggests a general point: we do not have a single identity to which we could ascribe our beliefs; we have multiple systems in our brain that are sometimes under conscious control and sometimes not. Ramachandran describes the case of a split-brain patient one half of whose brain believed in God while the other didn’t.
So using an individual’s actions as a guide to their beliefs cannot give us the full story on belief, but that still leaves us in a quandary: what can we go on? Since people lie, we obviously can’t uncritically go on what they say. So if we can’t go on what they say or what they do, what is left? The only option remaining, it seems, is to give up the idea of a single, unitary thing called “belief” that people have, and to admit that belief is a vaguely defined, context-specific thing that can, depending on circumstances, violate almost every intuition we might have about it. Here again there is a risk of throwing out the baby with the bathwater and giving up use of the term completely. No, clearly the idea of belief has some use, but we have to be careful to say what type of belief we’re talking about, and, when reasoning about statements involving “belief”, to be aware of the practical consequences of the different types of belief in different contexts. The next task is to unpick some of those types of belief and how they affect our actions. In the remainder of this entry I’ll discuss some of the types of belief, suggested to me in recent discussions, that are relevant to religious/theistic belief.
The first conception of belief I’ll talk about is my original one, broadly corresponding to Dennett’s intentional stance, that someone believes something if they act consistently with it being true. For example, the shortest way from my office to the street is through the window, but I take the more laborious route via the stairs because I believe in gravity. There’s no doubt this is a useful conception of belief that has practical consequences. And, I stand by my original contention that most religious people don’t believe in God in this sense.
There is a complication though, which is that in the sense stated above, I “believe” there is a china teapot orbiting the Sun between Earth and Mars: I never do anything that is inconsistent with there being one. Somehow we want to exclude beliefs such as this, but that quickly becomes problematic if we want to talk about religious beliefs, for obvious reasons. There is a kernel of religious belief that cannot ever come into conflict with our actions. The idea that God “exists” and “created the universe” cannot be in conflict with anything we do. Many people today will, when questioned on their religious beliefs, agree only to this minimal kernel of faith and nothing more, but will vehemently defend the idea that they do believe in God. So do they believe in God or not? Here’s where context comes in handy: we can say of these people that although they may believe in God in some highly rarefied symbolic context, in the everyday context of the real world they have no belief in God. If God didn’t exist, nothing they did would be any different.
One suggestion for defining belief was precisely this: by the term “belief” we mean exactly that component of our thoughts that doesn’t come into conflict with reality. If it could be proved (or disproved) we wouldn’t have to believe it. This corresponds nicely with the notion of faith developed in Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy” where the Babel fish causes God to cease to exist by proving his existence and thereby eliminating faith. I feel that this definition does indeed correspond to some aspect of what we ordinarily mean by the term “belief”, and it has some resonance with the Christian relationship with “doubt”, but I’m not sure there are many theists who would be happy with a definition of belief as that which is inconsequential.
The second conception of belief that came up was several ideas around a common theme: the idea that belief is a form of post hoc rationalisation of our actions. This is related to the notion of confabulation in psychology. This is easily shown in an experiment with split brain patients where you tell one half of their brain to go and get a coffee (by whispering in one ear so that the other ear can’t hear it), and then you ask the other half of the brain why they went to get a coffee: they will typically reply that they were thirsty, or felt like a coffee, or something like that. In other words, having observed their own actions, they create a rationalisation for them. With probably a minimum of reflection, we can see that this isn’t just a peculiarity of split brain patients: we all do this from time to time.
Jacques Ellul used this notion of belief following action as an explanation for an important part of the power of propaganda, what he called “integration propaganda” (which I discuss in the last paragraph of this entry). Essentially, if you can get people to engage in certain actions, especially ones with a powerful emotion attached to them, then they will tend to create their own beliefs for why this was the right thing to do. The alternative to believing that it was right is believing that it was wrong, or meaningless, and most people don’t want to do this (cognitive dissonance). This form of belief applies very nicely to religion, in which you are first enculturated into various patterns of behaviour (going to church, prayer, etc.) and are thereby integrated into the social community of the Church. There is a strong incentive later in life not to admit, even to yourself, that you don’t believe in God, because to admit that would be to admit that all those actions were meaningless, or even worse, actually wrong.
So what type of belief is this? At first glance, it is a purely symbolic form of belief: you will say “I believe in God” if asked. Does the belief have consequences though, and in what contexts? It’s difficult to distinguish in this case, because a belief such as this came with a period of social integration which, especially for people raised as religious, will have defined their whole moral outlook. Which consequences of this process are due to the belief, and which to the social process that created the belief? It was put to me that religious people often do extraordinary things, such as not having sex before marriage, hating gay people, reading the Bible, and praying, and that we should take this as evidence that they believe. I countered that we can equally well explain these as consequences of being brought up in, and belonging to, a religion which encourages such behaviour (think of the extraordinary things that people did and thought in atheist Soviet Russia). I don’t think either of us was satisfied with this answer; it’s too difficult to pick apart why people did those things.
The question then, to me, is what explanatory power does the idea of “belief in God” have beyond membership of a religion? My argument is that it has very little power to explain behaviour because social pressures are able to explain so much. Twisting Laplace, I have no need of the hypothesis that people believe in God. You can say that I’m trying to have my cake and eat it – or rather, eat my cake and have it, as the expression should be in order to make sense – on the one hand I want to say that belief in God doesn’t lead to any actions, but when faced with an action that does appear to stem from a belief in God I want to write it off as the consequence of religious enculturation. How could I be proved wrong? But the defender of the idea that people really do believe in God has the same problem: when a religious person does something that isn’t consistent with their beliefs, they want to say that it’s because they are inconsistent, or weak, or because their belief doesn’t really apply in that case. I will put forward a few reasons, that may not be conclusive, to suggest that the social process is more important to explaining behaviour than the belief:
- Historically, behaviour by religious people is very inconsistent. In the past, and still very frequently today, Christians hated gay people, abhorred them as sinful, sometimes citing some of the nastier bits of the Bible in favour of this. These days, more liberal Christians will cite rather sophisticated theological reasoning to explain away the passages in the Bible that refer to killing gay people (apparently, Jesus “fulfilled the law” so we don’t have to follow it – if you can make any headway on that good luck to you). In any case, if both of these types of Christian “believe in God” in the same sense, then this belief clearly isn’t reflected in their attitude towards gay people. “Belief in God”, in other words, has no explanatory power with respect to attitudes towards homosexuality. We can play the same game with any number of attitudes and behaviours. What is left?
- In my original entry, I argued that if you really believed in hell, you would never sin, and yet people often do. It has been countered, innumerable times, that these days most Christians either don’t believe in hell, or don’t believe that they’ll be going there. But, this is sophisticated theological reasoning, and I’m pretty sure that throughout most of the history of Christianity, the masses certainly did believe in hell, and thought that they might well go there. And yet, they sinned. Is our “belief in God” so different to theirs? Could it be that they didn’t believe in God, but that Christians today do? Perhaps, although that makes belief in God a very recent phenomenon, which seems somewhat counterintuitive. I would rather say that the belief (or lack of it) in God is the same, but that there has been an innovation in the language game of belief, coming about as a response to the increased interaction of religious people with atheists. Since the belief in God is the same we can apply my arguments about belief in hell to people a couple of hundred years ago and find that they don’t believe in God (beyond their membership of the religion in question), and that consequently, neither do people today.
Although I may be able to explain the actions of people raised as religious without using the idea that they believe in God, and therefore show that the post hoc rationalisation form of belief exists only at the symbolic level, there is still a problem with saying that there is no such thing as consequential belief in God. The problem is: what about people that come to believe in God later in life, perhaps having been raised as atheist? They didn’t experience a correlative moral training by theists, and so their actions, such as they are, must be explained by the belief in God. And since these actions are often similar to those of the people brought up as religious, shouldn’t we interpret both as stemming from their belief in God? This is certainly a conundrum, and perhaps even an antinomy, but it applies equally to those who believe that people believe in God (see bullet points above). I suspect that people who have converted later in life are often those who have experienced some sort of psychological trauma, and want the safety and acceptance of a social group, and perform the actions because they are an entry requirement to the social group. If anything, we would expect them to be more extreme in their performance of the rituals, because they have more to prove: they are trying to come in from outside, rather than being allowed to stay in by default of having grown up inside. This does seem rather supercilious though, and I have little to no evidence that it’s the case.
To finish with, I want to outline a position similar to my first followup to the original article, which is also similar to a suggestion made to me recently, and that at least partly resolves some of these difficulties. The position is this: we hypothesise that we think at a symbolic and non-symbolic level, and that there are interactions between these levels, but that they are not tightly integrated. The reality is undoubtedly more complex than this, but perhaps this hypothesis can shed some light anyway. Belief in God then impacts only directly on our thinking at the symbolic level. Typically, this symbolic belief in God is instilled by integration into a community (although it can also occur through other means). Normally, our actions are guided by pragmatic reasoning, or by routine, and these actions can be correlative with our thinking at the symbolic level having both been influenced by our upbringing.
The suggestion that was put to me was this: Infrequently, a decision presents itself which cannot be decided by either pragmatic reasoning or by routine. For example, what do we think about stem cell research? Stem cells are sufficiently far removed from anything we have ever thought about before that neither pragmatic reasoning nor routine thinking can enable us to come up with an answer to this. It is in these unusual instances that our belief in God can have an effect, because we are forced to pass from our beliefs to our actions rather than the other way around. My response to this though is that belief in God doesn’t have strong explanatory power here: the situation is really that the stated set of (self-contradictory) principles that someone believes in have to be resolved in a particular case by making a choice one way or the other. The fact that these decisions can sometimes go one way, and sometimes another (for example, in religious schisms that come about in these situations) shows that the resolution is to some extent arbitrary, or at least decided by other factors.
In conclusion then, I prefer to look at religion as a purely social phenomenon, which happens to include the notion of God, but that could equally well be replaced with any other purely symbolic object and serve all the same purposes. Everything that I’m interested in as regards religion can be understood in this way: we can see how it is that religion is capable of both great evil, and great good, just as any other form of power can be exercised in ways that are harmful or beneficial. In fact, we can treat religion purely politically, and I think if we do this we’ll see that it’s harmful and beneficial precisely where political systems are harmful or beneficial. Harmful when hierarchical, centralised, authoritarian, and beneficial when democratic and driven by ordinary human needs and desires. See also my old manifesto entry on religion, and followup on religion, atheists and hierarchy.
Why is this important? Because it changes the stance that atheists should take towards theists. For a start, we should probably not expend much energy in trying to undermine the idea that God exists, because this idea is probably not primary for most religious people. It may have some use as a focal point for atheistic communities, may serve as a hook on which to hang one's atheism, and may help to give the final push to people who have already decided to give up their religion, but it is unlikely to be a major force in converting people. More important than this rhetorical or didactic reason, we shouldn't focus on God because belief in God is largely inconsequential in comparison to religion as a political phenomenon. If the atheist community could apply their considerable talents to political analysis, perhaps really great things could be achieved. As it stands, atheists are as likely as not to be politically reactionary and supportive of authoritarian and hierarchical structures.
Filed under: Politics | Tags: poor laws, tory spending cuts, welfare reform, workhouses
In 1834, responding to the increasing cost of poor relief, which was considered to be unmanageable, the Poor Law Commission recommended that those receiving help should be "subjected to such courses of labour and discipline as will repel the indolent and vicious" (quoted in A. L. Morton, "A People's History of England"). Consequently, the workhouses were run according to the principle of less eligibility: as a form of deterrent, the conditions in them should be worse than was possible outside (which was certainly the case in the Andover workhouse, in which the inmates had to suck the marrow out of bones they were supposed to be grinding for fertiliser). Furthermore, it was held that if someone was "requesting to be rescued from that danger out of the property of others, he must accept assistance on the terms, whatever they may be" (see p24 here). This was the system described by Dickens in Oliver Twist, and it led to mass agitation, partly through the Chartist movement, including the storming and burning of workhouses.
On reading about this recently, I was reminded of George Osborne’s recent comments, that “The welfare system is broken. We have to accept that the welfare bill has got completely out of control… People who think it is a lifestyle to sit on out-of-work benefits … that lifestyle choice is going to come to an end.” Just as it was in the 19th century, however, the rhetoric is quite out of touch with reality.
To those of you who somehow have managed not to come across Slavoj Žižek, the Slovenian psychoanalytic philosopher / cultural theorist, here's a nice, short animation that covers the basics. If you haven't read him or seen him talk before, you should really take a look (and it's quite fun even if you have).
I haven’t been blogging much for quite a long time, mostly because I haven’t had the time to write any long, carefully argued posts. So as not to let this blog become totally moribund, I thought I might post a few shorter, incomplete entries. So, let’s kick off with a video someone sent me last night giving a very brief (10 minute) statement of “lack of belief” as a definition of atheism. It’s sort of unarguable in some sense. (Or is it just that it says things I agree with?)