I received the following criticism of my previous entry on arationality and honesty. Consider the case of car drivers. On any individual drive, there is a very small chance of causing a death that is not the driver's fault (there is also a much larger chance of causing a death that is their fault, but let's leave that aside). Now, according to my theory, how is a driver supposed to behave in the two cases where (a) the drive doesn't end up killing anyone, and (b) it does? The criticism is that the theory appears to say that in case (b) the person should feel bad and personally responsible, even though it was only chance that it happened to them rather than to someone else.
I have various responses to this. The first thing is to point out that the theory doesn’t tell people how they should feel, it speaks more to how they should act and more importantly how they should integrate actions and their consequences into their ongoing thinking. You can’t exert the same sort of conscious control over how you feel about something as you can over how you act and think about something. (That said, you can exert a less direct form of control over feelings and I’ll get back to that at the end of this entry.)
With that in mind, let's first consider the decision before the drive: whether or not to take the drive knowing that there is a small chance of killing someone. The decision here looks like a straightforward cost-benefit analysis: does the importance of making this journey by car rather than by another form of transport outweigh the cost of killing someone multiplied by the probability of it happening? It's unlikely that most people think about it even this clearly; more likely, they just think "it's very unlikely" and drive whenever they want to. But even this level of analysis misses the full picture. As well as the external consequences of the action, you also have to bear in mind the internal consequences. You know that if you kill someone with your car you will feel guilty about it, even if it is not your fault. Most likely, that guilt will stay with you for the rest of your life. So there is a selfish component to the cost-benefit analysis as well: taking account of how the outcome will make you feel in both cases. Alright, so where does this get us? Well, it is a demonstration of the method of not modelling ourselves as rational agents. Our own emotional reactions are a particular case (and not one I wanted to focus on in the previous entry) of our arational cores. We can understand these, and take them into account in our actions. Incidentally, doing so doesn't make us any less human; we would still feel those emotions, we would just take better account of them in our planning.
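The cost-benefit reasoning above can be sketched in code. This is a minimal illustration with entirely made-up numbers: the probability and cost figures are assumptions invented for the example, not real statistics.

```python
# Expected-value sketch of the decision to drive, with hypothetical numbers.
# All values are illustrative assumptions, not real accident statistics.

P_FATAL = 1e-7           # assumed chance this one drive kills someone
COST_DEATH = 1_000_000   # external cost of a death (arbitrary units)
COST_GUILT = 200_000     # internal, selfish cost: lifelong guilt, even if blameless

def expected_cost_of_driving():
    # Both the external and the internal consequences are weighted
    # by the probability of the accident occurring.
    return P_FATAL * (COST_DEATH + COST_GUILT)

def should_drive(benefit_of_journey):
    # Drive only if the benefit of making the journey by car
    # outweighs the expected total cost.
    return benefit_of_journey > expected_cost_of_driving()
```

The point of including the guilt term is the one made above: even an agent doing a pure cost-benefit calculation has to model its own arational emotional reactions as part of the costs.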
The next decision to analyse is what to do in the case that you have just accidentally killed someone with your car. Your feelings about it are a given (although see the last part of this entry): guilt. The question is: what should your actions be, and how do you integrate this into your ongoing thinking about the world? One reaction would be to reorganise your way of thinking so that the unavoidable feelings of guilt are suppressed, or at least do not contribute to your ongoing thinking. Another would be to become permanently depressed about it. The former makes it possible to go on living your life relatively normally, at least outwardly, but would most likely completely change the way you relate to the world. For example, you might reorganise your thinking so that guilt feelings generally were suppressed. But what would the consequences of that be? The latter reaction, on the other hand, makes it difficult to go on living, which doesn't appear to be a valuable outcome for anyone. A better option would perhaps be to accept the feelings you have but channel them into a positive activity (like becoming a road safety campaigner or public transport proponent). This option needn't involve self-delusion, but does allow you to continue living your life. It's not delusional because you accept responsibility for the consequences and you know that you are doing the campaigning work as a way to assuage your guilt; it still helps you to go on living and has positive social effects.
The example of the car driver came up in a conversation about fascism and integration propaganda. I was arguing that if people took more responsibility for their actions, fascism couldn't have happened. In other words, if people hadn't allowed themselves to be excused from fighting fascism for various reasons, and had accepted responsibility for the consequences of inaction, it couldn't have happened, because dictatorships require passive consent to continue functioning. As with the car driver, there are three options when a fascist or dictatorial state is taking power: go along with it and rationalise it as the right thing to do; become depressed and inactive (which is essentially going along with it too); or fight it, at possibly great cost to yourself. These are somewhat analogous to the three options facing the car driver above who has accidentally killed someone, and as in that case, the third option is the best.
So that about sums up my response to this criticism. I want to finish by saying a few words about psychology. I’ve made some pretty strong assumptions in this entry about the ways in which we can or can’t exert control over ourselves. I’ve assumed that we can exert control over the way we act, the way we think, and the way we integrate knowledge and events into our ongoing thinking, but that we can’t exert control over the way we feel. None of these assumptions are likely to be completely correct. For example, an addict cannot control their actions (or at least, that is one way of looking at it). And we know that we can affect the way we feel about things (although this process can take many years). Worse, in my previous entry I was advocating a theory that said we can’t make use of this sort of grammatical construct – “we can control the way we think” – because the “we” is the same object and so the verb “control” doesn’t have the usual semantics here. Indeed so, but despite that it feels as though by reading about and trying to understand the world, our modes of thought can be affected by ideas, and these ideas have greater or lesser effect when directed at certain parts of our thinking rather than others. An idea that tries to change the way we feel about things, it seems, is less likely to be effective than an idea that tries to change the way we reason about things at a more conscious level. This consideration may provide a way out of that difficulty.
Despite the theory being based on assumptions which are likely false, it still seems as though they might be true enough that the conclusions derived from them are useful. Whether or not that is so is an empirical question, but at least to me it feels like there is something there worth considering.
Now I’ll end on a question.
If there were a pill you could take that would mean you would never again do a bad thing, would you take it?
Perfect rationality is impossible, and it is important to recognise the limits of the scope of the concept of rationality. I start from an observation that many would not agree with: that there is no such thing as truth (which I've argued elsewhere to some extent). Truth is just a heuristic concept that helps us to function in everyday situations. Dropping the notion involves us in some considerable difficulties, which I discuss in the previous link, but these difficulties are not insurmountable. It is possible to have a useful conception of epistemology free from the notion of truth. In this entry, I criticise the idea that we can be ultimately rational, and look at the consequences of taking this seriously for ethics and morality.
Epistemology is in some sense a specific form of rationality: it concerns only thoughts and ideas, whereas rationality is supposed to also encompass actions. An action can certainly be instrumentally rational. Someone is thirsty, so they pick up a glass, turn on the tap and drink: this is instrumentally rational, that is, rational with respect to a given set of goals which are not themselves analysed. But actions cannot be rational in and of themselves; they must be rational relative to a set of ends, and ultimately those ends cannot be described in terms of rationality. In consequence, people cannot be ultimately rational. One consequence is that Vulcans couldn't exist: you cannot act by logic alone. I call the aspects of our behaviour that cannot be analysed in terms of rationality arational. This is in distinction to irrationality, which is about doing the opposite of what rationality dictates. Examples of arationality abound: emotions, tastes, and so on. But there are also boundary cases, such as the fact that we keep breathing rather than just stopping.
So can we analyse the arational? To a certain extent, yes: we can say more than nothing about it, but there are no complete answers. Later, this leads on to the ethical concepts of honesty and responsibility, which I believe are related to arationality.
To start with, let's take the trivial observation that we humans are nothing so special. We're essentially "meat machines": machines built by our genes to replicate themselves (this too is a simplification, but bear with me). We're built on a physical substrate subject to physical laws; it's surprising that we can do anything like thinking at all. It's instructive to consider the extent to which we could call the behaviour of other animals rational. Is a dog rational? What about an amoeba?
So given that, rather than talk using high level concepts like “reasons” (X believed Y and so took action Z) that presumably are supposed to be understood in some undefined way as related to the internal state of the central nervous system, I’m just going to talk about decisions which can be analysed externally. We can say in some way unambiguously that individual X took action Z, they made a decision to take that action. The decision need not be conscious, remember we’re not talking about internal states here. So we repeatedly make the “decision” to breathe, just as the amoeba makes “decisions” to extend its pseudopodia, or what have you.
Now this way of looking at things helps us to see what we can and can't say about arationality. To some extent it can be analysed. Obviously we mostly keep choosing to breathe because we would be unsuccessful meat machines if we didn't, and our genes wouldn't be replicated. This isn't to say that we must do these things, just that you would expect most individuals to make these sorts of decisions most of the time, because they're meat machines formed from recombinations of genetic material that tended to act in this way (there are some assumptions there, but that's another story). Evolution also gives us a point of view on when we can't analyse arationality. A new individual, either because of a particular recombination of genetic material or because of a mutation, exhibits a new type of behaviour. This happened for reasons we can understand (maybe just chance), but until the individual's interactions with the environment determine their success or otherwise in reproducing, we can't say whether it was a good behaviour or not (from the point of view of the genes). Until that point, the behaviour just is a behaviour, and the individual is just an individual that exhibits that behaviour. What more can be said before the success of the behaviour is tested in the world? In conclusion, looking at arationality in terms of behaviours, we can obviously analyse much of it in a scientific way, but ultimately in certain cases all we can say is that such-and-such is the behaviour exhibited by such-and-such individual.
At this point I want to bring in the moral and ethical aspects. We like to think of morality and ethics as being about right and wrong, but just as there is no truth, and just as rationality is not entirely straightforward, there is no such thing as right and wrong. There are only decisions. There are decisions individuals make for themselves, and decisions that a political entity makes for others (social mores, codes of conduct, rules, punishment, etc.). A poor person steals from a rich person, the rich person is so rich they never notice they’ve had something stolen. Has a wrong been done?
Rather than ask whether the poor person made the right decision, I want to talk about the types of decisions that have been made here, by whom, and what considerations bear on them. First, the thief has made a decision to break the rules. Second, the society has made a decision to punish people who are caught breaking the rules. We wouldn't like to say the thief did wrong, because nobody was hurt by the action and the thief's life was made better as a consequence. On the other hand, that doesn't mean the decision was necessarily right, because if it was right then surely the society would be wrong to make the decision to punish those who are caught. It's clear that talking about this case in terms of right and wrong is a surefire way to end in confusion. Instead it's a calculus. The society makes its choice to punish thieves because, it believes, there would otherwise be a breakdown of order. The thief makes the decision to break the rules knowing the decision of society, and must take responsibility for this action. If they are caught, they will face punishment; if not, they won't.
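The "calculus" here can be made concrete with a toy expected-value sketch. All the numbers are hypothetical, introduced only to illustrate the structure of the two decisions, not to describe any real legal system:

```python
# Toy model of the two decisions: society sets a punishment and a
# detection rate; the thief weighs the gain against the expected cost.
# Every number here is invented for illustration.

P_CAUGHT = 0.3      # assumed probability that a thief is caught
PUNISHMENT = 100    # cost of the punishment to the thief (arbitrary units)

def thief_steals(gain):
    # The thief breaks the rules knowing society's decision to punish:
    # steal only if the gain exceeds the expected punishment.
    return gain > P_CAUGHT * PUNISHMENT
```

With these numbers the expected punishment is 30, so a theft worth 40 to the thief goes ahead while one worth 20 does not; society's only levers in this sketch are raising `P_CAUGHT` or `PUNISHMENT`, which is exactly the sense in which neither decision is "right" or "wrong", just a calculus.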
Suppose now that the thief was a rich person stealing from a poor person. The analysis above seems unchanged, and indeed it is. One thing that may change is that the society may choose to allocate its resources differently towards catching the one or the other sort of thief. For example, a society may decide to put more resources into catching poor thieves stealing from the rich than rich thieves stealing from the poor, or it may do it the other way round. That’s politics. In my view, society today tends more towards the former whereas it ought to tend more toward the latter, and I make political decisions based on that. These are my decisions, which are ultimately arational. I could put forward reasons for this view, but those are ultimately judged on arational criteria. Others may differ.
Equating the identity of an individual with the actions they decide to take in the circumstances in which they find themselves gives us a useful way of looking at two problems: free will, and morality. There is a classical problem: free will cannot be consistent with determinism (if an action was determined by physical laws it cannot have been freely chosen, because it couldn't have been otherwise). There is an extension that says that all decisions must be either determined or random. It goes like this. If an action wasn't determined by physical laws, then it would effectively meet the physical definition of randomness. In exactly the same circumstances (including the experiences, desires, preferences, state of mind, etc. of the individual concerned at the time of making the decision) you could have different outcomes, making the decision effectively meaningless (not dependent on anything at all), or, in short, random. However, if we identify an individual with the decisions they make, it doesn't matter whether those decisions are determined (or random); they are still the decisions of that individual (it is just that the identity of the individual is also determined). A forced (unfree) choice would be one that no individual in the same circumstances could have made differently (e.g. you cannot choose to ignore the force of gravity).
This last point has a moral and ethical component. If we accept all the choices we actually make are not forced in this sense, then we have to take greater responsibility for them. An ugly choice that we were put under great psychological pressure to make is still our own choice because it is we ourselves who are choosing to respond to that psychological pressure. It is not an external thing acting on us in the same way that gravity is. Even if we were offered the choice between one option which would lead to our death, and another option, it’s still a free choice because we are free to choose how to weight the significance of our own death. Evolutionary processes explain why so many people will weight the significance of their own death so highly, but since the identity of the individuals themselves is the output of that process, we cannot consider that process as an external force acting on us.
The general point here is one of taking responsibility for one's own actions, and being honest about their status as one's own actions. Often, we attempt to excuse our actions by giving the reasons why we took them, as if those reasons were themselves external forces acting on us which we couldn't ignore. However, as we have seen, an action itself cannot be rational; it can only be instrumentally rational with respect to an arational core. We rarely think to analyse our own arational cores deeply, but the considerations given here suggest we ought to be more aware of them and identify with them more explicitly. It may be that to do so, we must become more aware of our own logical inconsistency (even incoherency). We are often in the situation, for example, of wanting a thing and also wanting not to want it, or even believing that we don't want it. If we make the mistake of thinking of ourselves as having a core identity that is coherent and consistent in some sense, then we are inevitably led into confusion. It may be this that underlies the phenomenon of cognitive dissonance.
Following this logic through and acting on it is actually a very difficult thing to do. It means really coming to terms with the inconsistency of our very identities (a difficulty the word itself hints at). It means realising that much of what we do, we do without reason (our arational cores), and that we have made choices that we both like and dislike not by mistake or because external forces acted on us, but because that is our nature. It means taking responsibility for every choice we have made, being honest about them, and analysing ourselves. Self-analysis is unavoidable if we do not have a consistent and coherent unitary core (and can be done by introspection and by looking at our choices and identifying those choices with ourselves). Finally, it means living with all that inconsistency.
In particular, it can be very difficult to honestly appraise the inequality of society and our own place in it, and live with that. Most reading this will have been the recipients of more than their fair share of luck and will have benefited disproportionately from the work done by everyone: born into (relatively) wealthy families, receiving (relatively) good educations, etc. It would be easy to fall into the habit of thinking, as many do, that we deserve what we have, because that is an easier idea to live with. Choosing not to engage in this sort of self-deception requires us to honestly face up to our arational cores, and the experience may not be pleasant. Why do we not give away all our wealth to those who are in need? If we even asked ourselves the question, we would probably find some reason that explained how it couldn't be otherwise. Perhaps the prospective recipients of our wealth wouldn't be able to make correct use of it, perhaps charities are essentially corrupt and wasteful, etc. I don't want to say that we don't do this because we are selfish. It is more like the choice to keep breathing: there is no need to find a reason for it, it is just a decision we keep making. The danger in finding a reason why we don't give all our wealth away to others who need it is that it may stop us from giving any away. If there were a good reason why we shouldn't give our money away, then presumably we shouldn't give any away. Similarly, if there were a good reason why we should give our money away, we probably ought to give almost all of it away. If we must act by the one sort of reason or the other, then we're faced with the choice of giving away all or nothing, and most would give nothing in that situation.
This may explain why the poor tend to give more money away than the rich. Suppose the choice were not between all or nothing, but between nothing and everything except the minimum necessary for one's own survival and that of one's family, dependants, etc. For the rich, this choice would be between giving nothing and maintaining their lifestyle, or giving and changing their lifestyle to one of poverty. For someone who is already poor, it wouldn't involve any change of lifestyle to give away enough money to leave them poor, as they already are. By choosing to live by the idea that we do what we do because we have a reason for doing so, we put ourselves in the absurd situation that those for whom it would be easiest to give are least likely to do so. The alternative is to say that the amount we choose to give away is our own choice and is not dictated to us by our reason: to take responsibility for the choice, and not to pretend that we are acting by a coherent code that dictates our behaviour.
Knowledge, here particularly self-knowledge, is always better than delusion, even when it hurts. When we allow ourselves to be deluded, things always end worse than when we are clear and honest about what is going on. There are many applications of these ideas: religion in this view is obviously problematic because it attempts to externalise our moral choices (indeed, our wish to externalise them may explain why religions are so prevalent); much ethical and moral philosophy, secular or otherwise, is problematic for the same reason: it supports the pretence that there is, or could be, such a coherency, which stops people from coming to terms with the lack of it. Finally, I want to focus on just one more example: propaganda.
In a previous entry I talked about Jacques Ellul's book "Propaganda" and the idea that intellectuals are most subject to propaganda because they want to believe that they understand the world, but, lacking sufficient time to really do so, they rely on answers provided to them by others (putting them in the power of those others). The other aspect of propaganda is what Ellul calls "integration propaganda". The idea is that once you have participated in an action, you will rationalise that action and create a justification for why it was the right thing to do. The propagandist only needs to get you to participate in an action and you will do the intellectual reorganisation yourself. This is an aspect of propaganda that most people don't understand (believing that propaganda is just a way of getting people to believe something by repeatedly saying it, or some other such simplification). It is essentially a form of cognitive dissonance: nobody wants to consider themselves the bad guy. Or, in the framework of this essay, people want to think of themselves as coherent and consistent, so if they took an action they must have had reason for doing so. Recognising that we are not consistent, rational beings working to some perhaps unknown moral code then has the potential to free us from integration propaganda. Taking our own arationality and inconsistency as a given, we would no longer feel the same requirement to create a self-justifying rationalisation, and so the propaganda would not have its desired effect.
The difference in my current thinking has to do with the connection between opposing the killing of animals, and being a vegetarian. I don’t think there is much of a connection, in fact. That is, the fact that it is wrong to kill animals does not mean one ought to be a vegetarian. It doesn’t even make vegetarianism a good idea.
I previously wrote something slightly silly about vegetarianism on this blog, but the discussion it led to was quite interesting. The focus of the discussion was different to the focus of the article linked to above, but in the comments I wrote the following, which I think is somewhat relevant to the Marxist take of the article:
I do think though, that at some point in humanity’s future we will all choose (but not be forced by law) to stop eating animals for ethical reasons… The more general issue is about what and who you care about. As our personal circumstances improve (we’re lifted out of poverty for example), we gain the possibility to be compassionate towards others that we didn’t have before.
In a previous entry I posed an ethical question for vegetarians: would it be OK to eat meat if you could grow it without an animal? Well, it turns out that people are already doing this. In fact, this man is already doing it:
At the moment, it's not that appetising; here is a frog steak they made:
It is also rather expensive:
The only problem was that no one was interested in eating his fish nuggets, perhaps because his tiny goldfish filets matured in something called fetal calf serum.
Matheny estimates that a kilogram of laboratory meat would cost about half a million dollars if it were grown in calf serum.
In order to make faux meat a reality, then, one of the first tasks is to develop an inexpensive ersatz nutrient solution from plants or mushrooms. Maitake mushrooms, for example, have already proved to be a possible alternative.
Some other interesting links:
- From Innovation Watch
- The Guardian got in on the act (incidentally, it’s a nice case of nominative determinism that the Guardian’s science correspondent is called Ian Sample)
A thought strikes me: would it be OK to eat meat if it came from an 'animal' without a nervous system (central or otherwise)? This may seem a silly question to ask, because all the animals whose meat we eat actually do have nervous systems, but what if our understanding of biochemistry were to improve to the point where we could, say, grow a steak without growing a cow? Or if we could knock out the combination of genes that produces an animal's nervous system and get an animal to give birth to what is essentially just a meat sack? My feeling is that even a vegetarian would have to agree that the former is acceptable, although possibly not the latter.
The second question that follows on from this is: would it be OK to eat meat from an actual animal if it was possible to grow its meat without killing a whole animal? My feeling on this is that in this circumstance nobody could justify killing and eating animals.
So are we destined for a future of ethical meat eating?
Postscript: The other question this raises is: what about animals with a minimal nervous system, like a snail, say? How do vegetarians feel about eating these? A snail has about 20k neurons, compared to about 100k for a fruit fly, 1m for a cockroach, 21m for a rat and 300bn for a human, according to this unsourced Wikipedia entry. It seems to me that if you're willing to swat a fly you should be willing to eat a snail.
Post-postscript: One other question raised is would it be ethical to eat human meat that had been grown in this way? Anyone for ethical cannibalism?