Filed under: Civil Liberties, Politics, Surveillance Society | Tags: cctv, privacy, rfid
From The Register:
A school in Doncaster is piloting a monitoring system designed to keep tabs on pupils by tracking radio chips in their uniforms.
According to the Doncaster Free Press, Hungerhill School is testing RFID tracking and data collection on 10 pupils within the school. It’s been developed by local company Darnbro Ltd, which says it is ready to launch the product into the £300m school uniform market.
Hungerhill headteacher Graham Wakeling said the pilot was “not intrusive to the pupil in the slightest.”
Near-real-time use of nationwide CCTV may not be an option now, but the government would like it to be. The two main requirements, of course, would be a central database of every camera and a network allowing access to it from elsewhere than a local control room, shop till etc. Consider these repetitious grumbles from the report:
The [Data Protection Act] does not require CCTV systems to be registered – this is considered to be at the heart of all the problems…No effective systems for registration of CCTV are in place…
[There is] no central register of CCTV systems nationwide…
A system of registration is needed and an initial step towards this would be to create a database listing all CCTV schemes. Such a database would provide information such as location of cameras, their coverage…
Bingo. Step one to a real Bourne-style panopticon. And:
Only in a few of the more recent installations is there remote access… on almost every occasion where police need to view CCTV material, they first have to attend the venue… This is all prior to assessing if the CCTV has even captured the event…
This is assumed to be bad. Again, the top cops have plans:
The delays and difficulties outlined above need not arise if the live and stored CCTV systems were networked and the CCTV material was easily accessible… Consideration needs to be given to the expansion of the networks to include CCTV from shopping centres, transport and commercial CCTV schemes.
There’s even a note about plugging in the cams in the corner shop – strictly with the owner’s permission of course. And it comes with the admission that:
Security, access and audit trails need to be stringent and continuing management scrutiny of the security, access and audit trails will be essential.
No shit. This is actually worse than what Jason Bourne has to put up with, as the spooks would one day have no need to know where he was before they could start following him on camera. Rather, the second he drove the wrong car, used the wrong credit card – or maybe even just lowered the hood of his hoodie – ding! Nearby cams would swivel round and he would be followed in real time until the cold steel bracelets snapped shut on his wrists.
Honest, that’s the plan:
In future, as technology is developed… such a network will allow the use of automated search techniques (i.e. face recognition) and can be integrated with other systems such as ANPR, and police despatch systems… [there might also be links of] transport system cameras to travel cards [and] shop cameras to Electronic Point of Sale (EPOS) systems… actions can be triggered by associated events and post event CCTV images can be quickly searched against other events/data…
Filed under: Capitalism, Economics, Manifesto, Politics, Risk, Surveillance Society
Insurance. Not the most exciting of blog subjects, although that hasn’t stopped Michael Moore’s film Sicko from making $24m in the US alone. It is a subject that fascinates me though, for the simple reason that I don’t understand why people get insurance most of the time.
The nature of insurance is that on average you lose money by having it. It’s essentially just a gamble, and the bookie always weighs the odds against you. Now, there is a real reason to have insurance, which is in the case where you can’t afford the cost if the unlikely thing happens. Insuring your house against being burnt down is a good example of this. Most people can’t afford to rebuild their house from scratch if it gets burnt down.
The other end of the spectrum struck me when I bought a mobile phone a while ago. The salesman tried to persuade me to get insurance for it. Now, the insurance was £20 for one year, which on the face of it seems a fairly minimal cost for the satisfaction of knowing that if your phone is stolen you will get a nice new one. However, the phone only cost me £60, so for it to be worth my while getting that insurance, I’d have to have a 1/3 chance or more of losing that phone during the next year. Obviously a bad gamble. I didn’t get the insurance, and 5 years later the phone is still in my possession. Score £100 for me.
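The break-even arithmetic above can be sketched in a few lines of Python. The £20 premium and £60 replacement cost are the figures from my own case; the loss probability is the unknown the salesman would rather you didn't think about:

```python
# Expected value of a simple gadget insurance policy.
# Figures from the example above: £20 premium, £60 replacement cost.
premium = 20.0
replacement_cost = 60.0

def expected_gain(p_loss):
    """Expected gain from buying the policy, given the probability
    of losing the phone within the year."""
    return p_loss * replacement_cost - premium

# The policy only breaks even when p_loss = premium / replacement_cost.
break_even = premium / replacement_cost
print(f"break-even loss probability: {break_even:.2f}")

for p in (0.05, 1 / 3, 0.5):
    print(f"p_loss={p:.2f}: expected gain = £{expected_gain(p):+.2f}")
```

At any loss probability below one in three, the expected gain is negative – which is exactly why the bookie (the insurer) can afford to offer the bet.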
That mobile phone salesman made me realise that insurance is almost never worth having, because the consequences of a loss rarely justify the amount you end up spending on all the different forms of insurance. Take cars, for instance. You are legally obliged to have third party insurance, but anything beyond that is a total scam. If your car is stolen, you can buy a cheap replacement for a few hundred pounds. It won’t be flashy, but it will be functional, and a few hundred pounds is often less than a single year’s insurance. Comprehensive insurance is even more of a scam: either you accept an enormous excess (which means you end up paying for most repairs yourself anyway), or you pay enormously high premiums. It gets more complicated once you get into the arcana of no-claims bonuses, insuring your no-claims bonus, and the fact that even with an insured no-claims bonus a claim will raise your premiums anyway – but the basic facts don’t change. These points struck home when an old car of mine was stolen and I realised (too late) that claiming for it was going to cost me more in increased premiums over the next few years than the amount being paid out – by quite a lot.
So when is it worth having? Well, it’s worth having if you really can’t cover the costs of something going wrong: i.e. basically house insurance (but possibly not contents insurance), and, in the US, medical insurance. That brings me neatly on to my next point. I just saw Sicko the other day (recommended), and one of the points it makes very well is that US medical insurers will do anything they can to avoid paying out. In the event of an expensive claim (exactly the sort of claim that justifies having medical insurance in the first place), they will investigate everything about your claim. If you have ever said anything untruthful or inaccurate on an application form or on the phone to them, that will void your claim. If they can manage to persuade a doctor to rule that the treatment is too experimental or not guaranteed to work, they won’t pay out. And to ensure that they can persuade doctors to make these rulings, they pay bonuses to doctors proportionate to how many claims they reject. In other words, even when insurance really does matter (and with medical bills often in the tens or hundreds of thousands of dollars in the US, it really matters) it might end up having been money wasted.
Now, finally I’d like to twist this into a rant about capitalism in general, because, you know, I like to rant about capitalism. It’s my thing.
This story about insurance being essentially a scam, an enormous rip off, and one that disproportionately affects the poorest, is a sort of microcosm of the ruthlessness of capitalism. Because poorer people can’t cover losses as much as richer people, they are more in need of insurance. Perversely, this means that they end up (quite rationally) spending more of their money on insurance than wealthier people.
A more recent development is social sorting, where poor people actually get larger premiums or bills precisely because they are poor. I’ve written about this before, here, here and here. This sort of thing just underlines the fact that the nature of capitalism is that the poor get poorer, and the rich get richer. Now, this has always been true of capitalism, but for a while it was masked. The introduction of the NHS and the welfare state in Britain made capitalism slightly more humane, but it is being undermined, even though the NHS and the welfare state still exist, because of social sorting.
The problem is that as companies know more and more about us, they can extract money from us ever more efficiently. Not only can they do this, but in a competitive market they must do it if they can, because otherwise someone else will. Exploitation of every source of profit isn’t a choice for a capitalist in a competitive free market, it’s a basic necessity. So, assuming that it is profitable for a company to, say, offer cheaper insurance to “intelligent” people, they will all have to start doing it. The logic of capitalism then undermines many of what we think of as social goods. We think it is bad that smart people should be given cheaper insurance than others, because it’s not fair, and also because smart people probably have more money; we think it’s bad that poor people should pay more for the same thing than rich people, but that’s not what’s going to happen because it doesn’t fit in with profit seeking.
Finally, to go back to insurance: the consequence of insurance companies having ever more accurate information about us, and being ever better at evaluating our individual risk levels, is that insurance becomes self-defeating. If you can predict entirely accurately who is going to have a heart attack, then there can be no medical insurance against heart attacks: someone who isn’t going to have one won’t pay, because he isn’t going to have one; someone who is going to have one will have to pay anyway, so why give extra money to the middleman? Insurance companies have to get better and better at predicting this sort of thing to stay profitable, but by doing so they bring about their own demise.
In this situation, the only thing to do is to have national insurance schemes organised by the state. The purpose is not to spread your own risk (you can’t change who you are, or your congenital risk of heart attack for example), but to spread the good and bad fortune of our circumstances out amongst everyone. In other words, in the long term, effective insurance cannot be provided by a capitalist system, and the alternatives available to us are ruthless capitalism which by its internal logic must get more and more ruthless to stay profitable, or some sort of socialism.
If you have got this far, well done, I’m impressed! and I thank you. Please do write a comment, if only to say you made it to the end.
Researchers have figured out how to give an entire community a drug test using just a teaspoon of wastewater from a city’s sewer plant.
The test wouldn’t be used to finger any single person as a drug user. But it would help federal law enforcement and other agencies track the spread of dangerous drugs, like methamphetamines, across the country.
Filed under: Civil Liberties, Politics, Risk, Security, Surveillance Society, Terrorism
A while ago, Anthony Giddens wrote a piece on terrorism and security that I replied to rather light-heartedly. Others wrote more serious replies – see my previous entry for links. Today he wrote a rather odd piece on CiF replying to comments on his last “dozen or so articles”. Obviously, since he was replying to so much, his comments were little more than a reaffirmation of what he’d already said, but for what it’s worth, here’s my reply to what he said about terrorism and security.
Whatever some of the bloggers want, Brown won’t commit electoral suicide by lurching towards the traditional left. Moreover, he is correct not to do so.
It’s worth pointing out that terrorism and security is not a left/right political issue. Authoritarians and civil libertarians exist at all points on the left/right spectrum. This is just misdirection – a complete red herring.
For instance, he owes it to citizens to make sure that they are protected against the threat posed by global terrorism. As I said in my article on the subject – written well before the latest attacks…
I love it. As if the fact that some utterly hopeless incompetents entirely failed to carry out what would have been a rare terrorist attack with a fairly small number of casualties (a few days’ worth of traffic fatalities at most) supported his argument. As if one could draw conclusions about risk and probability from a singular event.
- the debate about security in relation to civil liberties hangs a great deal upon how serious one believes the threat actually is.
At first when reading this I thought it odd that he claims the case must be based on the actual threat, given that his argument was built on hypothesis and supposition, but then I took a closer look at the words he used. The debate, he says, hangs upon “how serious one believes the threat actually is” – not on how serious it actually is, not on the basis of evidence, but on the basis of belief.
It has to be analysed in terms of risk, a subject of some complexity, which I have studied in detail for many years.
Yessss!! I love it when they use appeal to authority. It’s especially delicious when their own argument undermines that authority (“written well before the latest attacks”).
Most of the blogs on this issue were hostile to what I said, but I stand by it. Taking high-consequence risks seriously,
But they’re not high-consequence risks. The largest terrorist attack ever killed under 3000 people. That’s no joke, but as I point out again and again, it’s absolutely tiny in comparison to so many other risks we face.
and mobilising against them, are the conditions of reducing them to manageable proportions, whether they be those associated with global warming, avian flu, world financial meltdown or international terrorism. The more seriously we take each issue, the less chance there is of a destructive outcome; but then those who disagreed with the policy in the first place will always say: “You were scaring us unnecessarily – look, nothing significant has happened.”
It is entirely right that the issue of civil freedoms should continue to be intensely debated. The level of risk should be monitored in a continuing way. One contributor asks, what will happen to freedoms that have been in some part suspended when the threat of terrorism recedes?
I would also add that the threat – such as it is – isn’t going to recede for a very long time, so we should take note that changes to our society based on the threat of terrorism have to be considered semi-permanent.
It is a very necessary question. There must be regular reports made to parliament, which can be scrutinised in detail; an independent role for the judiciary in making judgments has to be sustained; public debate must continue. How far anti-terrorist policies might produce an Orwellian state is itself a matter of risk assessment;
Certainly, if one is going to give the government and police powers which could be abused there should be independent scrutiny to minimise the dangers. The point is that when the state itself is the potential threat, you can’t rely on it to make reports to itself and supervise itself. The effectiveness of an independent method of scrutiny depends very much on the precise details of who exactly is doing it, what their relation to those in power is, what powers they have to investigate and overrule state decisions, etc. Can the judiciary be relied on for this sort of role? I’m not sure one way or the other. Either way, a better way of minimising the risk of abuse of powers is to not grant those powers in the first place, and to put practical obstacles in their way so that future governments cannot give themselves greater powers. Not building a surveillance infrastructure would be a good start.
but such procedures, robustly applied, will keep such an eventuality as the remotest of possibilities.
The remotest of possibilities? How remote is this possibility compared to, say, the threat of an effective nuclear, biological or chemical terrorist attack, upon which he based the entirety of his original argument? Presumably he thinks it’s much more remote, but what is the basis of that claim? There have been no successful attacks of that kind despite plenty of will to carry them out, yet there have been plenty of examples of governments turning bad by manipulating the fears of the governed.
That some contributors talk as though such a state is already here, while dismissing new-style terrorism as offering no significant threat, strikes me as absurd.
Filed under: Civil Liberties, ID Cards, Politics, Security, Surveillance Society, Terrorism
Function creep is a very useful concept for understanding government and surveillance. When a new technology is introduced to do one thing (one function), and is later used for an entirely different thing, that’s function creep. It often seems as though governments plan to bring in potentially unpopular technologies by exploiting function creep. It goes like this: the government wants to do X where X requires some new and expensive technology Y. Unfortunately for them, X is fairly unpopular and if everyone knows that they’re spending money on Y in order to do X then there’ll be a huge fuss about it in the papers. So what they do is invent a new and popular thing Z that also requires the technology Y. When they’re building Y they say it’s for Z, but all the time they have in the back of their mind that they’ll introduce X later on.
Function creep is one reason why civil liberties campaigners are so worried about ID cards. The government plans to introduce them as a non-compulsory thing which will only be used in ways that are useful to most people, or for purposes that are popular (like being nasty to immigrants, or catching terrorists). It won’t actually do those things effectively, but that doesn’t matter because that’s not what they’re really for. It’s really there to build a large database on everyone to make the job of the civil service and police that much easier, and it may also undergo function creep in the future to make it compulsory to have one, and maybe later than that to make it compulsory to always carry it, etc.
Today the BBC reports an interesting example of function creep in London.
Police are to be given live access to London’s congestion charge cameras – allowing them to track all vehicles entering and leaving the zone.
The reason given is terrorism:
Home Secretary Jacqui Smith blamed the “enduring vehicle-borne terrorist threat to London” for the change.
There is function creep going on at many levels here. The first is that an infrastructure of cameras built to help manage congestion in London is now going to be used for routine surveillance by the police. Would we have agreed to a network of cameras being built in order to spy on us all the time? Almost certainly not, but they can just apply function creep to a system that’s already there. In this case, it was almost certainly opportunistic rather than planned function creep.
There’s also a hint as to some planned function creep:
But they will only be able to use the data for national security purposes and not to fight ordinary crime, the Home Office stressed.
In other words: don’t complain about this on civil liberties grounds, we’re only going to use it on terrorists. For the moment.
This is suspect for two reasons. First, they might change their minds about it in the future. Alarm bells should be going off when they reassure us it won’t be used to fight ordinary crime, given that the actual dangers associated with ordinary crime are so much larger than the negligible threat of terrorism. Second, they’re already using terrorism laws in ordinary police work:
Since 2001, some 436 people have been charged in relation to terrorism investigations. Almost 200 of these were under standard criminal offences such as conspiracy to murder.
And let’s not forget Walter Wolfgang, the Labour party member who was kicked out of the party conference and detained under anti-terrorism legislation for shouting the word “Nonsense!”.
To finish off with, the article also makes a passing reference to an earlier function creep:
Although charges are only in force at peak times, the system runs 24 hours a day, a TfL spokesman said.
In other words, the system was already being used as a de facto surveillance infrastructure – running when it had absolutely no need to in order to carry out its stated and original function.
Update (18 July 2007): And for anyone who thought I was being paranoid, only one day later plans to extend this scheme nationwide for use in fighting ordinary crime were leaked to the Guardian. SpyBlog has more in depth coverage.
The Beeb reports that:
A financial services firm in Japan has begun offering lower mortgage interest rates to “intelligent” customers.
This is somewhat similar to a story about water companies in Northern Ireland planning to give people with bad credit ratings less time to pay their bills.
For more on “social sorting”, inequality and the long-term dangers of this sort of thing, read my entry on the Surveillance Society.
Update: Bruce Schneier posts a story about how the NSF have awarded a grant to a company to research how to use Google maps photography to spy on our houses, for example to tell whether or not we have a pool (I guess that’s more realistic in the US than over here), and tie that in with other information marketers have on us. We need to be thinking more about the effect this sort of thing is going to have.
Filed under: Business, Civil Liberties, Internet, Politics, Surveillance Society
- Will students be able to opt out of this service?
- Will students data held by the university be given to Google as part of the deal?
- If so, how much of it?
- Will students get targeted advertising while at university?
- What is the value to Google in advertising revenue of this captive audience?
- What is the cost for the university of running its own email system?
- What are the implications of having the university part funded by advertising?
- Will staff email also run on Google’s systems?
- How would we feel about schools and other public sector organisations making arrangements like this with Google?
For background on my concerns about companies holding enormous amounts of data on everyone, see my previous entry on social sorting and the surveillance society.
See also this article in Trinity’s student newspaper.
Good things always come in trilogies. Think about it: Star Wars (the first three, not the shitty second three); The Lord of the Rings. So too with blog entries about the report from the Information Commissioner’s Office on “The surveillance society” (PDF).
In part I, I talked about why the surveillance society is a bad thing when it works correctly. In part II, I talked about why it’s a bad thing when things go wrong. I’m going to finish with some thoughts about opposing it.
The report itself talks about regulating the surveillance society rather than opposing it per se, but it’s on a similar wavelength. It uses the concept of privacy impact assessments (PIAs) and introduces the new concept of surveillance impact assessments (SIAs). Roughly speaking, a PIA is an assessment that has to be carried out for a new technology, law, or whatever, looking at how it will affect privacy and at what can be done to fix any problems. An SIA is the same thing, but looks at the wider issue of surveillance rather than just privacy. Certainly, if we could require the government and big business to produce these assessments and act on them, that would go quite a way towards mitigating the worst effects of the surveillance society.
I’m now done with saying what I think and what the report says; now I want to know what others think we can do about this. Some ideas occur to me, about which I would welcome any comments:
As citizens we can try to understand these concepts, support governments that propose regulation like this, respond to public consultations, etc. But, these are all quite minor things. The key is to bring the issue to light. How can we do that?
Is it possible to oppose the surveillance society without addressing a whole host of other issues? Firstly, can we oppose social sorting (as described in part I) without opposing capitalism itself? The logic of capitalism puts profit over people, and the surveillance society facilitates this enormously. I’m not saying we shouldn’t attempt to soften the harshness of capitalism by addressing things like this, or that nothing short of revolution can solve the problem. I’m just asking: can we meaningfully explain what is wrong with this aspect of the surveillance society without talking about why the logic of capitalism is itself wrong? The same question can be asked about the environment. Secondly, can we oppose the level of government surveillance without opposing their conceptions of crime and terrorism?
Finally, on a rather glum note, I think we should recognise that it is likely that we will fail to stop the surveillance society (which shouldn’t stop us from doing our best to try to stop it), and that regulation will be insufficient in practice. We need to know what we intend to do given that. As the report points out, in a surveillance society there is a limit to what an individual can do. It simply isn’t possible to know where your data is flowing from organisation to organisation, or department to department, for example. Despite this, there would presumably be some benefit to making people aware of the ways in which it operates, the likely consequences, and the things they can do to mitigate the effects.
My earlier entry concentrated on the dangers of the surveillance society when it functions ‘correctly’ (that is the technology works correctly, it does what it’s supposed to do and the systems aren’t abused). Now I want to concentrate on the almost inevitable problems that ensue when it doesn’t work correctly.
A simple example that doesn’t need much comment is CCTV. The government is in love with these, but
During the 1990s the Home Office spent 78% of its crime prevention budget on installing CCTV, and an estimated £500M of public money has been invested in the CCTV infrastructure over the last decade. However, a Home Office study concluded that ‘the CCTV schemes that have been assessed had little overall effect on crime levels’.
This issue of misidentification on police databases was most recently illustrated when the Criminal Records Bureau revealed that around 2,700 people have been wrongly identified as having criminal convictions
This latter point brings us to the issue of the consequences of failures of surveillance systems in the surveillance society. That there will be failures is inevitable, and we need to think about the effect they have on people.
Whether a medical diagnostic, forensic or any other surveillance technique involving the probabilistic and/or predictive identification of targets yields false non-matches depends on two important elements: the sensitivity and specificity of the technology used. Sensitivity is the technology’s ability to identify relevant cases correctly. Specificity (also called selectivity) is the technology’s ability to exclude irrelevant cases correctly. Individual characteristics, organizational settings, test criteria, and domain-specific knowledge will yield different sensitivity and specificity outcomes. Sensitivity and specificity values also depend on the criteria set for the test (for example, whether ultrasound scans for Down’s syndrome in foetuses are carried out by a skilled or semi-skilled operator), and they tend to trade off against each other. Widening sensitivity means identifying a higher number of potential targets, but within that (necessarily) larger identified population there will be a higher number of borderline and falsely identified targets, so selectivity decreases. Hence no test is perfect, and the setting of sensitivity/specificity thresholds is as much a product of political, social and organizational factors as it is of the technology. As such, it is wise to assume that a certain percentage of an identified population will be false negatives or positives. There are hence more values to discuss: the positive and negative predictive values of the test. Positive predictive value is the percentage of true positives among all test positives; negative predictive value, correspondingly, is the percentage of true negatives among all test negatives. The predictive values of a test depend on the accuracy of the indicators on which the test is based.
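The relationship the report describes can be made concrete with a minimal Python sketch. The numbers here are illustrative, not from the report; the point is how predictive values follow from sensitivity, specificity and the base rate:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values of a test, given its
    sensitivity, specificity, and the base rate (prevalence) of the
    condition in the screened population."""
    tp = sensitivity * prevalence              # true positive rate of population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)   # share of test positives that are real
    npv = tn / (tn + fn)   # share of test negatives that are real
    return ppv, npv

# A seemingly excellent test (99% sensitive, 99% specific) applied
# to a rare condition affecting 1 person in 1,000:
ppv, npv = predictive_values(0.99, 0.99, 0.001)
print(f"PPV = {ppv:.1%}, NPV = {npv:.4%}")
```

Even with 99% sensitivity and specificity, when the condition is rare the overwhelming majority of positive results are false – which is precisely the base-rate problem that makes mass screening of whole populations so treacherous.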
DNA gives us a good example to work with, but the same applies to other biometric forms of identification.
For DNA, it has been assumed, even in courts, that DNA identification is infallible. In fact, for forensic identification purposes only a few small segments of the entire DNA string are tested, and only series of repeated base pairs (called ‘stutters’) within the so-called ‘junk’ DNA appear in the profile. Whilst a negative DNA test seems to be a near-perfect tool for acquitting the innocent, false negatives being very rare, false positives are surprisingly likely, and a positive DNA test deserves far more scepticism than it receives in courts.
The claim we sometimes hear about DNA testing is that there is a ‘one in a million’ chance of it being wrong. For the purposes of this article, let’s assume this is basically correct, although it is in fact contested. This figure is the random-match probability: it means that the chance of a randomly chosen person’s DNA matching the sample from the crime scene is one in a million. At first this seems like very convincing evidence, but consider this: if the claim is true, then for any given piece of DNA from a crime scene there will be on average 60 people in the UK whose DNA matches it, and approximately 6,000 people worldwide (the population of the UK being 60m and of the world 6bn).
Now, in traditional investigations this is not so much of a problem, because what would usually happen is that the police arrest someone based on their suspicions and evidence, test their DNA and find the match. It’s unlikely (although not impossible – I believe cases of this happening are already known) that this would lead to a false match. However, once you have a national DNA database with a significant proportion of the population on file, the temptation is to take a bit of DNA, test it against the database, and see if anyone matches it and lives nearby, or whatever. Again, in each particular case this might be unlikely to cause an injustice, but if this approach to finding the criminal becomes widespread, it becomes almost certain that people will be wrongly convicted on the basis of false positive DNA identification.
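Both the population figures and the database-trawl danger can be checked with a few lines of Python. This is a sketch under the assumptions above: a one-in-a-million random-match probability and (for the trawl) independence between profiles:

```python
# Random-match probability assumed in the text: one in a million.
p = 1e-6

# Expected number of people whose DNA matches a given crime-scene
# sample, using the round population figures from above:
print(60_000_000 * p)      # expected matches in the UK
print(6_000_000_000 * p)   # expected matches worldwide

def prob_false_hit(database_size):
    """Chance that a speculative search of a DNA database matches at
    least one innocent profile, assuming profiles match independently
    with probability p."""
    return 1 - (1 - p) ** database_size

for n in (100_000, 1_000_000, 4_000_000):
    print(f"{n:>9,} profiles: P(innocent match) = {prob_false_hit(n):.1%}")
```

With a million profiles on file, a speculative trawl has roughly a 63% chance of throwing up at least one innocent match; with four million, it is nearly certain. The “one in a million” figure that sounds damning for a single suspect becomes almost meaningless when the whole database is searched.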
All identification systems produce false positives and negatives, and it is not always appreciated that although these rates might be tiny, in a surveillance society those tiny chances could lead to problems incredibly often. Suppose we had a system of money transfers and identification based on, say, iris or fingerprint recognition, and suppose it only went wrong one time in a million (far better than current systems actually manage). If 60m people were relying on this system at a rate of, say, fifty transactions a day (not unreasonable if you imagine it being used universally for things like access to services), we would expect 3,000 people a day to be seriously inconvenienced, or worse. If the problem is that you just can’t buy a packet of crisps, that’s one thing. If the problem is that you can’t use public transport that day, that’s worse. If the problem is that you can’t access medical care for the day, that’s potentially life-threatening.
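The arithmetic behind the 3,000-a-day figure is worth seeing written out; the failure rate, user count and transactions-per-day are the hypothetical round numbers from the paragraph above:

```python
# Hypothetical figures from the example above.
failure_rate = 1e-6            # one transaction in a million goes wrong
users = 60_000_000             # everyone in the UK enrolled
transactions_per_day = 50      # ID checks, payments, access to services...

daily_transactions = users * transactions_per_day
failures_per_day = daily_transactions * failure_rate
print(f"{daily_transactions:,} transactions/day "
      f"-> {failures_per_day:,.0f} failures/day")
```

Three billion transactions a day turn a one-in-a-million error rate into thousands of failures daily – the general pattern being that universal systems multiply even negligible error rates into a constant stream of individual harms.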
The general problem is that these systems are efficient from the point of view of the amount of time and money it costs business and the government to do things, but they do so at the expense of causing significant problems to a significant minority. This seems to be a general issue affecting the surveillance society – it’s fine for most people most of the time (beneficial even), but creates and exacerbates problems seriously for a sizeable minority.
I contend that these systems really only significantly benefit the government bureaucracy and the profits of big business. They provide relatively little benefit to the ordinary person (usually just a minor saving of time), but create serious problems for a minority and subtly alter the way our society functions as a whole. It is therefore something we should oppose as strongly as possible. Political pressure needs to come from the bottom up on this issue.
Hmm, this has become a longer entry than I intended. I may have to write a third one.
I think for me the most significant idea in the report was about profiling, or what is called social sorting. I want to explain the danger here with reference to an unrelated example, which is racial profiling in crime detection.
Suppose it were the case that black men were twice as likely to be drug users as white men. In fact, it is more like the other way round, but for the purposes of this example bear with me. Now, if the police have enough manpower to stop and search a fixed number of people for drugs, and they are judged according to the number of successful arrests they make (which are reasonable assumptions in this society), what is their best strategy? The answer is simple: they should devote all their manpower to stopping and searching black men. For the purposes of the argument, assume that stop and search is the only way drug use is uncovered. This would lead to a scenario where everyone who was in prison for drug use was black, even though black men would form only a tiny percentage of the drug users in the country (the black population of the UK is only about 2%, so even at twice the rate black men would account for only around 4% of drug users).
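The hypothetical numbers in this scenario (which, to repeat, are not real figures) work out like this:

```python
# Hypothetical figures from the text, NOT real statistics:
# black men are 2% of the relevant population and are assumed
# (contrary to reality, as the text notes) twice as likely to use drugs.
black_share = 0.02
white_share = 0.98
relative_rate = 2.0  # hypothetical relative rate of drug use

# With base rate r among white men: black users ~ 0.02 * 2r, white ~ 0.98 * r.
# The base rate r cancels when taking the fraction.
black_users = black_share * relative_rate
white_users = white_share * 1.0

black_fraction_of_users = black_users / (black_users + white_users)
print(round(black_fraction_of_users, 3))  # ~0.039
```

So a policy that fills the prisons exclusively with black drug users is targeting a group containing only about 4% of the drug users in the country.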
In this scenario, have the police done wrong? I think the answer is that, given their mandate, they haven’t. They have behaved in a way that objectively speaking maximises the number of drug users arrested. If they had a racially neutral policy, and stopped and searched black men no more frequently than white men (for simplicity I am ignoring women and ethnic groups other than black and white), then they would arrest roughly half as many people, and therefore leave many more potentially dangerous drug users free. If they did this, they would be criticised.
So what’s the problem? There are two. Firstly, this is manifestly not fair. A sign of a fair system would be that the proportion of black and white drug users in prison matched the proportion of black and white drug users in society as a whole. Secondly, this system leads to a further injustice in that white people would be free to use drugs (or at least carry them) without fear of being arrested; the law would essentially not apply to them.
So what’s the resolution? That’s more difficult. If the police were to do something like stopping and searching black men twice as often as white men, then there would be some white drug users in prison and some black drug users, which would disguise the problem but not actually address it (whilst also decreasing the number of arrests made). I believe the correct answer is that stop and search is not actually an appropriate strategy for the police to use on either black men or white men, and they should pursue people based on evidence. There are other critical points you could make, though: for example, why should we judge the police based on how many successful arrests of drug users they make?
Now how does this relate to the surveillance society? Well, one of the things that happens in the surveillance society is that, based on the large amounts of information that businesses and the state gather on us, they profile us and assign us to categories. These categories affect our subsequent interactions with them. For example, as the report mentions, call centres will now make you wait for a longer or shorter period of time based on whether you are a good customer for the company (spend lots) or a bad one (spend little, don’t pay your bills on time). When this sort of profiling becomes pervasive, a sharp division will emerge between those who are profiled as good or profitable and those who are profiled as bad. As the example above indicates, rational assessment will actually make this division disproportionate to the underlying differences, pushing it further to the extremes. Life for the worse off will become even worse, and life for the better off will become even better.
The report puts it like this:
Social sorting increasingly defines surveillance society. It affords different opportunities to different groups and often amounts to subtle and sometimes unintended ways of ordering societies, making policy without democratic debate. As the section on urban infrastructure shows, invisible, taken-for-granted systems of congestion charging and intelligent public transit both sort the city into groups that can travel relatively freely and others who find travel difficult and at the same time can be used for crime control and national security. No one has voted for such systems. They come about through processes of joined-up government, utility and services outsourcing, pressure from technology corporations and the ascendancy of actuarial practices.
So the surveillance society masks fundamental changes in the way we live our lives. These changes are equivalent to a hidden change in the aims of society. Since businesses must pursue profit over any public good, these powerful new techniques for profit maximisation will lead to a society which values profit above anything else in all spheres of life. Let me elucidate that.
At the moment, it would be unthinkable to designate a large percentage of the population as ‘the underclass’ and deny them access to – or at least degrade the quality of their access to – the products and services we all enjoy. But in the surveillance society this happens in fact if not in name, whether we like it or not, because it is more profitable for businesses to operate that way, and they have the ability to.
Now I wanted to draw attention to this particular aspect of the surveillance society because it is one that is not so often talked about. People tend to be more interested in the abuse of surveillance systems rather than what happens when they function properly, which is just as bad if not worse.
There are various other points I’d love to make having read the report, but I don’t want this entry to go on forever. So instead I’ll say: if you have a whole day free go and read the report. If you have an hour free, read a bit of it. There is a section which first describes a hypothetical week in the life of a family in 2006, and then one in 2016. The first is based on surveillance systems that currently exist (with footnotes to explain what they are), and the second is based on those that have been proposed and on identifiable trends (also with footnotes explaining who has proposed them or where the trend was identified). Both descriptions are quite frightening.
I may write a followup post at some point with further thoughts on the report, and some views about campaigning against the surveillance society.