The Samovar


The surveillance society II
November 15, 2006, 11:44 pm
Filed under: Civil Liberties, Politics, Surveillance Society

Some more thoughts on the report from the Information Commissioner’s Office on “The surveillance society” (PDF).

My earlier entry concentrated on the dangers of the surveillance society when it functions ‘correctly’ (that is, the technology works correctly, it does what it’s supposed to do, and the systems aren’t abused). Now I want to concentrate on the almost inevitable problems that ensue when it doesn’t work correctly.

A simple example that doesn’t need much comment is CCTV. The government is in love with these, but

During the 1990s the Home Office spent 78% of its crime prevention budget on installing CCTV and an estimated £500M of public money has been invested in the CCTV infrastructure over the last decade. However a Home Office study concluded that ‘the CCTV schemes that have been assessed had little overall effect on crime levels’.

Another example:

This issue of misidentification on police databases was most recently illustrated when the Criminal Records Bureau revealed that around 2,700 people have been wrongly identified as having criminal convictions

This latter point brings us to the issue of the consequences of failures of surveillance systems in the surveillance society. That there will be failures is inevitable, and we need to think about the effect they have on people.

Whether a medical diagnostic, forensic or any other surveillance technique involving the probabilistic and/or predictive identification of targets yields false non-matches depends on two important elements: the sensitivity and specificity of the technology used. Sensitivity is the technology’s ability to identify relevant cases correctly. Specificity (also called selectivity) is the technology’s ability to exclude irrelevant cases correctly. Individual characteristics, organizational settings, test criteria, and domain-specific knowledge will yield different sensitivity and specificity outcomes. Sensitivity and specificity values also depend on the criteria set for the test (for example, whether ultrasound scans for Down’s syndrome in foetuses are carried out by a skilled or semi-skilled operator) and they tend to trade off against each other. Widening sensitivity means identifying a higher number of potential targets, but within that (necessarily) larger identified population there will be a higher number of borderline and falsely identified targets, so selectivity decreases. Hence no test is perfect, and the setting of sensitivity/specificity thresholds is as much a product of political, social and organizational factors as it is of the technology. As such, it is wise to assume that a certain percentage of an identified population will be false negatives or positives. There are hence two more values to discuss: the positive and negative predictive values of the test. Positive predictive value is the percentage of true positives among all test positives; negative predictive value is correspondingly the percentage of true negatives among all test negatives. The predictive values of a test depend on the accuracy of the indicators on which the test is based.
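To make the quoted definitions a bit more concrete, here is a quick back-of-the-envelope calculation (my own illustration, not taken from the report, and the numbers are invented): even a test with 99% sensitivity and 99% specificity produces mostly false alarms when the thing it is looking for is rare in the population being screened.

```python
# Rough illustration (not from the report): how the predictive values of a test
# depend on sensitivity, specificity and, crucially, how rare the target is.

def predictive_values(sensitivity, specificity, prevalence, population):
    targets = population * prevalence
    non_targets = population - targets

    true_positives = targets * sensitivity
    false_negatives = targets - true_positives
    true_negatives = non_targets * specificity
    false_positives = non_targets - true_negatives

    ppv = true_positives / (true_positives + false_positives)
    npv = true_negatives / (true_negatives + false_negatives)
    return ppv, npv

# Hypothetical screening test: 99% sensitive, 99% specific,
# looking for something that affects 1 person in 1,000.
ppv, npv = predictive_values(0.99, 0.99, 0.001, 1_000_000)
print(f"Positive predictive value: {ppv:.1%}")   # roughly 9% -- most positives are false
print(f"Negative predictive value: {npv:.4%}")   # roughly 99.999%
```

In other words, the value of a positive result depends heavily on how common the target is among the people being tested, not just on the quality of the test itself.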

DNA gives us a good example to work with, but the same applies to other biometric forms of identification.

For DNA, it has been assumed even in courts that DNA identification is infallible. However, for forensic identification purposes only a few small segments of the entire DNA string are tested, and only series of repeated base pairs (called ‘stutters’) within the so-called ‘junk’ DNA are shown in the so-called profile. So whilst a negative DNA test seems to be a near-perfect tool for acquitting the innocent, false negatives being very rare, false positives are surprisingly likely, and a positive DNA test ought to be met with far more scepticism than it currently is in courts.

The claim that we sometimes hear about DNA testing is that there is a ‘one in a million’ chance of it being wrong. For the purposes of this article, let’s assume that this is basically correct, although it is in fact contested. This figure is the false positive rate: it means that if the test says the DNA from the crime scene is the same as the DNA of suspect X, then there is a one in a million chance that it isn’t in fact the same DNA. Now at first this seems like very convincing evidence, but consider this. If the claim is true, then for any given piece of DNA from a crime scene there will be on average 60 people in the UK whose DNA will match it, and approximately 6,000 people worldwide (the population of the UK is 60m and of the world is 6bn).
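The arithmetic behind those figures is simple enough to check (this is just my own sketch, using the numbers in the paragraph above):

```python
# Sketch of the arithmetic above: a 1-in-a-million false positive rate
# applied to whole populations (figures as used in the text).
false_positive_rate = 1e-6
uk_population = 60_000_000
world_population = 6_000_000_000

print(uk_population * false_positive_rate)     # 60.0 expected chance matches in the UK
print(world_population * false_positive_rate)  # 6000.0 expected chance matches worldwide
```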

Now in traditional investigations this is not so much of a problem, because what would usually happen is that the police would arrest someone based on their suspicions and evidence, test their DNA and find the match. It’s unlikely (although not impossible, and cases of this happening are, I believe, already known) that this would lead to a false match. However, once you have a national DNA database with a significant proportion of the population on file, the temptation is to take a bit of DNA, test it against the database and see if there is anyone who matches the DNA and lives nearby or whatever. Again, in each particular case this might be unlikely to cause an injustice, but if this approach to finding the criminal becomes widespread it becomes almost certain that people will be wrongly convicted on the basis of false positive DNA identification.
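A rough way to see why trawling a database is so different from testing a single suspect: the chance of at least one false match grows rapidly with the number of comparisons made. This is my own illustration, assuming independent comparisons and a hypothetical database of a few million profiles:

```python
# Probability that a database trawl throws up at least one false match,
# assuming independent comparisons (an illustration, not a forensic model).
false_positive_rate = 1e-6

def chance_of_false_match(database_size, p=false_positive_rate):
    return 1 - (1 - p) ** database_size

print(f"{chance_of_false_match(1):.6%}")          # one targeted suspect: ~0.0001%
print(f"{chance_of_false_match(4_000_000):.1%}")  # trawl of a 4m-profile database: ~98%
```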

All identification systems have false positives and negatives, and what is not always appreciated is that although these rates might be tiny, in a surveillance society those tiny chances could lead to problems being encountered incredibly often. Suppose we had a system of money transfers / identification based on, say, iris recognition or fingerprint recognition. Suppose these systems only go wrong one time in a million (which is enormously optimistic compared with current systems, which do nowhere near as well as this). If 60m people were relying on this system at a rate of, say, fifty transactions a day (which is not unreasonable if you imagine these systems being used universally for things like access to services etc.), we would expect 3,000 people a day to be seriously inconvenienced (or worse) by the problem. If the problem is that you just can’t buy a packet of crisps, that’s one thing. If the problem is that you can’t use public transport that day, that’s a worse problem. If the problem is that you can’t access medical care for the day, that’s potentially a life-threatening problem.
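The 3,000-a-day figure just comes from multiplying through (a sketch of the arithmetic, using the assumptions stated above):

```python
# The arithmetic behind the 3,000-a-day figure, using the assumptions above.
population = 60_000_000
transactions_per_person_per_day = 50
failure_rate = 1e-6

daily_transactions = population * transactions_per_person_per_day  # 3 billion a day
expected_failures_per_day = daily_transactions * failure_rate
print(expected_failures_per_day)  # 3000.0
```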

The general problem is that these systems are efficient from the point of view of the amount of time and money it costs business and the government to do things, but this efficiency comes at the expense of causing serious problems for a significant minority. This seems to be a general issue with the surveillance society – it’s fine for most people most of the time (beneficial even), but it creates and seriously exacerbates problems for a sizeable minority.

I contend that these systems really only significantly benefit the government bureaucracy and the profits of big business. They provide relatively little benefit to the ordinary person (usually just a minor saving of time), but create serious problems for a minority and subtly alter the way our society functions as a whole. It is therefore something we should oppose as strongly as possible. Political pressure needs to come from the bottom up on this issue.

Hmm, this has become a longer entry than I intended. I may have to write a third one.


3 Comments

Can you please tell me the source of your quotation “This issue of misidentification on police databases was most recently illustrated when the Criminal Records Bureau revealed that around 2,700 people have been wrongly identified as having criminal convictions”

thank you

merle gering

Comment by Merle Gering

Hi Merle,

Yes it’s page 29, section 9.11.6 of the Surveillance Society report from the Information Commissioner’s office (linked to at the top of the entry).

Comment by Dan | thesamovar

twitter.com/ptrilla I have been unjustly accused of being a convicted felon only because some thieves who stole my wallet committed mortgage fraud, credit card fraud, check fraud, loan, escrow and probate fraud, along with having me mistaken for committing crimes I never committed, so when the cops pulled me over as I was going about my day, a list of hidden frauds which I was unaware of on my public record finally exposed themselves when I got arrested over those issues, based on the problems with computer systems and Internet database hackers.

Comment by ptrilla




Comments are closed.


