No One Leaves the Herd for Long

Much of what we call ‘common sense’ is fossilised thought: arguments and ideas left behind by brains long turned to soil. The world is round, wear a seatbelt, germs make you sick, democracy is good, it’s unacceptable to strike your spouse, it’s acceptable to strike your child.[1] You’ll find people who disagree with each and every one of these assumptions but, to the extent that it’s possible to say this, they are all ‘officially’ true. They are ‘the way of things.’

Some of these assumptions are empirical and others are normative: the world is round; one ought to wear a seatbelt. For normative statements especially, it is consensus that determines ‘ought’, not whether something is inherently right or wrong. In Europe, it used to be common sense that a man should chastise his wife; nowadays it isn’t. In the UK, it’s still common sense that a parent can chastise their child but, in a few decades, it may very well not be. Slavery was once common sense but nowadays most people think benefiting from slave labour is unacceptable unless it’s sanctioned through a smartphone contract. Does that mean that slavery once was right but now is wrong?

The obvious concomitant of something being common sense is that most people haven’t really thought about it because the thinking has already been done for them. It’s cultural learning. How many people could say now how they know the earth isn’t flat? Or take the age of the earth: before 1600, a reasonably well-educated European would have cited the Bible to answer with 6000 years. In 2018, they would tell you that it’s vastly more ancient than that, citing something they remembered from school or saw on TV. In both cases, the prevailing view is based on the authority of an institution; church or science.[2]

Diet is a signal example of cultural learning and one of the most ingrained. The challenge of changing behaviour for the vegetarian and vegan (veg*n) movements has been to politicise common sense, to show that eating animal products is a choice rather than merely the way of things. This is done, for instance, by marshalling evidence on the consequences of animal agriculture; for the global environment, for people’s health, and for the animals themselves. Veg*ns also seek to show the arbitrariness of according respect and consideration along species lines. In this latter respect, the veg*n movement is engaged in what may be the final stage of expanding the circle of ‘moral concern’ (to borrow Peter Singer’s phrase), which began with our own family or tribe and widened over millennia to include larger communities, nations, and — for some — all humans, irrespective of petty differences of skin colour, religion or culture. This development was anticipated by the utilitarian philosopher, Jeremy Bentham, when he wrote in 1823:

The day may come when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may one day come to be recognised that the number of the legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate.[3]



The corpse of Jeremy Bentham at UCL (No, really)

Naturally, the veg*n project has encountered resistance and hostility at every stage, just as other great emancipatory campaigns, such as women’s rights, race relations, LGBT rights, and the peace movement, have. This is understandable in each instance, since not only have vested interests often been at stake (property rights, the slave economy, the church, the military-industrial complex) but so has people’s deeply ingrained common sense. I’ve experienced the annoyance, defensiveness, and hostility of flesh eaters on any number of occasions, face to face and online. I don’t usually bring the subject up anymore unless asked, but even just being at a meal where I am not eating what others are eating can be taken as a silent rebuke.


Years ago, at a local food fair, I ran a stall giving away vegan cakes and savouries (granted a stall for free, we weren’t allowed to sell anything). For two long, hot days, we gave away hundreds of home-baked cakes and snacks to anyone who was curious (or thrifty). Many chatted with us (mostly women), some argued (mostly men), and others were apologetic for not being vegan but cited the usual justifications (‘my body needs meat,’ ‘I couldn’t give up cheese,’ ‘I once heard of a vegan who became ill,’ etc.). Almost without exception, all were pleasant. It’s hard to be aggressive while eating cake. The person who sticks in my mind, though, was an elderly man who never got as far as our stall. As he stamped past, he broadsided us with, ‘you’re not going to stop me eating meat!’ Never underestimate the ability of a cake stall to enrage the passer-by.

I think one explanation for this defensiveness and the precipitous descent into anger and incoherence I often see is that people are having to defend a behaviour they never consciously adopted in the first place. Veg*ns have made a reasoned decision to step outside the herd and adopt an opposing lifestyle. This means they’ve examined their behaviour and built some sort of intellectual case for changing it.[4] People who eat meat will likely never have taken the time to build a case for it because they’ve just been following the herd. That means they’re most likely not equipped to offer anything but the thinnest, most sloganistic defence for their actions. They’ve brought a carving knife to a gun fight.

In the early 21st century, most veg*ns are, I imagine, those who have adopted the lifestyle. As it involved a clear choice, they should be able to articulate their reasons for doing so. But as veg*nism grows, this will change. Firstly, we will see more second- and third-generation veg*ns who will have inherited the lifestyle from their parents. More broadly, I’m optimistic enough to believe that what I see as the latest stage of human ethical evolution will gather speed, just as other progressive social movements have. Vegetarianism has been ‘legitimate’ for decades and now veganism appears to be moving from being a fringe pursuit to a recognised lifestyle, too.[5] I suspect the 3.5m indicated by one recent survey is an overestimate of the number of vegans in the UK but I don’t think it’s unreasonable to believe it’s now a seven-figure number.[6] As veganism gains visibility, the common sense that sanctions our current hellish tyranny over billions of animals each year will continue to crumble. The more popular veganism becomes, the easier it will be, and the more popular still it will become. Perhaps decades from now, a combination of increased ethical awareness, second- and third-generation vegans, and economic and environmental pressures will relegate the consumption of animals to a fringe pursuit; the equivalent of fox hunting or wearing fur today. Not to exploit animals will become the new common sense and those who deviate will be seen as reprehensible or at least objectionable. One day, it will be illegal entirely.

But will this be the utopia vegans dream about? The slaughterhouses will be demolished, save for a few standing as bleak witness to our past barbarism. The billions held in pens and cages waiting for a sharp, brutal death to end their short, agonised lives will be no more than pictures in history books and documentaries. The ‘eternal Treblinka’ will be gone.[7] Certainly, that will be a triumph for the animals with whom we share our world, for the planet, and for our health. A victory in all the ways that truly matter. But intellectually, will the vegan campaigners of the early 21st century have won when the society of the 22nd century is vegan but never stops to think why? Will a fifth-generation vegan who has merely absorbed society’s values — who’s just part of the herd — be as ‘good’ or as ‘enlightened’ as their early 21st century great-great-grandparent? Will we continue to be ‘enlightened’ when our good behaviour is not backed by a conscious understanding of why it’s good? Without an almost impossible state of constant reflection and reappraisal of our own beliefs, what are we but plagiarised people; patchworks of other people’s thoughts, beliefs, and struggles?

At least I can take comfort in the belief that none of this will matter to the animals themselves, who care nothing for what we think or say but only for what we do. So, here’s to auto-utopia. One day, my brain will have turned to soil, but I can at least hope that it will be part of a vegetable patch.



[1] You may think that you don’t think it’s acceptable for parents to hit children but ask yourself how you’d react in the street to a man slapping his wife compared with a mother slapping her child. You may not agree with either, but I suspect you would accept the latter.

[2] Naturally, this is not to say that both views have equal validity. There is no comparison between, on the one hand, Bishop Ussher’s study of the Bible, which led him to conclude the world was created on 22nd October, 4004BC, and, on the other, over a century of radiometric dating.

[3] Jeremy Bentham (1823) ‘Introduction to the Principles of Morals and Legislation,’ available at

[4] Unless they just don’t like meat, which is often true as well.

[5] See for example Dan Hancox ‘The unstoppable rise of veganism: how a fringe movement went mainstream,’ The Guardian, 1st April 2018, available at

[6] Olivia Petter, ‘Number of vegans in UK soars to 3.5 million, survey finds,’ The Independent, 3rd April 2018, available at

[7] This phrase is from Isaac Bashevis Singer’s ‘The Letter Writer’: ‘In his thoughts, Herman spoke a eulogy for the mouse who had shared a portion of her life with him and who, because of him, had left this earth. “What do they know — all these scholars, all these philosophers, all the leaders of the world — about such as you? They have convinced themselves that man, the worst transgressor of all the species, is the crown of creation. All other creatures were created merely to provide him with food, pelts, to be tormented, exterminated. In relation to them, all people are Nazis; for the animals it is an eternal Treblinka.”’



I See No Ships…

Very often, when one points out the incessant and almost exceptionless thumping the mainstream media has given Jeremy Corbyn since (before) he was elected Labour leader, the response from his detractors is to blame Corbyn’s team for their poor media management. That the press is against Corbyn is a conspiracy theory or, if it is true, it’s a founding block in the edifice of ineptitude that is ‘Compo Corbyn.’ A savvier leader, one with sharper suits and no bicycle clips, wouldn’t suffer so; he’d simply caress the jackals’ bellies until they sang ‘The Red Flag’ — while still finding time to single-handedly stop Brexit.

On Twitter, I’ve several times seen the following quotation from Enoch Powell invoked in support of this view:

For a politician to complain about the press is like a ship’s captain complaining about the sea.

But it’s a poor metaphor and a poor argument. Yes, the sea can be choppy and destructive; it can run you aground, leave you in the doldrums, or sink you altogether; but it has no agency or will. Whatever it does to you, it’s nothing personal. To think otherwise is the same superstitious ascription of intent that has led people to worship both sun gods and sons of god. So the metaphor fails because the press is not like the sea. My guess is old Enoch was never a sailor, not even on a river of blood.

The press most certainly can sink a politician and will often mean to do just that. Despite its name, the media is not a neutral medium, bestowing fair winds and misfortune without favour, through which politicians chart their course. To think that buys into the fish tale of the press as the ‘Fourth Estate,’ some more or less fair arbiter between political competitors. In fact, the media is largely the corporate media — not an independent power centre but one subordinated to big business.

I’m not going to spend several thousand words unpacking this argument. If you’re new to it, read Manufacturing Consent by Ed Herman and Noam Chomsky or look at the work produced by Media Lens. In short (and to simplify) the media is a sub-department of business and is structured by its imperatives. This happens in two ways. The first is its structural dependence on advertising revenue. Looked at in simple, institutional terms, the bread and butter of a newspaper company is not selling newspapers but selling readers to advertisers. That’s why newspapers can be given away and why news websites hate ad-blocking. A celebrated historian of British newspapers, Francis Williams, asserted in 1958 that the press ‘would never have come into existence as a force in public and social life if it had not been for the need of men of commerce to advertise. Only through the growth of advertising did the press achieve independence.’[i] Note the use of the word ‘independence,’ there. It’s only intelligible when we recall that the principal threat to press freedom was once the state. There’s a whole history of state control and the radical ‘unstamped’ press that I shan’t go into here. It’s enough to say that the press gained its freedom from government at the expense of being owned by rich men.

The same criticism applies to the commercial broadcast media – it sells viewers’ attention to advertisers on whose revenue it depends. This view was endorsed as long ago as 1987 by the Economist, which noted that, since projects ‘unsuitable for corporate sponsorship tend to die on the vine,’ the media ‘have learned to be sympathetic to the most delicate sympathies of corporations’.[ii] In a 2000 Pew Research Center for the People & the Press poll, about one-third of the 287 US reporters, editors, and news executives who responded said that stories that would ‘hurt the financial interests’ of their media organisation or an advertiser go unreported; 41% admitted avoiding or moderating stories to benefit their media company’s interests.[iii] Even the influential right-wing US radio pundit, Rush Limbaugh, hardly a fellow traveller of Noam Chomsky, agrees. A ‘turning point’ in his career came when he realised that ‘the sole purpose for all of us in radio is to sell advertising’.[iv] In 2004, Patrick Le Lay, the head of the French media giant TF1, described the purpose of his company thus:

…let’s be realistic: fundamentally speaking, the job of TF1 is to help Coca-Cola to sell its product … If an advertising message is to register, the viewer’s brain needs to be made available. The object of our programmes is to make it available: that is to say to entertain the viewer, to relax him and prepare him between the adverts. What we sell to Coca-Cola is an availability of human brain time.[v]

The second way that the media is subordinate to business is through a process of ideological filtering of its staff, which occurs from school through higher education and into the workplace. There is little need for advertisers or owners to actually tell journalists what they may or may not write because by the time they’re in the job for a while they will have internalised the ‘correct’ values. As Alan Rusbridger, late editor of the late Guardian, conceded several years ago in an interview with Media Lens,

I’m sure… that the pressures of ownership on newspapers is, is pretty important, and it works in all kinds of subtle ways – I suppose ‘filter’ is as good a word as any; the whole thing works by a kind of osmosis. If you ask anybody who works in newspapers, they will quite rightly say, ‘Rupert Murdoch’, or whoever, ‘never tells me what to write’, which is beside the point: they don’t have to be told what to write… It’s understood. I think that does work, and obviously the general interests of most of the people who own newspapers are going to be fairly conventional, pro-business, interests.[vi]

Or, as Noam Chomsky once said to Andrew Marr, ‘I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting.’[vii]

It’s not a perfect system, as Herman and Chomsky concede, but it is very effective.[viii] There will be occasional deviations by a few more independently minded journalists, but the overwhelming weight of the system still favours the neoliberal consensus of the past forty years. And this isn’t to touch on the personal preferences of many journalists at the higher end who have done very well out of the current system and so have a class interest in keeping it.

It should be obvious, then, that the idea that a socialist party simply needs to manage the press better is a nonsense. The corporate media is not there to be won over; it can’t be ‘managed’ into giving Corbyn a fair hearing. In fact, once one understands how the media works, the burden of proof would rest with anyone who claimed that it wouldn’t be biased against Corbyn.

The only time the media has approached even-handedness with Corbyn was during the imposition of impartiality rules on broadcasters during the 2017 General Election campaign. For the BBC, these came into force on 3rd May, although for commercial broadcasters, they began with the announcement of the dissolution of Parliament on 27th April. Their coincidence with the upturn in Labour polling, as shown in the Britain Elects poll tracker, is striking. The blue and red horizontal lines represent Tory and Labour polling and my addition of the green vertical line shows when the OFCOM broadcasting rules came into effect.

Opponents of this line of thought will point to the Blair Governments and their far better treatment from the corporate media when compared with Foot, Kinnock, and Smith before, and Brown, Miliband, and Corbyn afterward. It’s certainly true that Blair and Alastair Campbell employed a thorough and systematic approach to managing the media, from the ‘Rapid Rebuttal Unit’ and the Excalibur computer, to combative press briefings and a deliberate campaign to ‘woo’ newspaper editors and previously ignored outlets like women’s magazines. Yet Rupert Murdoch besieged Labour before and after Blair; it’s not tenable to believe that this changed merely because his editors had been bought a good lunch. Rather, New Labour were the Sun on Sunday to the Tories’ News of the World. New Labour’s real success was not to win over business but to capitulate to it. A genuinely socialist party can make no such concessions, which is why a cellar-full of Krug won’t win editors over to Corbyn. Hence, we see that, once again, old Enoch was wrong. The press is not the sea on which Corbyn sails; it’s a fleet of enemy ships.

Correction 9th August 2018

Following feedback in the comments, I have corrected a typo in which I incorrectly stated that Theresa May called the election on 27th of May. I have also clarified the timeline of events. For more details, see Eleanor Bley Griffiths ‘Here’s why the media is banned from reporting on general election campaigning while the polls are open,’ Radio Times 8th June 2017, available at

For OFCOM rules, see the ‘Election Reporting’ section of the Channel Four Producers’ Handbook:



[i] Quoted in James Curran and Jean Seaton (1981 [2010]) ‘Power Without Responsibility. Press, broadcasting and the internet in Britain,’ p. 4.

[ii] ‘Castor oil or Camelot?’ in The Economist, 5th December, 1987, quoted in Noam Chomsky (1989) ‘Necessary Illusions. Thought Control in Democratic Societies,’ p. 8.

[iii] ‘Fear & Favor 2000: How Power Shapes the News’, Fairness and Accuracy in Reporting Annual Report available at (accessed 06/08/2018).

[iv] Quoted in Pratkanis & Aronson (2001), ‘Age of Propaganda: The Everyday Use and Abuse of Persuasion’ p. 56.

[v] Cited in Ignacio Ramonet, ‘Final edition for the press’, in  Le Monde Diplomatique (English Edition), January 2005,  available at (last accessed 06/08/2018); Full quote available at (last accessed 06/08/2018). The full quote in French reads: ‘Mais dans une perspective business, soyons réaliste : à la base, le métier de TF1, c’est d’aider Coca-Cola, par exemple, à vendre son produit… Or pour qu’un message publicitaire soit perçu, il faut que le cerveau du téléspectateur soit disponible. Nos émissions ont pour vocation de le rendre disponible: c’est-à-dire de le divertir, de le détendre pour le préparer entre deux messages. Ce que nous vendons à Coca-Cola, c’est du temps de cerveau humain disponible’. My thanks to Daniel Simpson for the translation.

[vi] Media Lens (2000) Interview With Alan Rusbridger, Editor,The Guardian, available at (accessed 06/08/2018).

[vii] Andrew Marr interviewed Noam Chomsky for a series called ‘The Big Idea,’ which was broadcast on the BBC in February 1996. The thirty minute programme can be viewed here:

[viii] The model proposed by Herman and Chomsky has been criticised. James Curran (2002), for example, argues that the radical critique is ‘bedevilled by a simple “system logic”’, which assumes that ‘business-controlled media serve business’ thus ignoring or downplaying countervailing influences such as the need to maintain audience interest to remain profitable, the need to preserve their legitimacy, and the need to consider the ‘professional concerns of their staff.’ (James Curran ‘Media and Power’ p. 223).

Book Review: ‘Rose’ by Russell T. Davies


Rose – the novelisation

At 7pm on Saturday 26th March 2005, Doctor Who ran back on to Saturday night television: big budget, clever, confident — joyous — and looking an awful lot like Christopher Eccleston. It had been sixteen long years.

The series opener, Rose, was written by showrunner Russell T. Davies and was the template for his reimagination of the concept. This April, a mere thirteen years after broadcast, BBC Books resurrected the much-loved Target imprint to publish Davies’s novelisation of his landmark first script. Doctor Who fans know how to wait.

The novelisation as a form is widely regarded as nakedly commercial, derivative, and lacking any ‘literary’ worth, but the Target novelisations of the original Doctor Who’s twenty-six-year run of stories hold a special place in the hearts of many older Whovians. In the age before video recorders, they were the only way to enjoy stories one had never seen, as repeats were rare and, barbarously, a number of the serials were erased or junked. It’s charming to see the Target style revived, from the cover illustration reminiscent of Chris Achilleos’s classics, to the lean prose and pleasingly kitsch chapter titles like ‘Descent into Terror’. Rose is not a work of literature (give that a few hundred years) but it’s very entertaining. For anyone who hasn’t read it (or caught up with their 2005 viewing), now’s the time I should say ‘spoilers, sweetie.’

Davies’s novel naturally follows the structure of his script but with the embellishments and reinstatements the written word affords. Following a first chapter that serves as a prologue, which I’ll come to in a moment, the story opens on the humdrum life of the eponymous Rose Tyler: a nineteen-year-old girl waiting for her life to begin while she folds clothes in Henrik’s department store. She has a clueless but devoted boyfriend, Mickey; a brassy and overbearing mother, Jackie; and a deep ache for a life — and a self — that could be so much more. Then, one evening, a trip into the basement plunges her into a boundless and compelling new world: a bridgehead of Autons — killer plastic mannequins controlled by the Nestene Consciousness — and ‘that mysterious traveller in Space and Time known only as the Doctor’. Rose’s wait is over.


Rose Tyler and the Doctor. In the TARDIS. (Christopher Eccleston and Billie Piper)

As with the original episode, the story is told almost exclusively from Rose’s perspective: from the moment the Doctor takes her hand and says ‘run’, through his demolition of her job, attacks by a disembodied arm, plastic boyfriends, carnivorous wheelie bins, that impossible blue box, subterranean tanks of writhing alien fear, panic on the streets of London, to a final, life-defining choice. Rose was a smart reintroduction for a new generation that wisely avoided burdening the viewer with twenty-six years’ freight of back story. We learnt almost nothing about the Doctor aside from what he represents to Rose. The villainous Consciousness is actually on a return visit, having first plagued the Earth in the 1970 story, Spearhead from Space, but Davies wisely avoids having the Doctor mention even this. Fans will know but Rose (and casual viewers) didn’t need to. Almost all the exposition that is required is rather mischievously given to the character of Clive, the internet conspiracy theorist who has obsessed over the Doctor his whole life.


The Autons invade London: 2005

It was a very full forty-five minutes and it’s a full 197-page novel, expanding on the episode while remaining true to it. Davies writes with his customary brio and warmth, capturing the crackle of his original script well, right down to the short, energising declaratives (‘They ran!’ ‘The Nestene screamed!’) that peppered his scripts as stage directions. And there’s some sly ‘meta humour,’ such as when Rose wanders the basement of Henrik’s and hears, on a distant, ‘tinny radio,’ ‘some Irish comedian’s voice echoing in the dark;’ in reality, a live feed of Graham Norton mixed accidentally into the original transmission.[1] There’s also opportunity for Davies to indulge his fondness for set-piece destruction and gleeful slaughter. The climax of the TV episode, involving Autons massacring the residents of London, is expanded without heed to budget. Buses are overturned, the London Eye is sent into the Thames, washing MPs from their benches, and a regiment of dummies, from brides, to ballerinas, to fetishwear models, teems through the streets: decapitating, dismembering, and blasting all before them. And then there’s this, which surely would have made the Autons’ creator, Bob Holmes, raise his pipe in salute:

Every form of plastic felt an urge to move, tugging at a cellular level. An instinct to rise up and kill. Wires and panels and joints and plugs in kitchens and cars and computers and offices began a little dance. Cables yearned to strangle. Dolls grinned in anticipation of murder. Bags imagined suffocation. Nylon ropes knew their time had come. Laminated sheets of paper felt their edges sharpen into razors and prepared to spin. On deserted pacific islands, reefs of plastic bottles tumbled together to form giant, lurching, man-shaped idols, rearing up over the surf with no one to witness their birth.[2]

Rose is an action story but it’s with his characters that Davies always shines. His ability, with such economy, to craft real people where many writers would settle for walking props is remarkable. Rose is and was the star and so requires little further elaboration, although she is perhaps allowed to be a little savvier in the novel than in the episode and is gifted a couple of lines that had been the Doctor’s. On Westminster Bridge, for example, where he once explained how the TARDIS travels, she now deduces it. That aside, she remains Rose: smart, impulsive, selfish, compassionate, brave, nineteen.

Ironically perhaps, the least developed character in all of this is the Doctor himself. Not once does Davies allow the reader to see anything from his point of view or be privy to his thoughts. He is alien and inaccessible and we know about him only what we are permitted. This is Rose’s story. It’s a bright move – the companion has always been our way in to the Doctor, our proxy. Davies uses this to fine effect, creating an intriguing puzzle of a central character. Jackie Tyler, too, needs little extra explanation beyond being ‘five foot nothing, age not relevant, karaoke champion of the Spinning Wheel, life and soul of the party but a monumental lightning storm when angry…’[3] When we see Jackie attempt to seduce the Doctor, we know her. We’ve all known a Jackie.


The Autons invade London: 1970

Davies borrows smartly from backstory and events depicted in subsequent episodes to add substance to familiar faces. Most substantially, we meet the caretaker at Henrik’s who was merely a surname (‘Wilson’) in the finished episode. Here, and confined to only a prologue, he becomes Bernie Wilson and we know him. He’s a weak, seedy minor criminal, cast down to the basement for some small indiscretion years before, whose life is about to crumble under the weight of his greasy, picayune scheming. The merest brush with the Doctor’s world brings him his one moment of wisdom and then ends his desperation forever.


Russell T. Davies, showrunner.

Of all the characters introduced in Rose, it is Mickey Smith who grew most during Davies’s tenure. I never liked Mickey in that first episode – he seemed too much the cliché of the slightly wet, useless boyfriend and, in truth, I think Noel Clarke didn’t have a grip on the character in that first episode either, playing him a little too much as a buffoon. That all changed in later stories, and Mickey Smith became one of the most rounded, believable, and well-played companions ever. For this novelisation, it seems to me (and I may be completely wrong) that Davies faced a particular challenge. He couldn’t simply write the sketchy, comical Mickey of that first episode because that would ring false; yet nor could he have him fail to act as the established plot requires. You can’t rewrite history, not one line. So, I think Davies strikes a balance and gets it right. We see that the bitterness of Mickey’s tragedies has flowered into his compassion and humanity and his loyal, patient love for Rose. Yes, he still withers in culture shock where Rose blossoms. Yes, he still clings to her legs in fear because thirteen years of series history demands that he must; but fan readers know that this will be his making. We also see more clearly Rose’s genuine love and appreciation of him. We know that it won’t be enough to keep her from running into the TARDIS at the end, but Davies notably softens that rejection from the TV version merely by omitting a couple of lines. In the original, Rose kisses Mickey goodbye and thanks him. ‘For what?’ asks Mickey and Rose replies, ‘exactly’. In the revision, Rose simply says, ‘thank you’ and is instantly a kinder person.

We’re also introduced to Mickey’s previously unseen gang and his importance to them: Mook, Patrice, and Sally,

And Mickey was the centre of their lives. He’d been on the housing list at 16, and at 18 he’d been granted that holy grail, a flat of his own. The first thing he did, when given the keys to No. 90, was to prop that door open and make others welcome.[4]


The late Robert Holmes

They’re nicely drawn, likeable, and feel like they could easily have become semi-regulars in some other draft of the series. I imagine they’d also infuriate the more reactionary fans as, by being reflective of modern London, they’re an unapologetically ‘diverse’ bunch: Mook Jayesundra, Patrice Okereke and Sally (formerly Stephen) Salter. In fact, combined with the scene in which Clive shows Rose the apparently different people who’ve held the title of the Doctor, I can’t believe RTD wasn’t gleefully winding up the ‘PC-gone-mad’ pack. In the televised episode, the only Doctor Clive shows Rose is Nine. Here, Davies reaches into future-past, with the ‘man with two suits, brown and blue,’ the ‘tweed jacket and bow tie,’ and ‘a blond woman in braces;’ before giving us a ‘tall, bald black woman,’ and a ‘young girl or boy in a hi-tech wheelchair’. So that’s every box ticked in gammon-shaded blood, then. Somewhere, a big, gay Welshman is still hooting.

Another character given added weight is Clive Finch, the comical internet conspiracy theorist played with such charm by Mark Benton. Davies provides an intriguing explanation for Clive’s obsession with the Doctor and reaches back into series mythology, in this case to an iconic death in the 1988 classic, Remembrance of the Daleks. Clive’s death as the Autons ravage London is the more tragic because it is the heroism of the ordinary man, augured by an epiphany:

All of Clive’s fantasies were now becoming facts, right before his eyes. But if the glories were true then so were the terrors… To protect his wife and children, Clive simply opened up his arms. He would greet the dummy in friendship, or stop it with his body, whatever it took. And he found himself smiling, even as he started to cry. Because here it was at last. Adventure.

Here we see a restatement of the novel’s central theme: the adventure — and cost — of touching the Doctor. Clive never meets him, yet contact through his dead father and Rose is enough. We see him become like the Doctor by moving to embrace the Other while also preparing to defend against it. The Doctor ruins his life and redeems it, all without ever knowing who he is. Bernie Wilson never meets the Doctor but touching his world opens up his own, even at the cost of his life. And when Rose Tyler takes the Doctor’s hand, he shows her the possibility of the universe and the possibility within herself.



[1] An off-air live audio feed from BBC3’s ‘Strictly Dance Fever’ was mixed accidentally into the BBC1 audio; marking the first of Graham Norton’s two unwanted intrusions into the programme (the second occurring in 2011).

[2] Rose, p. 170. Yes, I know plastic doesn’t have cells.

[3] Rose, p. 25.

[4] Rose, p. 52.

Tales of Self-Laid Eggs

Nadine Dorries tweet

A fundamental misunderstanding of cause and effect.

Last month, news-sellers reported that BAE Systems (Purveyors of Finest Quality Death to the Gentry) had won a contract to flog the Australian government nine new warships, which will ‘provide the Australian Defence Force with the highest levels of lethality and deterrence.’[i] British companies will supply a number of the internal systems and so play a very real role in Australia’s ongoing battle to repel the onslaught of drowning asylum seekers.[ii]

As one might expect, a clutch of Brexiteers, led by the Ragged-Trousered Stockbroker, Nigel Farage, took time from managing their foreign citizenship claims and overseas investment funds to trumpet this £20bn victory for Global Britain. Nadine Dorries MP lauded it as an example of just the sort of trade deal (completed while we’re still a member of the EU) that the EU (of which we are still a member) has prevented us from doing (it hasn’t).

In fact, though a BAE design, the ships will not be built in the UK at all, but in Australian shipyards, and Britain will receive only a slice of the £19.6bn headline figure. But I’m not concerned here with Brexit, Tory boosterism or even exactly how much national pride should attach to selling engines of nautical slaughter. Instead, consider this caveat, tucked away in the analysis by the BBC’s Scottish business editor Douglas Fraser,

However, this looks like a design which was heavily subsidised by the UK taxpayer, being sold overseas, and wholly to the benefit of BAE Systems. It appears that the UK taxpayer sees none of the direct payback or royalties from that investment.[iii]

This is not unusual. There is a long record, in the US and UK, of the public sector incubating and subsidising private sector success stories; something that the champions of capitalism generally try to hide under a thicket of ‘free market’ euphemism. They prefer instead the ideology of the ‘self-made man’ (or company) that rises to prominence and wealth through nothing but their own vision and hard work. Sometimes, this pretence requires the most preposterous elision; take for instance Philip Anschutz, whom the Forbes 400 Rich List in 1998 described as ‘self-made’ even though he had inherited an oil and gas field worth $500 million.[iv]

More generally, there is a long story of the public sector supporting and protecting the private sector and free market. I’ll list a few examples that I don’t have space to discuss: developing and promoting a culture of property rights and, later, intellectual property rights; providing infrastructure, such as roads, railways, ports, power, and communication; providing an educated workforce through a public school system; subsidising low wages through a welfare state; underwriting risky overseas sales (e.g. British export credit guarantees); offering tax breaks and inducements for investment (such as export processing zones); privatisations, bail-outs (such as of the banks in 2008), treating work- or product-related illness;[v] repairing environmental damage; and providing cheap fuel through periodic liberation of oil supplies.

The most obvious form of public sector support for the private sector, and the one that has the worst reputation, is to prevent a free market at all through protectionism: the use of tariff and non-tariff barriers to prevent overseas competitors trouncing one’s domestic industries. While it’s officially denounced – especially by enthusiastic practitioners such as Reagan and Trump –  it’s fair to say that protectionism has characterised US and UK industrial development (and elsewhere); not least through the acquisition of empire.[vi] The US began to champion freer trade only following World War II; at least partly fulfilling the prediction of its 18th president, Ulysses S. Grant, that ‘within 200 years, when America has gotten out of protection all that it can offer, it too will adopt free trade.’[vii]

Even when they don’t protect industries from international competition, governments still provide considerable support in other ways. Despite the fall of communism and the ascendance, until 2008 at least, of ‘free market’ ideology, it’s accurate to say that western capitalist societies still have substantially planned economies. Most obviously, governments plan economies through state-owned enterprises, through Research & Development (R&D), infrastructure spending, and through sectoral industrial policy. Additionally, modern corporate capitalism ensures that a handful of enormously powerful transnational corporations plan their activities, often in concert (often in conflict) with governments.[viii] It’s R&D spending and the use of government purchasing that I’m going to discuss here.

In the UK and particularly the US, government spending on scientific and ‘defence’ R&D has been enormous. For instance, between the 50s and 90s, US federal government spending accounted for 50-70% of the country’s entire R&D spending.[ix]  As late as 1958, federal funding covered an estimated 85% of total R&D on electronics.[x] In the 1950s and 60s, the Pentagon supplied more than 30% of IBM’s R&D budget.[xi] Mariana Mazzucato, in The Entrepreneurial State, summarises the history of hi-tech as one in which ‘nearly all the technological revolutions of the past – from the Internet to today’s green tech revolution – required a massive push from the state.’[xii]

The US’s Defense Advanced Research Projects Agency (DARPA) is a fine example of the US Government incubating hi-tech before it’s released to the market. DARPA was set up in 1958 to give the US ‘technological superiority’ in multiple sectors of its economy and has always been ‘aggressively mission-oriented’ rather than merely profit-oriented. With a budget of $3bn annually, it is structured to ‘bridge the gap between blue-sky academic work, with long time horizons, and the more incremental technological development occurring within the military’.[xiii]

Going way beyond simply funding research, DARPA funded the formation of computer science departments, provided start-up firms with early research support, contributed to semi-conductor research and support to human-computer interface research and oversaw the early stages of the Internet… such strategies contributed hugely to the development of the computer industry during the 1960s and 1970s, and many of the technologies later incorporated in the design of the personal computer were developed by DARPA-funded researchers.[xiv]

Early achievements for DARPA were ‘key technologies’ such as ‘high-speed networking, advances in integrated circuits, and the emergence of massively parallel super-computers’.[xv] Such was its success that, under the first Clinton Administration (1993-96), DARPA became the ‘lead agency in a new effort to help fledgling technologies gain a hold in commercial markets.’[xvi] In the 1970s, DARPA funded a laboratory affiliated with the University of Southern California where anyone who believed they had developed a superior design of microchip could get it fabricated to prototype stage. By so doing, the state subsidised the birth of personal computers in the 1970s, the first of which Apple introduced in 1976.[xvii] As the New York Times reported as far back as 1989, ‘many fundamental computer technologies… can be traced to [DARPA’s] backing, including the basic graphics techniques that make the Apple Macintosh computer easy to use’.[xviii] More of Apple in a moment, but let’s also note that much of this spending was disguised (or at least rendered more ideologically palatable) by being conducted by DARPA. The NYT again:

Under the rubric of national security, the Pentagon can undertake programs like Sematech [a research consortium to help the US semiconductor industry compete] that would arouse opposition if done by another agency in the name of industrial policy…[xix]

And DARPA is not the only instrument of government support. Mazzucato discusses several more, including the Small Business Innovation Research (SBIR) Programme and the Orphan Drug Act. Founded in 1982, SBIR plays an increasingly influential role as the first port of call for entrepreneurs looking for funding and, with a budget of $2bn annually, has ‘guided the commercialisation of hundreds of new technologies from the laboratory to the market.’[xx] The Orphan Drug Act of 1983 provides tax incentives, R&D subsidies, fast-track drug approval and strong intellectual property and marketing rights for products designed to treat conditions suffered by fewer than 200,000 people. This support played an important role in the development of major players, such as Biogen and Genentech, but has also successfully been exploited by giants such as GlaxoSmithKline, Roche, and Pfizer.[xxi]  DARPA, SBIR, and the Orphan Drug Act are just three, very large, programmes of market intervention that the US has run over decades.

Let’s go back to Apple for a moment. Discuss the achievements of free market capitalism on Twitter for more than ten minutes and someone will be bound to hold up the ubiquitous iPhone as clinching proof that the profit motive leads to shiny, unscratchable utopia. Mazzucato makes Apple’s flagship a centrepiece of her study and devotes an entire chapter to tracing the origin of almost its every bell and whistle to the public sector. As a ‘smart’ phone it would be nothing without the Internet; the earliest incarnation of which (ARPANET) was developed by DARPA in the late 60s (with a parallel system built by the National Physical Laboratory in the UK). Touchscreens can be dated back to the work of E. A. Johnson at the Royal Radar Establishment in Malvern in the 1960s and the European Organization for Nuclear Research (CERN) in the 1970s.[xxii] Siri began life as the SRI-led Cognitive Assistant that Learns and Organizes (CALO) project within DARPA’s Personalized Assistant that Learns (PAL), a joint programme with the Swiss Federal Institute of Technology in Lausanne (EPFL). SRI spun off Siri in 2007 as a commercial venture and Apple bought it in 2010, integrating it into the iPhone 4S in 2011.[xxiii] LCD screens were first created by Westinghouse in the 1970s and the work was funded almost exclusively by the US Army when companies such as Apple, 3M, IBM, XEROX, DEC, and Compaq refused to take the risk. The Lithium Ion battery was developed with government funding and the cornerstones of the World Wide Web (HTTP and HTML) were first implemented at CERN. Finally, GPS began life as NAVSTAR, a strictly military-use system, to this day still funded by the US Air Force.[xxiv] For a fuller view, consider this schematic:


Taken from Mazzucato (2013 [2018]), p. 116

So that’s a sample of the US picture. Over the pond, there is Innovate UK, which in 2016-17 had a budget of £561m and, through competitions, awarded grants of between £250K and £10m to businesses and research organisations working on emerging technologies; health and life sciences; infrastructure systems; and manufacturing and materials. The London Co-Investment Fund supports start-ups in the capital and disburses money from a purse including £25 million from the Mayor of London’s Growing Places Fund. Up until 2015, the Government also provided discounted broadband to 50,000 businesses.[xxv]

As of 2017, the British government (like the US) is ‘pouring billions of pounds’ into Artificial Intelligence research, 5G, and driverless cars. ‘Investment in electric vehicles,’ reported Cnet last November, ‘includes £400 million for a charging infrastructure fund, an extra £100 million in Plug-In-Car Grant, which subsidises purchases of electric vehicles, and £40 million in charging R&D.’ This government spending, which also includes more computer science teachers in schools, is to ‘help businesses grow to scale and hopefully find the UK’s next tech unicorn.’[xxvi]


E. A. Johnson’s early touchscreen

A notable difference between state investment and private investment is that the state provides ‘patient capital’ while the private sector is ‘impatient’.[xxvii] The state takes the long-term view, often sinking large sums into areas that are merely theoretical. In this sense, it deals with uncertainty rather than merely risk. Risk is quantifiable and can be priced into business decisions. Venture Capitalists (VCs) can deal with risk and accept a certain amount of it; a quantified possibility that a given investment won’t come off. Uncertainty, conversely, cannot be quantified or priced into a business venture. It’s the ‘unknown unknowns’ that may mean years of patient research run into a wall. Much government investment occurs long before VC comes into play; using public funds to gradually carve eldritch clouds of uncertainty into a still risky but at least defined landscape upon which a market can be built.

The Internet and nanotechnology are both examples of this process. The market had no interest in either because they were too long-term (‘blue sky’ as the jargon has it). There was no clear idea of a product, a demand for that product, or the attendant risks. There was only uncertainty. What was required was mission-oriented rather than profit-oriented effort. Similarly, it’s highly unlikely the market would ever have put a man on the moon. There was little obvious commercial opportunity, too much basic research required, and the uncertainty was simply too high. It took the public sector — the vast sums of money, the herculean intellectual effort, and the terrible sacrifice of life — to conquer that uncertainty and create a world in which, decades later, Elon Musk could spend millions proving that no black hole sucks as hard as an arsehole.


Welcome to Earth. Intelligent pop: 0

The state doesn’t merely incubate products by funding their development or the science that leads to them. Government can be their main, if not their only, customer. The US Government is the ‘single largest purchaser of goods and services in the world’ and a ‘vital source of business for companies…’[xxviii] To take a past example, Fortune Magazine conceded in 1948 that ‘the aircraft industry today cannot satisfactorily exist in a pure, competitive, unsubsidized, “free-enterprise” economy. It never has been able to. Its huge customer has always been the United States Government, whether in war or in peace.’[xxix] As late as 1968, the US military bought 40% of all semiconductor production and the willingness of the US Government to buy processor chips ‘in quantity at premium prices allowed a growing number of companies to refine their production skills and develop elaborate manufacturing facilities.’[xxx] In 2016, the US Government became the top purchaser (along with private households) of healthcare products, spending $918.5bn annually.[xxxi]

In the UK, the government ‘acts as a significant purchaser in various sectors of the economy,’ with the two ‘stand out’ areas being pharmaceuticals and defence.[xxxii] Since 1957, the UK Government has regulated the price of pharmaceuticals with a policy, which (since 1969) has also had as its objective ‘a strong and profitable pharmaceutical industry’.

Participation by drug companies is voluntary, but universal. Every five years the government sets out a price trajectory that is designed to provide a reasonable rate of return, while ensuring value for money for taxpayers.[xxxiii]

The policy is seen as a success, in that it has kept prices down for the consumer, but is also believed by some experts to have been ‘critical in explaining the difference between the success of British pharmaceutical firms and the failure of their French rivals.’[xxxiv]

In defence, the Government is essentially the sole customer because our exports are comparatively slender. According to an evidence paper submitted to the UK Government’s ‘Foresight Future of Manufacturing’ project in 2013, ‘government purchasing decisions in defence have directly led to the maintenance of a defence sector of reasonable size’.[xxxv] The authors note that, while expensive, the system is successful in that it at least allows Britain to ‘preserve some modicum of military independence.’[xxxvi] Interestingly, they also argue that since foreign exports are so limited, policy in this area should be seen as being about preserving domestic military production capability and so a part of defence rather than industrial policy. In which case, one might wonder why we recycle large sums of public money into private profit when these companies are effectively sub-departments of the state.

All of the foregoing raises an obvious question. What does the public sector get in return for its investment; for all the forms of support we’ve discussed? It’s an axiom of business that those who take risks should also take a fair share of the reward when those risks pay off. For the state, this could take two forms. One would be a direct return on the investment made in a new technology, product or supportive measure. This very often does not happen; costs might be socialised, but profits are largely privatised (or the money is squandered). Where was the return on the public sector’s investment in computing or the Internet? Siri cost at least $150m to develop and, while Apple paid hundreds of millions for it, that money did not go back to the American taxpayer but to the spun-off company that owned it and some VCs who put in an extra $24m late in the development process.[xxxvii] Take for another example the US telephone companies. As David Rosen wrote in 2013 for Counterpunch,

They’ve pocketed an estimated $360 billion through questionable rate increases, subsidies, tax breaks and overcharges.  Instead of building out the “information superhighway” promised by Al Gore two decades ago, they directed the money to building-out 2nd-rate wireless businesses, overpaying their executives and rewarding stockholders – and all at the customer’s expense.  As a result, the U.S. has become a 2nd tier communications nation, ranked 15th in broadband.[xxxviii]

One can argue that jobs (effectively state-subsidized jobs) are created, but hi-tech firms in particular specialise in producing their goods offshore and for low pay. For example, Mazzucato cites figures estimating that the top nine executives working for Apple together pocketed in 2012 the same amount of money that it took 95,000 of their workers to earn.[xxxix] And we should all remember that jobs are not a gift or a favour from business – they are a transaction, in which the employee comes off worse.

Of course, the main way that the public sector should recoup its investment in the private sector is through taxation and here, dear reader, we hardly need tarry for long. The headline stories of the likes of the GAFA companies (Google, Amazon, Facebook, and Apple) distract from a far larger story of big business avoiding, evading, and lobbying away tax that I’m not going into here. It suffices to say that the current controversy over large companies not paying their fair share of tax isn’t merely about the state imposing duties on companies in order to fund its expenditure. Rather, it’s often a case of payback: companies repaying the investment the public sector has made, if not in them directly, then in creating the arena in which they operate. The GAFA organisations only exist because of the public sector. It was the American and British state that created the personal computer, the Internet, the World Wide Web, and the capacity to process ‘Big Data’ on which Facebook and Amazon rely. It was the American state, via the Small Business Investment Company (SBIC) programme, that provided Apple with its start-up funding. It was the American state that created the BackRub search algorithm on which Google is based.[xl] And it is the state that keeps them safe, builds roads for their customers to reach them, ports and railways for their suppliers to stock them, educates and cares for their workers, and — through welfare payments — subsidizes their wage bill.

What can we conclude from all of this? Five things, I think. Firstly, that the stereotype of the bold, dynamic private sector versus the conservative, staid public sector often reverses the truth. History shows the public sector very frequently to be far more adventurous and farsighted than the private sector. It’s the first dragon in the den: there on the ground floor, thinking out of the box, looking up at the blue sky and scanning the horizon, generating the thought shower, running with it, then taking it to the next level, and not just going for the low-hanging fruit. It’s Big Business’s mentor, its patron, its partner, and its best customer. We’ve seen how the state is a heavy investor in innovation but, more than that, the public sector is the space in which the market is born and thrives. Without the state clearing the ground and guarding the perimeters, there’s nowhere safe to put the market.

Secondly, the conservatism of the private sector is driven by its need to keep one eye on the bottom line, the quarterly return. While the state, at its best, can be driven by a mission, corporations are powered by the fiduciary duty; the need, above all other considerations, to make money for their shareholders.[xli] Yes, there are genuine entrepreneurs, people with a dream, and start-ups with a vision, but corporations as legal entities care only about making the next buck. Putting the argument at its strongest, there can be no sense of public service among these paper psychopaths.

Thirdly, all economies are planned by somebody. Pretending that ‘leaving it to the market’ means that one’s economy is not planned is disingenuous. Rather, the question should be who does the planning: democratically-elected government at the national level and workers’ councils lower down or barely accountable private capital driven by profit?

Fourthly, it’s past time for an accounting of the true role of the public sector in the world we see around us and carry in our pockets. Not only that, but the investment of workers in innovation should be properly understood, acknowledged, and rewarded – rather than merely perpetuating a culture in which people are told to just shut up and be grateful for the gift of employment.

Finally, the giants of the private sector must be made to realise that they’re cutting away the branch on which they sit. By avoiding tax, and contributing to the hollowing out of the state, concentrated private capital is increasingly parasitic on a withering public sector. And I do mean parasitic rather than merely symbiotic, since the parasite is in danger of killing its host and, before that, of cutting the vital stream of nourishment that keeps it alive: basic scientific research. The less material capacity and ideological freedom the state has to imagine, research, invest, and — yes — often fail, the less fruit will be there for the likes of Apple to pluck. The well of ideas will run dry. The golden eggs need to take better care of the goose that laid them.



“Self made men, indeed! Why don’t you tell me of the self-laid egg?” is a quotation attributed to the political scientist, Francis Lieber, in 1882

[i] BBC News ‘BAE wins multi-billion pound Australian warship contract,’ 29th June 2018, available at (Accessed 08/07/2018).

[ii] See Jonathan Pearlman ‘Australia sends in its navy to push asylum-seeker boats back to Indonesia,’ The Telegraph, 7th January 2014, available at (Accessed 12/07/2018); Ben Doherty and Calla Wahlquist, ‘Australia among 30 countries illegally forcing return of refugees, Amnesty says,’ Guardian 24th February 2016, available at (Accessed 12/07/2018); Mark Isaacs ‘There’s No Escape From Australia’s Refugee Gulag,’ Foreign Policy 30th April 2018, available at (Accessed 12/07/2018)

[iii] BBC News, op. cit.

[iv] Responsible Wealth (2004 Press Release) ‘Forbes 400 Richest Americans: They Didn’t Do It Alone’ 24th September 2004, available at (accessed 09/07/2018)

[v] To give just one example, according to one estimate, between 2000 and 2004 in the US smoking caused more than $193 billion in annual health-related costs, including smoking-attributable medical costs and productivity losses (cited in David Rosen ‘Socialize Costs, Privatize Profits,’ Counterpunch, March 1st, 2013, available at  (Accessed 09/07/2018) ).

[vi] See Ha-Joon Chang (2007) “Bad Samaritans. The Guilty Secrets of Rich Nations & The Threat to Global Prosperity,” chap. 2.

[vii] Chang (2010), pp. 55-67.

[viii] See Ha-Joon Chang (2010) “23 Things They Don’t Tell You About Capitalism,” pp. 199-200.

[ix] Chang (2007), p. 55.

[x] Laura D’Andrea Tyson (1992) ‘Who’s Bashing Whom?: Trade Conflict in High-Technology Industries,’ p. 90.

[xi] Winfried Ruigrock and Rob Van Tulder (1995) ‘The Logic of International Restructuring,’

  1. 220-21, quoted in quoted in Michael M’Gehee ‘Free Market Capitalism and the Pentagon System,’ Znet March 30, 2010, available at Note that this may not be the correct authorship of the article as the url attributes it to a Donald M. Ferguson.

[xii] Mariana Mazzucato (2013 [2018]) “The Myth of the Entrepreneurial State. Debunking Private vs Public Sector Myths,” p. 6.

[xiii] Mazzucato (2013 [2018]), p. 81. DARPA is also often referred to as ARPA, dropping the ‘Defense’.

[xiv] Mazzucato (2013 [2018]), p. 82

[xv] Elizabeth Corcoran, “Computing’s controversial patron,” Science, April 2, 1993, p. 20, retrieved from  (07/07/2018)

[xvi] Corcoran, op. cit.

[xvii] Mazzucato (2013 [2018]), p. 84

[xviii] Andrew Pollack, “America’s Answer to Japan’s MITI,” New York Times, March 5, 1989, section 3, p. 1, quoted in M’Gehee (2010).

[xix] Pollack, op. cit.

[xx] Mazzucato (2013 [2018]), pp. 85-86.

[xxi] Mazzucato (2013 [2018]), pp. 87-88. Mazzucato notes that, as the act allows multiple versions of effectively the same drug to be designated ‘orphan’, Big Pharma has been able to clean up at public expense. She cites a drug developed by Novartis for chronic myelogenous leukaemia that, when marketed as a treatment for four other conditions, received the same designation (and support) each time.

[xxii] Johnson described his work in an article entitled ‘Touch display—a novel input/output device for computers,’ published in Electronics Letters. For more of the history, see Florence Ion, ‘From touch displays to the Surface: A brief history of touchscreen technology,’ ARSTechnica 4th April 2013, available at (Accessed 12/07/2018).

[xxiii] SRI International, ‘SIRI’ undated, available at!&innovation=siri (Accessed 10/07/2018).

[xxiv] Mazzucato (2013 [2018]), p. 6, chap. 5.

[xxv] Scott Carey, ‘How the UK government supports technology start-ups | How to get government backing for your start-up,’ techworld, 11th January, 2017, available at (accessed 08/07/2018).

[xxvi] Katie Collins, ‘AI, 5G, driverless cars on the government’s tech agenda,’ Cnet, 22nd November 2017, available at (accessed 08/07/2018).

[xxvii] Daniel Cichocki ‘Impatient for growth? Time to unlock Patient Capital…’ UK Finance, 27th November 2017, available at (accessed 11/07/2018).

[xxviii] K&L Gates Public Policy and Law Practice ‘Government Contracts and Procurement,’ 2011, available at (accessed 13/07/2018).

[xxix] ‘Shall we have Airplanes?’ Fortune, January 1948, quoted in M’Gehee (2010).

[xxx] Tyson (1992), p. 88.

[xxxi] Kerry Young ‘Federal Government Emerges as Top Health Buyer in New Analysis,’ Commonwealth Fund, 5th December 2016, available at (Accessed 13/07/2018).

[xxxii] Stephen Broadberry and Tim Leunig (2013) ‘The impact of Government policies on UK manufacturing since 1945. Future of Manufacturing Evidence Paper 2’, Foresight Government Office for Science, pp. 28-30, available at (Accessed 10/07/2018)

[xxxiii] Broadberry and Leunig (2013) op. cit.

[xxxiv] Broadberry and Leunig (2013) op. cit. Note that the authors cite other experts who question the decisive role the scheme may have had. However, as the other factors they cite as perhaps being more important (‘Britain’s strong record in biomedical research at university level, the early introduction of efficacy regulation and the role of the NHS’) are all examples of public sector support or intervention, this does not detract from my argument.

[xxxv] Broadberry and Leunig (2013), p. 4.

[xxxvi] Ibid, p. 30.

[xxxvii] Erick Schonfeld ‘Silicon Valley Buzz: Apple Paid More Than $200 Million For Siri To Get Into Mobile Search,’ Techbuzz 28th April, 2010, available at (accessed 12/07/2018); Note that an argument can be made to justify this, as it was by Norman Winarsky of SRI in an interview in 2010. ‘When I put it to him that $150 million was a lot for taxpayers to spend on a technology that’s now been taken inside Apple, he corrected my premise on several counts, arguing that acquisitions are a natural outcome of SRI’s spinoff process. “I think the Bayh-Dole Act is one of the most brilliant acts in the history of Congress,” Winarsky says. “What you call ‘taking the technology inside’ has been responsible in large part for the creation of companies like Intel, Cisco, Apple, and Sun. The government would have had to pay billions of dollars, perhaps, to continue to advance this technology, while instead the commercial marketplace is making it available to everybody. Consumer revenue is what drives future products, rather than our taxes.”’ This argument still does not address the loss made by the state and, even assuming Apple went on to spend ‘billions’ developing SIRI, it has made billions selling it. Plus, it has invested its billions much later down the line when the state has turned the uncertainty into manageable risk. Wade Roush ‘The Story of Siri, from Birth at SRI to Acquisition by Apple—Virtual Personal Assistants Go Mobile,’ Xconomy 14th June 2010, available at

[xxxviii] Rosen (2013) op. cit.

[xxxix] (Shapiro 2012) cited in Mazzucato (2013 [2018]), p. 185.

[xl] John Battelle ‘The Birth of Google,’ Wired 8th January 2005, available at (accessed 11/07/2018)

[xli] See Joel Bakan (2004) ‘The Corporation. The Pathological Pursuit of Profit and Power,’ Chap. 2.

Bringing Politics to the Dinner Table

This will not take long. A few days ago, my eye was drawn to a piece in New Internationalist by Chris Saltmarsh and Harpreet Kaur Paul called ‘If we all became vegan tomorrow.’ It’s their response to a widely-cited article from the Guardian, ‘Avoiding meat and dairy is “single biggest way” to reduce your impact on Earth.’ Saltmarsh and Kaur Paul contend that this statement is a ‘myth’. In fact, the precise statement the NI article takes issue with was made by the leader of the research project around which the Guardian article is built:

“A vegan diet is probably the single biggest way to reduce your impact on planet Earth, not just greenhouse gases, but global acidification, eutrophication, land use and water use,” said Joseph Poore, at the University of Oxford, UK, who led the research. “It is far bigger than cutting down on your flights or buying an electric car,” he said, as these only cut greenhouse gas emissions.

How do Saltmarsh and Kaur Paul demonstrate this to be a myth? In short, they don’t. Instead, they rebut an argument that neither the Guardian piece nor Joseph Poore make. For instance, they write:

Climate change does not exist outside of our current social, economic, political and cultural systems. It magnifies existing patterns of inequity. Climate harms disproportionately affect groups and peoples already experiencing social, political and economic exclusion… Changing your shopping list – no matter how radically – will not solve these systemic problems. Thatcher said ‘there is no society’. Individualist ‘solutions’ to climate change – like prioritizing veganism – support this myth.

The points about the wider system are well-taken but neither the Guardian article nor the study it reports claim that individual veganism will ‘solve’ climate change. The claim is that a vegan diet is ‘the single biggest way to reduce your impact on planet Earth.’ It’s a separate claim.

In fact, Saltmarsh and Kaur Paul’s article contains much of value and makes many sound points, not least that the problem of climate change and general environmental degradation cannot be adequately addressed through individual action alone. Yet, intentionally or not, they blur two separate arguments and write a rebuttal to a point that has not been made. They go on:

The Guardian’s headline reports on the Oxford study by stating that ‘Avoiding meat and dairy is “single biggest way” to reduce your impact on Earth.’ But we disagree. Although cutting out meat and dairy from your personal diet would have an important impact on reducing greenhouse gases, the facts suggest that there are bigger and far more effective ways to make a difference.

You have my attention, chaps. What are these ways?

…starting fossil fuel divestment campaigns and getting your employer, local authority and university to invest responsibly is one way. Organizing in your community for a cooperatively owned and operated municipal energy company to embrace renewables and eliminate fuel poverty. Becoming active in your trade union and developing policy supporting a just transition toward renewables. Making links with fossil fuel workers and getting them on side. Campaigning for banks like Barclays to stop providing corporate and project finance that enables further fossil fuel extraction. Joining the many front line resistances blockading new infrastructure like anti-frackers and resisting gas fields. Starting litigation or supporting those that have already brought challenges against complicit governments or companies.


I don’t want to be too harsh. Those are all creditable activities and, if successful, any one of them would have more impact on climate change than altering the diet of one person. Of course, they would. But let’s go back to Poore’s statement: ‘A vegan diet is probably the single biggest way to reduce your impact on planet Earth…’ Your impact. Not your employer’s, not your community’s, not a bank’s.



Saltmarsh and Kaur Paul have written a useful article, filled with sensible points on several matters, but not once do they support their central contention. Nowhere do they give an example of anything an individual can do to reduce their own impact on climate destruction that is more effective than going vegan. Of course, veganism is not enough, and vegans should never claim that it is. Limiting one’s political activity to consumption choices and ‘lifestyle politics’ will never fix a fundamentally broken system. Anyone promoting veganism as the ‘solution’ to the environmental catastrophe we face is misguided. But widespread veganism would have a very significant effect on the problem, and individual veganism is the most effective single thing one can do to reduce one’s personal footprint on the world.

Instead of addressing Joseph Poore’s actual claim, Saltmarsh and Kaur Paul knock down strawmen. I only hope that their misconceived article hasn’t turned anyone away from bringing politics to the dinner table.


Take the Money Out

In 2017, Theresa May’s Conservative Party spent eighteen and a half million pounds on its General Election campaign,[i] earning it the world record for the most expensive bullet ever retrieved from a woman’s foot. A few days later, Mrs May was forced to buy DUP support at a cost of £100m per MP, the most money spent trying to save a face since the preparations for Michael Jackson’s final tour. Political campaigns can be very expensive.

There are two widely acknowledged problems with political parties and money. The first is how they raise it and the second is how they spend it.

Raising it.

The first problem needs little elaboration. During the 2017 General Election, for instance, 83 people on the Times Rich List donated £12m to the big three parties, plus UKIP and the Greens. Unsurprisingly, £5.5m of the loot went to the Tories, with the LibDems taking £3.5m, and Labour £2.2m.[ii] Labour also raised money from affiliated trades unions, while all parties routinely collect smaller donations and membership fees. It’s donations that concern us here.


Election 2017: Theresa May shares a joke with her campaign managers.

The taint of corruption around party finances has lingered long, with legislative remedies beginning with the Corrupt and Illegal Practices (Prevention) Act of 1883. Somewhat more recently, the Honours (Prevention of Abuses) Act of 1925 addressed the specific problem of parties selling titles for cash, although with only limited success, as the ‘Cash for Honours’ scandal of 2006 amply demonstrated. In 2000, the Political Parties, Elections and Referendums Act created the Electoral Commission, which now regulates party finance and electoral probity.

That political donations should be kept free from graft and rascalism has few dissenters, but it is commonly accepted that parties should have to raise their own funds. Support for the alternative – full state funding – has long been poisoned by the personal avarice of discontinued MPs like Neil Hamilton, Stephen ‘Cab for Hire’ Byers, and Patrick Mercer whose greed seeps along the gutter of recent political history like saliva from an up-turned trough. There’s widespread cynicism about the democratic cost of such fundraising – what precisely it is that wealthy patrons are buying – but not yet enough to make improper spending of private largesse a greater popular evil than correct spending of the taxpayer’s hard-earned cash.[iii] More on state funding in a moment but it’s not happening any time soon.

Spending it.

To the second problem, then: what do parties spend all their treasure on? The Electoral Commission chart makes it clear. Of the nine categories included, only transport and administration aren’t some form of political communication or, to use the older word, propaganda (you can see all the categories by clicking here – ignore the split over s. 75 spending).

One account of the need for political communication, which lives on in the literature to this day, is the work of the celebrated American economist Anthony Downs. In the 1950s, he argued that parties need to build durable coalitions of support among voters who ‘invest’ in them with their votes. However, before buying stock in a party, one first needs to shop around by informing oneself of the competing parties’ positions on key issues. Inevitably, this self-education is time-consuming and, given that one’s own vote makes effectively zero difference to the result, is an ‘irrational’ expenditure of effort for no measurable gain. To counter this problem, political communication acts as a subsidy to the electorate: parties proactively advertise their wares, distilling policies into easily understood offers that allow the punters to reduce the costs of ‘spending’ their vote on the best deal for them. More on this in a moment.

Winners and losers.

That’s a formal answer to why parties need to raise so much money for communications but let’s ask another question. Who, aside from advertising agencies, benefits from the parties’ need to spend so much money on campaigning? I see four consequences of arrangements as they are.

The first, obvious consequence is that smaller parties can only chirp while the Big Two screech.[iv] Parties spending heavily on their campaigns encourages the other parties to compete, but if they can’t, their presence in the campaign will likely be reduced. This leads to a diminished field of choice and the possibility of a cartel.[v] True, UKIP is a recent example of a small party that did project a substantial national voice — including a coveted role for Nigel Farage as the Alan Davies of Question Time — despite the Party’s failure to trouble the House of Commons (other than by reanimating the political corpses of former Tory MPs).[vi] But its reach until the Brexit referendum was aided by a string of sugar daddies[vii] who allowed it to hate above its weight (and, I must concede, by a preternatural gift for melding decades of inchoate grievance into an unbending determination to vandalise the European flag with blood fresh from the nation’s wrists).[viii]

Secondly, the need to spend so much money on campaigning promotes centralised control within parties. To varying extents, local MPs are dependent on their party machine for access to its resources for canvassing, leafleting, local market research, and so on. In the 2015 General Election, the Tories conducted private polling of 80 target seats to provide local candidates with detailed information. Labour, borrowing from the Obama campaigns, constructed a new database and voter profiling system.[ix] Both parties maintained large teams of volunteers to deploy in key constituencies.[x] The threat of being robbed of all this muscle arguably serves as a strong informal way of disciplining MPs and candidates. In the 2017 General Election, so Alex Nunns suggests, the Labour bureaucracy may have in some cases allocated its social media, wide direct marketing, and targeted direct marketing spending at least partly with the intention of bolstering candidates with whom it had a ‘political affinity’ (i.e. being anti-Corbyn).[xi]

A third consequence of parties’ need for large amounts of money is that it allows capital, individual or corporate, leverage over politics. The investment theory of party competition formulated by the American academic Thomas Ferguson holds that parties can only adopt policies that enable them to attract the investment required to run successful campaigns.

“…it is a simple fact that virtually all the issues that both elites and ordinary Americans think about outside of or alongside campaigns – work and employment, free trade or protection, health care, the future of … production, the cities, taxes – are critically important not only to voters, but to well-organised investor blocs, businesses, and industries. And it is another simple fact that many such groups invest massively in candidates.”[xii]

Ferguson proposes an alternative to Downs’s model, arguing that, while voters cannot practicably invest the time required to properly acquaint themselves with all the issues that affect their interests, capital can. Much like business can give concerted attention to an issue in a way that community activists cannot (as I’ve discussed here, for instance), big business has the resources and focus to thoroughly evaluate candidates and parties to reward those who best serve their interests. This (along with slick lobbying operations and corporate PR generally) naturally influences the boundaries of political debate; promoting some issues and suppressing others.

Fourthly, I think Downs’s market analogy is naïve. It neglects, in both cases, the true nature of the communication being deployed, which (to borrow from Jürgen Habermas) is not rational but strategic.[xiii] Strategic communication treats people not as ends but as means, either to shift product or to win office. In a word, it’s marketing. If you doubt this in either case, ask yourself how much political or commercial communication is a simple attempt to explain the merits of a policy or product without recourse to evasion, emotional manipulation or plain deceit. As Aristotle would have noted, marketing runs long on ethos and pathos but very short on logos.[xiv] Negative campaigning, gimmicks, emotional manipulation, audience research, market segmentation, and targeted messaging impoverish genuine political debate and corrode collective critical faculties. We become a pack of dogs who salivate or snarl whenever the bell is rung (or a whistle blown).

There we have the problem as I see it. Parties prospering through donations is fundamentally undemocratic since, however stridently some protest to the contrary, voting is not the same as spending money. The vote is a token that levels the electorate because everyone gets just one. In the current system, wealth entrenches wealth as parties rent-seek from high-spending minorities while neglecting the stony majority.

You might argue that this doesn’t have too much effect since all parties are heard to some degree (especially under election broadcasting rules, which undoubtedly benefitted Jeremy Corbyn in 2017) and the diligent voter can discover whatever she needs to with a little research. But if the ability to deploy millions in advertising confers little advantage why bother? Some might also argue that individual donations reflect wider support in the country: the more support a party has the more donations they’ll receive and so the louder their voice becomes. But this is to put the electoral cart before the democratic horse. Democracy should not be about allowing popular messages to be heard more loudly but allowing all messages to be evaluated equally.

Keeping them honest.

The parties can be kept honest in two ways: by keeping an eye on how they raise money and a rein on how they spend it.

To take the first approach, the most widely-touted remedy is state funding. In fact, I used the phrase ‘full state funding’ earlier with good reason. Though not widely known, opposition parties in the Commons receive state funding for carrying out their parliamentary duties, research, formulating policy, travel expenses, and so forth. This ‘Short Money’ was introduced in 1975 and is paid to any opposition party with at least two MPs.[xv] There’s an involved formula for calculating payments but, in 2016-17, the Labour Party received just under £6.5m and the LibDems half a million. Somewhat farther from the buffet table, the Greens and UKIP received £216k apiece.[xvi] One might develop this into a formal system of state funding (encompassing the ruling party) and allocate funds for extra-Parliamentary activity. Naturally, as the current system rewards parliamentary incumbency, one would need to find an equitable way to extend it to parties without a seat.

There are several drawbacks to state funding. One is that a background level of mismanagement and expenses-rigging would occur, but that is already the case. In fact, it might be preferable for the taxpayer simply to accept MPs’ more venal tastes but take advantage of state buying power. At least a heavily-discounted bulk purchase of Colombian cocaine would finally give Liam Fox a post-Brexit trade deal to crow about. And a similar Parliamentary five-year tender for prostitution services would surely be welcomed by upstanding members on both sides of the House.

Another drawback might be that, as in rentier states, state funding runs the risk of making the political class flabby and unresponsive. Relieved of the need to rattle a tin at donors big and small, parties would sit back and live on benefits. Worse, as the parties would be responsible for drawing up the rules, this would only increase the tendency of the main parties to act as a cartel. I suspect, however, that both these objections would be firmly at the back of a public mind wholly dominated by one preeminent demurral: “Why should we give those fuckers a penny?”[xvii]

The other approach, spending controls, is also partly in use. Earlier this month, the Electoral Commission fined Leave.EU £70,000 and referred its CEO to the police for failing to report at least £77,000 of campaign spending.[xviii] In 2017, the Commission also fined the official Remain campaign and the LibDems for undeclared spending during the EU referendum campaign and the Conservatives £70,000 for breaches during the 2015 General Election and by-elections in 2016.

Nationally, registered parties may spend no more than £30,000 for each constituency they contest. This means parties fighting all 650 seats have their spending capped at £19.5m, but that money doesn’t have to be spent equally on each constituency. Locally, individual candidates may spend an additional £10-16k in the 25 days before a General Election but, for a by-election, the limit is £100k.[xix] Fines are available for breaches of these laws and, if a breach is sufficiently grave, the local result can theoretically be voided and the election re-run. There is a defence, however, which applies if the election agent committed the offence without the ‘sanction or connivance’ of the candidate. Which, of course, will always be the case.
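The arithmetic behind the national cap is simple enough to sketch. The figures below are the ones stated above, not an authoritative statement of electoral law:

```python
# A quick sketch of the national spending-cap arithmetic described above.
# Figures are those given in the text; they are illustrative, not legal advice.

PER_CONSTITUENCY_CAP = 30_000   # £30,000 per contested constituency
TOTAL_SEATS = 650               # a party contesting every UK seat

national_cap = PER_CONSTITUENCY_CAP * TOTAL_SEATS
print(f"Cap for a full slate: £{national_cap:,}")  # £19,500,000, i.e. £19.5m

# The cap scales with the number of seats contested, so a smaller party
# fighting, say, 100 seats has a correspondingly smaller ceiling.
print(f"Cap for 100 seats: £{PER_CONSTITUENCY_CAP * 100:,}")  # £3,000,000
```

Note that the cap is a national total: as the text says, a party is free to concentrate that money on a handful of marginal seats rather than spread it evenly.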

A different way?

The shared premise of controls on both how parties raise and spend campaign funds is that it is necessary and proper that parties spend money on campaigning. But what if one were to take a different view and argue that political campaigning — or, more precisely, the promotion of genuine, democratic debate — is too important to allow it to be contaminated with money? Formally at least, elections are supposed to be competitions between competing sets of policies, even visions for the way life should be. Money distorts this competition for three reasons. Firstly, if money confers advantage and is for any reason unevenly distributed then advantage will be unevenly distributed. Secondly, if the flow of money going into the parties reflects the existing distribution of power within society, then those with power will be privileged. Thirdly, and most fundamentally, using money to make one policy more prominent or appealing at the expense of another runs the risk of that other idea not being given its proper consideration. This is labouring an obvious point, I know, and also assumes that the campaign is about actual policy at all and not merely personality or ‘values’.

I’m not arguing that parties should be barred from issuing political communications. But would it not be better if a party’s voice, certainly at election times, had nothing to do with how much moolah it could muster? Instead, imagine if each party meeting certain eligibility criteria (fielding a minimum number of candidates, for instance) received an allocation of free communications but was barred from purchasing more. This allocation would include the printing and delivery to all homes of its manifesto, a certain number of party political broadcasts, access to hustings, money for events, and social and old media advertising. Locally, each candidate mustering over a certain threshold of signatures would get an equal allocation of materials and promotion. Personally, I would ban local and national polling in the month before a general election so that, rather than fine-tuning a message to get elected, parties would instead have to stake out a position and campaign for it.

Importantly, there would be no state interference with the content of the message, only parity of prevalence. Parliament would inevitably have to set the broad outline and intent of the policy, but the practical details and enforcement could be left to the Electoral Commission. The number of news media appearances — and hence the power of the corporate press to pick winners — would be harder to regulate but democratising the media is a separate issue.

This is a sketch rather than a solid, worked-out policy proposal, but I think the idea would address some of the problems of the current system. Shorn of the need for campaign donations, parties would be far less in hock to business, especially if capped state funding for administrative costs were standardised. The debate between the larger and smaller parties would also be evened up: a good thing, as how large a party is has no necessary connection with the merit of its position. If the means of campaigning were roughly the same, then the focus would have to be on the content of what each party was saying. It wouldn’t stop parties trying to bribe and mislead the electorate, but a campaign stripped of a lot of the flimflam might help the substantive issues come to the fore, leading to a more informed and responsible public.

Which is why it’ll never happen, of course.



[i] Labour spent £11m and the LibDems £6.8m. Peter Walker, “Tories spent £18.5m on election that cost them majority,” The Guardian 19th March 2018, available at

[ii] Alastair McCall, “Britain’s richest give £12m to parties fighting election,” The Times, 21st May 2017, available at

[iii] Author unknown, “Britain’s parties should be funded by the state,” Financial Times, 19th February 2015, available at

[iv] And the incumbent party will receive more coverage as both a party and as the Government.

[v] See, for example, Katz, R.S., Mair, P., (1995), ‘Changing Models of Party Organisation and Party Democracy: The Emergence of the Cartel Party’, Party Politics, Volume 1, pp. 5-28.

[vi] Douglas Carswell and Mark Reckless.

[vii] Anna Leach, “Meet UKIP’s 5 biggest donors”, The Mirror, 2nd January 2015,

[viii] And we may yet see a new small party punch above its parliamentary weight if a new ‘centrist’ party, apparently equipped with £50m to ‘break the mould’ of UK politics by giving a much-needed voice to marginalised and powerless multimillionaires like Simon Franks, ever emerges into the light (Michael Savage, “New centrist party gets £50m backing to ‘break mould’ of UK politics,” Guardian 8th April 2018, available at )

[ix] Andrew Mullen (2015) “Political consultants, their strategies and the importation of new political communications techniques during the 2015 General Election,” in Daniel Jackson and Einar Thorsen (eds) “UK Election Analysis 2015: Media, Voters and the Campaign” The Centre for the Study of Journalism, Culture and Community, p. 42.

[x] Tim Ross (2015) “Why the Tories Won: The Inside Story of the 2015 Election,” Chapter Three. The Tories, for instance, spent hundreds of thousands on “Team 2015,” which borrowed psychological motivation techniques (including ‘chivvying teams’ of volunteers supplied by accountancy behemoth PricewaterhouseCoopers) from the team behind London 2012. The team was 100,000 strong, with groups bused around the country to campaign in target constituencies in what its organiser, Grant Shapps, called a ‘ground war’.

[xi] Alex Nunns (2018) “The Candidate” (2nd ed.), OR Books (London), pp. 335-38.

[xii] Thomas Ferguson (1995) “Golden Rule. The Investment Theory of Party Competition and the Logic of Money-Driven Political Systems,” University of Chicago Press, p. 8.

[xiii] See Habermas’s “The Theory of Communicative Action, Volume 1: Reason and the Rationalization of Society” and especially “The Theory of Communicative Action, Volume 2: A Critique of Functionalist Reason”. His “Structural Transformation of the Public Sphere” is also interesting.

[xiv] See Aristotle’s “Art of Rhetoric”.

[xv] Or if they have one MP but received more than 150,000 votes in the previous General Election.

[xvi] House of Commons Research Briefing, “Short Money,” 19th December 2016, available at. There’s a related allocation in the Lords, known as the Cranborne Money.

[xvii] There have probably been a few third ways proposed. In the US, for example, various experts have suggested systems in which each voter is given a voucher, which they can donate to the party of their choice. The voucher is then redeemable for cash but with enhanced safeguards attached. Seattle has been trying a variant of this for several years, which was touted to get ‘big money’ out of politics. I shan’t dwell on it here but, as corporations are still able to make cash donations, the ‘big money’ isn’t getting any smaller.

[xviii] Electoral Commission, “Leave.EU fined for multiple breaches of electoral law following investigation” 11th May 2018, available at

[xix] FullFact, “Democratic deficit? The rules on election spending,” 10th May 2017, available at

No weapon that is formed against thee shall prosper

The Guardian ran an opinion column last week by its foreign correspondent, Peter Beaumont, about chemical weapons.[i] He opened by evoking the blood and misery of World War One before coming to his central question: ‘why is it that we regard the apparent use of chemical weapons by the Assad regime (which has claimed relatively few lives overall) as more terrible than the crude pummelling by conventional arms which have [sic] resulted in hundreds of thousands of Syrian deaths?’

It’s a worthwhile question but I was sincerely taken aback by the emaciated reasoning that followed. Before I come to that, first the disclaimer required for those of bad faith or worse intelligence. I do not approve of chemical weapons, I don’t support Bashar al Assad, and I offer no view on whether his forces were responsible for the Douma chemical attack or, indeed, whether it was a chemical attack at all.[ii]

It seems to me that Beaumont attempts to answer his chosen question at two points in his article. His first attempt is partly historical. The Hague Convention of 1899 set out the humanitarian principles that would ‘later form the basis of the modern law of conflict.’ Among these was the section that limited the ‘right of belligerents to adopt means of injuring the enemy’. The first instance of this was the ban on poisoned weapons, itself building on a 1675 agreement between France and the Holy Roman Empire that banned poisoned bullets.

But what of poison gas? This was singled out because it ‘inspired a particular horror, in large part psychological.’ That it ‘has remained a special case is because of the way its prohibition has become emblematic of restrictions on warfare. We decided gas must not be used because of our horror of being gassed ourselves.’

Is this why we regard gas as more terrible than bullets, because we were its victims? When we become the victims of nuclear weapons, will their possession also move beyond the pale? Were the thousands of Japanese adults and children incinerated in our twin fireballs of Hiroshima and Nagasaki not enough for us to forever renounce these most indiscriminate of means? Evidently not. They’ve not even been enough for states like the US and UK to take seriously their obligations, under the Nuclear Non-Proliferation Treaty, to make good faith efforts to eliminate them (and certainly not to stop developing more ‘useable’ nuclear weapons).[iii] Indeed, the atom bombs are still defended as having helped ‘shorten the war’ – a defence Beaumont seems reluctant to allow Assad for his alleged use of chemical weapons. The salient difference, of course, is that Assad is on the ‘other’ side. Being on the ‘other’ side denies our enemies the right to make such decisions, to wage ‘just’ war, or to self-defence at all.[iv]

In the Great War, we were supposedly horrified by chemical weapons but, as Beaumont mentions, not enough to forswear them ourselves. In 1919, Porton Down boffins in Wiltshire developed the ‘M Device’, an exploding shell containing diphenylaminechloroarsine. 50,000 ‘M Devices’ were shipped to Russia to be used in British bombing of Bolshevik soldiers. Though few were ever used, those caught in their green cloud reportedly vomited blood and then collapsed unconscious.[v]

Winston Churchill infamously did not understand the ‘squeamishness about the use of gas’ against ‘uncivilised tribes’ (he was speaking of India), noting that it was not ‘necessary to use only the most deadly gasses: gasses can be used which cause great inconvenience and would spread a lively terror and yet would leave no serious permanent effects on most of those affected.’[vi] Churchill’s defenders often assert that he was talking only of tear gas and not poison gas per se. Yet this distinction might seem a little academic when, as the War Office noted of one then-common variant of tear gas in 1921, while it was ‘classified as non-lethal’ and was ‘far less noxious than even mustard gas,’ at the same time it might have ‘serious and permanent effects on the eyes, and even, under certain circumstances, cause death.’[vii]

I’ll also note that, while the historian Ray Douglas has pored over the evidence for Britain actually using chemical weapons in Iraq and found it wanting, he most certainly acknowledges that our lack of use arose from ‘practical difficulties rather than moral qualms’. Even in the oft-cited passage above, Churchill did not appear to think it wrong to use the ‘most deadly gasses,’ merely that it was not ‘necessary’. There’s no ‘horror’ there, simply a candid acceptance of chemical weapons as another tool in the white supremacist’s armoury. For some, in fact, chemicals were perhaps even a better weapon since their effects were ‘less terrifying’ than artillery shells or flamethrowers. Indeed, Douglas quotes a General Staff memorandum from 1919, which mused: ‘if it is advisable and possible to abolish gas on purely humanitarian grounds, the abolition of High Explosive, a far more terrible weapon which removes limbs, shatters bones, produces ‘nerves,’ and generates madness, is equally advisable.’[viii]

There may well be a public revulsion to chemical weapons but evidence of the same within elites seems thin. It certainly wasn’t suggested by British and American support for Saddam Hussein’s gassing of the Iranians – for which the US provided logistical support[ix] – or of Halabja, for which the US provided diplomatic cover[x] and the UK rewarded with £340m of additional economic support.[xi] To take just one more example, in 2006, a Ministry of Defence Inquiry reported that scientists at Porton Down had exposed 11,000 people to mustard and nerve gas in experiments carried out between 1939 and 1989; experiments which claimed the life of one serviceman and inflicted lasting damage on many more.[xii]

Beaumont then deploys his perfunctory second argument:

‘The argument that relies on the idea that other weapons are equally deadly misses the point, which is that we have decided that this class of killing – like the wanton murder of civilians and shooting prisoners – is beyond the pale.’

Is this really the point? That chemical weapons are uniquely horrific because ‘we’ have decided that they are? This is to invoke that old parental standby, ‘because I said so’. The argument betrays a certain western bias and the usual reek of hypocrisy. I can well imagine that other parts of the world might think we ‘miss the point’ that much of our arsenal is equally, if not more, reprehensible. We clutch our scented handkerchief to our nose at the whiff of chemical weapons while our depleted uranium leaves ‘babies with two heads. Or missing eyes, hands and legs. Or stomachs and brains inside out.’[xiii] Our white phosphorus burns people to their bones,[xiv] we perforate limbs to unstitchable mush with Dense Inert Metal Explosives,[xv] and rupture people’s internal organs or burn them to death while showing off the Mother of All Bombs, which might also be said to inspire a ‘particular horror, in large part psychological.’[xvi] Beaumont’s ‘fitful advances in the laws of war – contradictory and permissive as they remain’ seem all too ‘optional and reversible’.

So why do we pillory chemical weapons, which are revolting but not uniquely so? Perhaps it is because they, unlike our latest glittering engines of fully-automated luxury death, are not beyond the pocket of the Lesser Nations. To quote the Iranian politician Hashemi Rafsanjani, they’re ‘the poor man’s atomic bomb’.[xvii] As such, the taboo on their use is not only prophylactic but also a useful moral lever to justify our enlightened intervention.

[i] Peter Beaumont, “The taboo on chemical weapons has lasted a century – it must be preserved,” The Guardian, 18th April 2018, available at

[ii] Robert Fisk, “The search for truth in the rubble of Douma – and one doctor’s doubts over the chemical attack,” The Independent, 17th April 2018, available at

[iii] Most recently, see Clark Mindock “Trump administration considering developing two more ‘usable’ nuclear weapons,” The Independent, 16th January 2018, available at Note that such intentions are portrayed as a response to Russian behaviour but as Charles Ferguson of the Centre for Non-Proliferation notes, the US has been ‘downplaying and, in key instances, repudiating arms control agreements’ since at least 2002 (see Nuclear Threat Initiative, “Nuclear Posture Review” 1st August 2002, available at )

[iv] According to one BBC Radio Four news report I heard, Trump ‘warned’ of his recent attack on Syria while Russia ‘threatened’ to respond.

[v] Giles Milton, “Winston Churchill’s shocking use of chemical weapons,” Guardian, 1st September 2013, available at

[vi] J. A. Webster, Air Ministry, to J. E. Shuckburgh, Colonial Office, September 15th, 1921, PRO, CO 537/825, quoted in R. M. Douglas, “Did Britain Use Chemical Weapons in Mandatory Iraq?” The Journal of Modern History, Vol 81, No. 4 (December 2009), pp. 859-887. Italics mine.

[vii] Webster, op. cit. Note that the effects of exposure to mustard gas include blistering, blindness of up to ten days or in some cases for good, severe abdominal pain, shortness of breath, nausea, vomiting, chronic respiratory disease, cancer, and death.

[viii] Webster, op. cit.

[ix] Patrick E. Tyler, “Officers Say U.S. Aided Iraq in War Despite Use Of Gas,” New York Times, 18th August 2002, available at; Shane Harris and Matthew M. Aid, “Exclusive: CIA Files Prove America Helped Saddam as He Gassed Iran,” Foreign Policy, 26th August 2016, available at

[x] Prof. Juan Cole, “US Protected Iraq at UN from Iranian Charges of Chemical Weapons Use,” Informed Comment, 28th August 2013, available at. Robert Fisk reported that ‘the CIA – in the immediate aftermath of the Iraqi war crimes against Halabja – told US diplomats in the Middle East to claim that the gas used on the Kurds was dropped by the Iranians rather than the Iraqis (Saddam still being at the time our favourite ally rather than our favourite war criminal).’ (Robert Fisk, “This was a guilty verdict on America as well,” The Independent, 6th November 2006, available at)

[xi] A month after Halabja, the UK Government extended a further £340m in export credit guarantees to Saddam Hussein (John Kampfner (2003) “Blair’s Wars,” Free Press, London, p. 7). See also Alex Danchev and Dan Keohane (eds.) (1994) “International Perspectives on the Gulf Conflict, 1990-91,” Palgrave Macmillan, London, p. 148.

[xii] Rob Evans, “Porton Down chemical weapons tests unethical, says report,” Guardian, 15th July 2006, available at

[xiii] As Barbara Koppel wrote in 2016, “what is little known is that for the past 25 years the United States and its allies have routinely used radioactive weapons in battle, in the form of warheads and explosives made with depleted, undepleted or slightly enriched uranium. While the Department of Defense (DOD) calls these weapons ‘conventional’ (non-nuclear), they are radioactive and chemically toxic. In Iraq, where the United States and its partners waged two wars, toxic waste covers the country and poisons the people.” Barbara Koppel, “How the U.S. Made Dropping Radioactive Bombs Routine,” Newsweek, 4th April 2016, available at. For detail on the US use of DU in Syria, see Samuel Oakford, “The United States Used Depleted Uranium in Syria,” Foreign Policy, 14th February 2017, available at

[xiv] See George Monbiot, “Behind the phosphorus clouds are war crimes within war crimes,” Guardian 22nd November, 2005,

[xv] DIME weapons were developed by the US and use a fine powder of tungsten or carbon fibre to confine the blast to a small area, perforating flesh and bone. They have allegedly also been used by Israel in its colonisation of Palestine. See Raymond Whittaker, “’Tungsten bombs’ leave Israel’s victims with mystery wounds,” The Independent, 18th January 2009, available at. According to a report commissioned for the International Committee of the Red Cross in 2016, there are ‘concerns that wounds from DIME weapons are particularly difficult to treat surgically, and may have ongoing health impacts’ (Cross, Kenneth, Ove Dullum, Marc Garlasco & N.R. Jenzen-Jones. 2015. Explosive Weapons in Populated Areas: technical considerations relevant to their use and effects. Special Report. Perth: Armament Research Services (ARES), available at)

[xvi] Thermobaric weapons like the MOAB (Massive Ordnance Air Blast) were developed by the US Government and used in Vietnam, as well as being used by the Russians in Chechnya. Human Rights Watch quote a 1993 Defense Intelligence Agency report on the Russian bombs (although the effects don’t differ with whichever flag is painted on the casing): ‘The [blast] kill mechanism against living targets is unique–and unpleasant…. What kills is the pressure wave, and more importantly, the subsequent rarefaction [vacuum], which ruptures the lungs…. If the fuel deflagrates but does not detonate, victims will be severely burned and will probably also inhale the burning fuel. Since the most common FAE fuels, ethylene oxide and propylene oxide, are highly toxic, undetonated FAE should prove as lethal to personnel caught within the cloud as most chemical agents.’ Human Rights Watch (2000) “Backgrounder on Russian Fuel Air Explosives (“Vacuum Bombs”),” available at. One Pentagon report into the MOAB used typically anodyne language: ‘It is expected that the weapon will have a substantial psychological effect on those who witness its use.’ Robin Wright, ‘Trump Drops the Mother of All Bombs on Afghanistan,’ The New Yorker, 14th April 2017, available at

[xvii] ‘While nuclear weapons represent the zenith of mass destruction, their fabrication requires advanced industrial capabilities as well as access to rare, tightly controlled materials. Chemical and biological weapons, on the other hand, are cheap and easy to build using equipment and materials that are used extensively for a host of civilian purposes.’  Lord Lyell “Chemical and Biological Weapons: The Poor Man’s Bomb Draft General Report,” North Atlantic Assembly International Secretariat 4 October 1996 Draft, available at


Life Versus Liberty

There’s been another mass shooting in an American school. Well, I’ve not checked Twitter for ten minutes but I’ll assume there has been.

The massacre in Parkland, Florida, last Wednesday may have killed 17 and injured 15 but we should stay upbeat: the five mass shootings since then[1] killed only six and injured 19 between them. No wonder Al Qaeda had to fly planes into skyscrapers in 2001; in the US, atrocity is a crowded market.

In the face of this, Congress is paralysed by a deep sense of frustration. There is no obvious tax cut for corporations that will address the problem, bombing would be too costly, and victimising Muslims – while satisfying – is only indirectly effective. In the absence of a workable programme of appearing to do something, the only options left are too effective to contemplate.

The public have of course been praying; expressing their faith that, in a country where the weekly school shooting is timetabled in with the grim inevitability of double games on a Friday morning, some god or other will finally notice the hashtag and decide that enough is enough. One can only hope that it’s not Jesus, who previously tried to drown humanity in a fit of rage and rejection lacking only a trench coat and his Father’s AR-15.

For the gun-owning minority, the principal answer to the problem of mass shootings is not fewer guns but more. The problem, they say, is not children with guns but children without them. Were the US to properly support a policy of No Child Left Unarmed, then we could trust to the inherent wisdom, judgement, and restraint of teenagers. Indeed, the problem could become self-regulating with little need for authorities to intervene. If teachers and students all carried guns and there were more metal detectors, armed security patrols, and bullet-proof screens, then schools would not only become safer but, almost indistinguishable from adult prisons, would provide useful orientation to those black kids who went on to reach adulthood.

I guess this reasoning proves the old adage that, when the only tool you have is a hammer, every problem looks like the bullet-peppered corpse of a child. For those softer-hearted folk who don’t want to see schools turned into fortresses, it’s hard to think of a way of protecting children that might find favour with the American right: compulsory home-schooling perhaps, or a change in zoning laws to move schools from ‘Residential’ to ‘Womb.’

Naturally, the NRA, which is funded by the gun industry, wants to see more people carrying guns, just as tobacco companies want to see more people smoking. But the NRA also serves to draw much of the wider public’s rage on to it and away from gun manufacturers. I imagine that America’s target shooters, survivalists, recreational sadists, and Birthers are delighted to be the industry’s flak jacket when one of their number flicks off his safety for the final time.

The obvious solution to gun violence is to restrict or eliminate private ownership of guns but this runs into the customary objections. The US Constitution is a sacred, inviolable, and immutable document handed down by God to the Founding Fathers and the right to bear arms is one of the most cherished amendments to this sacred, inviolable, and immutable document handed down by God. Gun owners will also accurately point out that ‘guns don’t kill people, people kill people’ and that these vital tools of self-defence confer no real advantage. Without them, perpetrators would only use knives, cars, or perhaps their teeth. We should be thankful that guns have so far saved us from an epidemic of mass bitings or the black farce of an angry young man trying to negotiate his parents’ SUV down narrow school corridors in search of the girls who laughed at his penis.

Guns, so the reasoning goes, are just a tool like any other. Yes, they can be used to kill people but they are also used every day for a range of purposes such as injuring people, damaging property, protecting people from deer and rabbits, and facilitating unorthodox banking and retail transactions. That they might confer some marginal tactical advantage over unsuspecting children sitting in classrooms is strictly true but then so would any weapon. One wonders, really, why humans bothered to invent such a patently inconsequential toy as the handgun in the first place. Also, gun advocates claim, banning guns won’t stop professional criminals from obtaining them. This is true but one wonders how many professional criminals would shoot you because you didn’t like their poem.

It is also true that some other countries have high rates of gun ownership yet far, far fewer mass shootings. So, the presence of guns alone isn’t the whole of the problem. Maybe there is some issue with the American psyche that needs to be addressed, something that would explain their tendency to shoot not only each other but the rest of us as well. What does lead otherwise sane members of the public to shoot up their classmates or kill in petty disputes over parking places, romantic rejection or crude oil deposits?

Here, then, the American reputation for practicality over ideology should come into play. They need to decide which is the quicker fix: a centuries-long thoroughgoing and fundamental realignment of American cultural, spiritual, and economic values to remove major sources of anger and alienation, recast the conduct of interpersonal relationships, neutralise toxic masculinity, and thereby engineer an epochal remodelling of human nature OR ban guns, which might take years. There are no easy answers.

Still, I should try to end with something positive. Statistically speaking, kids are still more likely to die from obesity than from being shot and fat kids, while slower at fleeing down corridors, are also less adept at climbing on rooftops with heavy ammunition. And widespread gun ownership means more US medallists on the podium for Olympic shooting events – even if they do look surprised to see an American flag flying at full mast.

Sleep tight, little ones.



[1] Oklahoma City (16/2), Keego Harbour (16/2), Memphis (17/2), Kansas City (17/2), San Antonio (18/2); data courtesy of Mass Shooting Tracker (accessed 19/02/2018).

Blighted are the Shelf-Makers

I’m old enough to remember video tape with affection. My family acquired its first video cassette recorder around 1982 when the novelty was still vivid. It was VHS, front-loading, the size of a small family hatchback and by modern standards almost Heath-Robinson in its brute mechanical beauty.

It’s hard to explain why I feel such nostalgia for what objectively was a clumsy cacophony of rubber, metal, and plastic, but I do. For the six-year-old me, it was irresistibly encrusted with buttons, knobs, sliders, and dials. The customary shopfront of its principal controls – play, rewind, and pause – was pleasure enough but other treasures were hidden beneath a hinged flap on the lower front and a detachable panel inset in the top. Beneath these glittered the more exotic controls like ‘tracking’ and ‘input,’ eight mechanical tuning dials and the ‘AFT’ button.[i] When Channel Four was born in November 1982, my dad had to get on his hands and knees, pop the top panel and seek out primordial Countdown through the crashing surf of static.

I can remember pressing the Standby button, opening the door and seeing tantalising glimpses of the illuminated heads, capstans, and spindles within. I can hear in my head, as clearly as you can remember your favourite song, the refrain of its mechanism as I pressed a tape into the front door and watched as it was drawn inside the beast. Sometimes it was a video mechanism, other times it was the landing bay door of a secret base.

It even had a ‘remote’ control: play, pause, fwd, rew, and rec attached via a 3ft cable that plugged in at the back and, once passed over the machine, afforded one the luxury of operating it from about 18 inches away. I’m even fond of the problems that afflicted its dotage (and my teenage years, when it became mine alone) – the way it would sometimes crimp the edge of the tape, irreparably knackering the sound on some of my favourite tapes.

These were the days when I had a library of blank cassettes, some labelled (most not) and packed with recordings of Doctor Who and Star Trek: The Next Generation. The E120, the workhorse E180, the mighty E240s. The Scotch ‘lifetime guarantee’ fronted by an amiable skeleton. The etheric and unrepeatable[ii] magic of TV, captured and tamed in a shiny box like a ghost trapped by Venkman, Stantz, Spengler, and Zeddemore.

I remember, into the 90s, the archaeological pleasure of watching old tapes, especially those borrowed from friends, through to the end. The first recording would finish, there’d be a wash of static, and then the fag end of the recording beneath would slide into view. Then another and another. I’d often watch tapes through right to the point when they’d click off and rewind. One minute, you’re watching ITV’s bowdlerised 90s cut of Heartbreak Ridge (complete with the minced oath, ‘maggot farmer’), then you’re transported into the technicolour fantasy of an 80s ad for Kellogg’s Fruit n Fibre (the one with Ross Kemp) or those weird 80s Weetabix commercials in which booted and braced skinhead biscuits of wheat would intimidate other cereals (and we accepted this as normal).

At the weekends, I was allowed to accompany my dad to the Six Hills Video Shop and choose a title from the seemingly enormous array of display cases that bejewelled its walls. Only from the Us and PGs, of course, although I was obviously far more enticed by the 15s and 18s, which all had far more exciting and stimulating covers (especially some on the top shelf in one corner) and were alluring because they were forbidden.

It’s all gone now. Funai Electric manufactured the last video recorder in July, 2016. While there is a small but enthusiastic market for old video tapes, particularly the more obscure horror movies, I doubt there’ll ever be a ferrous oxide resurgence to mimic that of vinyl. Yet, our language is an analogue recording of history. I still hear people talk about ‘taping’ and ‘rewinding’ and we’ll still be discussing the medium of film long after celluloid takes its place next to wax cylinders and daguerreotypes. One day film will exist only in films.

The big selling point of video recorders was convenience and, notably, control. Watch what you want to watch, when you want to watch it. Don’t be a slave to those damned TV channels but the master of your own viewing pleasures. As Troy McClure said to Delores Montenegro (in ‘Calling All Quakers’) ‘have it your way, baby.’

Fast forward thirty years and we’re now in another revolution of convenience and ‘control.’ The age of the DVD and the brief blu dawn are coming to an end and now we are dipping our toes in the Great Stream. We now watch even more of what we want to watch, when we want to watch, and without a chilly walk to the video shop or the need to endure the crunching, chattering rabble at the local flicks. We watch, listen, chat, and shop online. But how much of the new control is real?

It’s easy to focus on the petty irritations of the digital world. Netflix’s co-founder recently declared their aspiration that one day it would ‘get so good at suggestions that we’re able to show you exactly the right film or TV show for your mood when you turn on Netflix.’[iii] But what if I aspire to read the credits uninterrupted? What if I think that the programme makers might sometimes use the credits for dramatic effect? Instead Netflix, like an overeager waiter, whips away the programme and algorithmically catapults me toward the next course. It’s not wholly new, of course; even on terrestrial TV credits have been squeezed for years by the cajolery of continuity chatterers. But it’s still annoying.

[Image: Silent credits attend the death of Adric in the 1982 Doctor Who story, Earthshock]

Trailers have always been part of home media. They were there in the VHS days but were at least fast-forwardable. Nevertheless, imagine visiting a Blockbuster and having a doorman compel you to watch one before you even reached the shelves. This is now what the Android Amazon Video app does at least once per day. Yes, one can stop it once it has started but one cannot stop it from starting. At least at the cinema people can use the adverts and even the trailers to have a conversation, check their phone, or return to the foyer to secure yet more food. Much as they do with the eventual film.

We tolerate behaviour online that we would likely never put up with in person and here I’m not discussing the hourly scorching belligerence of ‘social’ media so well summed up in this video. I mean the behaviour of companies online. Imagine for instance that, near the end of your weekly shop, a store assistant blocked your path and wouldn’t let you get to the checkout until you’d accepted or rejected a list of items in which she thought you might be interested. I think most people would find that hectoring and coercive yet it’s precisely what one has to accept in order to shop online with Sainsbury’s.

Worse still, imagine the indignity, the sense of violation you would feel if someone broke into your house and stole your CDs. Imagine then instead how much worse you’d feel, how much more soiled, degraded and sullied, if instead of perpetrating such a theft – or merely having a shit on your couch – they left you an album by U2.

Speaking of music, some of you who’ve used the Amazon Music Player might have noticed that it has a subsidiary function, carefully hidden, of allowing you to actually play the music you’ve purchased. Its core function, of course, is to pelt you with inducements to buy more music, preferably via a subscription. This is quite reasonable since, putting chummy marketing aside, Amazon’s sole objective is to persuade you to take money out of your account and put it in theirs. The product itself is a mechanism for selling you more products (again, not new but accelerated online). Helping you to actually listen to your music is very much a secondary concern in what should really be called the Amazon Music Seller. Apps are less like faithful servants and more like pestering children.

“I’ve come up with a set of rules that describe our reactions to technologies:

1) Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2) Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3) Anything invented after you’re thirty-five is against the natural order of things.”

Douglas Adams, The Salmon of Doubt.


Lest I simply sound like a grumpy old man adumbrating a litany of my peeves, let me make clear that there’s a political edge to my grousing; namely, increased control masquerading as choice. The range of baubles for us to play with has increased but the price is that our leisure time – socialisation, entertainment, education and consumption – occurs conveniently on someone else’s property. We’re shopping, playing, watching, chatting and searching by their rules. We’re steered where they allow us to go, finding what they want us to find, knowing what they want us to know. Our physical space has already been colonised – what isn’t owned by government is owned by private capital, and public town squares have already become private malls. Now cyberspace is heading the same way (and with a massive in-built head start). Sound overblown and conspiratorial? Perhaps today, but tomorrow?

One of the great sleights of hand in recent years, for instance, has been the promotion of ‘the cloud’ – with all the connotations of ownerless neutrality this inspired piece of thought-steering conjures. After all, nobody owns a cloud; it must just float above us like some beneficent 21st century commons. In fact, the cloud is a network of servers belonging to commercial companies ranging from relatively modest independents to the GAFA behemoths of Google, Amazon, Facebook, and Apple. Of course, invitations to store one’s data in ‘the cloud’ sound much more benign than ‘on our servers.’

Well, OK, storing one’s property on someone else’s turf isn’t necessarily a one-way ticket to Oceania, is it? After all, people dump their shit in Big Yellow Storage all the time without having to affirm that they love Big Brother. Except that it’s no longer your property. No, that film you bought last night from Amazon isn’t yours. In fact, you’ve merely leased it for an indefinite period. Now, you might argue that it was never really yours before. The contents of DVDs, books, CDs, and VHS were all copyrighted – yours to own but subject to strict conditions – so what’s really changed? Well, check Amazon’s T&Cs – they can remove your purchase at any time. Unless you download it to your own storage, you don’t have the unconditional possession that you had over an Amaray-enclosed disc. You’re not purchasing anymore. You’re renting – on a very long term, granted – but you’re renting. Soon, there’ll be no more borrowing a DVD or a book from a friend, and you won’t be taking yours down to your favourite charity shop when you’re done, either. Like the message, the medium is now theirs. Your shelves of DVDs, CDs, and books will evaporate into a cloud library hosted (held) within someone else’s property. One day, all that visitors will have to judge you by will be some misguided ornaments and your personality.

And the capacity to monitor our viewing habits has also increased. The obvious concomitant of Netflix being able to suggest what we might want to watch is that it knows what we have watched. For most people this is no real practical concern but it’s another piece of infrastructure for a surveillance state, another category of data to add to all the others potentially allowing for a detailed picture of us to be constructed and – ask any lecturer wanting to talk about Brexit – some people are just itching to know as much about us as possible. The next time you binge-watch The Handmaid’s Tale remember that you might be munching Doritos in the prologue.

And what happens when Amazon goes bust? Where will your prized collection go when the company no longer exists? True, other companies might buy out the rights and the infrastructure but they don’t have to and won’t if they don’t think there’s money in it for them. Amazon use a proprietary format for Kindle, for example, so there’s no guarantee you’d be left with anything other than what’s stored on your hardware. And when that dies?

Video tapes, CDs, and even books are standards-based. So long as your equipment complies with those standards, you can read the content. A CD manufactured to the Red Book standard should play on any CD player. Region codes aside, a DVD of The Force Awakens will play on any machine. The latest Dan Brown novel is accessible to anyone who can read, although obviously appreciated to its fullest extent by those who cannot. Streaming and download services rely heavily on proprietary file formats to ensure that material isn’t shareable. There are presently exceptions but how long will they last? Look at the stranglehold (now slipping) that Microsoft has had on word-processing by making sure its .doc and .docx file formats are as opaque as possible.

Digital content such as films, audio files and eBooks are effectively software, with all the (potential for) control and restriction that implies. The apps on a smart TV can be withdrawn during forced ‘upgrades’ when licensing deals expire. So, that £700 set you bought with iPlayer and YouTube built in could be without both one day and there won’t be anything you can do except buy a new TV. And this isn’t hypothetical: it already happens. Let’s not be in any doubt what this is – the company from whom you think you’ve bought something has taken it back from you. Of course, this may be because of genuinely unavoidable incompatibility but it’s hard to believe that this isn’t also another mechanism for enforced functional obsolescence.

There’s no easy answer to this. The technology isn’t inherently wrong but it is massively corruptible. Nor is it going to go away: people will always be lulled by convenience. Alternatives to digital online consumption as part of our increasingly shut-in economy will wither unless we take positive action to keep them alive. They’ll be seen as troublesome, archaic eccentricities, like wanting to travel around New York without a car or live near an A&E. Being offline and off social media will never be forbidden, merely absurdly inconvenient. You’ll always be allowed to walk off the holodeck but why would you want to when beyond lies only isolation, and dark, dark silence?




[i] ‘Automatic Fine Tuning.’

[ii] Well, repeated a lot less in those days.

[iii] Unknown author, ‘Streaming on screens near you: Can Netflix stay atop the new, broadband-based television ecosystem it helped create?’ The Economist.

Costly Delusions

Last Friday’s failed ‘bucket bomb’ has produced yet more one-eyed coverage of Islamic terrorism and roiled the cauldron of social media. Islam, the crazed 7th Century death cult bent on universal domination™, has struck again. Now, I carry no more brief for the fairy tales of Mohammed than I do for those of the followers of the Carpenter of Nazareth. Nevertheless, I don’t accept the general charge that Islam is a religion evil above all others. Nor, despite my own atheism, can I join wholeheartedly in the savaging of Islam by ministers of the ‘new atheism’ – such as Sam Harris – who appear to have given up worshipping every god save the Holy American Empire. I also reject the widespread charge, expressed by David Cameron a few years ago, that ‘Isis is a greater and deeper threat to our security than we have known before.’[1] Certainly, I repudiate the accusation that Islam by itself is a sufficient condition to give rise to terrorism.

Simple arithmetic ought to be enough to illustrate the point. The Global Terrorism Database compiled by the National Consortium for the Study of Terrorism and Responses to Terrorism (START) offers itself as the most comprehensive non-classified database of terrorist attacks in the world. It holds details of approximately 170,000 terrorist attacks carried out globally between 1970 and 2016 by all affiliations and creeds (excluding states, but that’s a different discussion). During the same period the global Muslim population increased from approximately 700 million to 1.8 billion.[2] I don’t have the demographic skills or inclination to estimate how many unique Muslims have been alive for each year of that period but to round to 1bn seems a reasonable approximation. Let’s assume – wrongly – that each one of those 170,000 terrorist attacks was carried out by a different Muslim, so that there have been at most 170,000 Muslim terrorists. Dividing those fictional 170,000 Muslim terrorists into our one billion Muslims would mean they comprised just 0.017% of all Muslims. Put another way, about one in every 5,882 Muslims would have committed a terrorist attack. Of course, this calculation wildly exaggerates the number of Islamic terrorists in the world but, even after so doing, the idea that Islam itself causes terrorism is revealed as absurd. If Islam causes terrorism, why hasn’t it turned the other 999,830,000 Muslims into terrorists as well?
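The back-of-envelope estimate above can be checked in a few lines (a sketch only; the one-billion population figure is the deliberate rounding used in the text, and the one-attacker-per-attack assumption is the text’s deliberate worst case):

```python
# Worst-case assumption from the text: every one of the ~170,000
# attacks recorded by the GTD (1970-2016) had a distinct Muslim perpetrator.
attacks = 170_000
muslims = 1_000_000_000  # rounded approximation used in the text

share = attacks / muslims            # fraction on the worst-case assumption
print(f"{share:.3%}")                # share of all Muslims, as a percentage
print(round(muslims / attacks))      # "one in every N" Muslims
print(muslims - attacks)             # Muslims who are not terrorists
```

Even on this exaggerated assumption, the share works out at 0.017% of all Muslims, i.e. roughly one person in every 5,882.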

Deaths by terrorism in Europe

According to Europol, there were 142 failed, foiled, or completed terror attacks reported in the EU in 2016 (across six states). This was down from 211 in 2015 and 226 in 2014. Of those 142 attacks, 99 were carried out by ethno-nationalist and separatist groups. Left-wing extremists carried out 27 attacks, there was one right-wing attack, and two could not be attributed. This means that just 13 were carried out by jihadists (six of which were attributed to Islamic State).[3] These 13 attacks were also the only attacks with a religious motive; 90% were secular. It is true that Islamist attacks caused most of the casualties that year but it is still the case that less than 10% of terrorist attacks in the EU in 2016 were carried out by Islamists. This assessment also generalises for previous years – the majority of terrorist attacks have been carried out by ethno-nationalist groups and not by adherents of any religion.[4] On these figures, Islam – and religion generally – is a very poor predictor of terrorism. Perhaps a better predictor of Islamic terrorist attacks in Europe can be deduced from the graph above.

Deaths from terrorism in the US

The most recent whole-year figure for terrorist attacks in the US is for 2015 and is calculated by START.[5] There were 61 attacks in the US during that year, of which nine (just under 15%) were committed by Islamic extremists. Another study in 2016 looked at 201 terrorist incidents recorded since 2008, finding that while 63 incidents involved perpetrators ‘espousing a theocratic ideology,’ 115 incidents were down to right-wing extremists. In other words, right-wing extremists were behind nearly twice as many terrorist incidents as were associated with Islamists. The Islamists caused 90 deaths while the right-wing extremists killed 79.[6]

To put these deaths in perspective, in 2015, 91 Americans died in accidents involving lawnmowers.[7] In the same year, 44,193 Americans killed themselves.[8] Between 2005 and 2015, the number of Americans killed by gun violence was 301,797.[9] Excluding disease, it is Americans who constitute by far the greatest threat to Americans.

There are, of course, hotspots elsewhere in the world where nearly every terrorist attack is carried out by a Muslim. Perhaps not coincidentally, these are often places like Afghanistan and Iraq – made warzones by the US and UK – where the attackers are fighting occupation.

Well, all suicide bombers are Muslims, aren’t they?

Again, no. In fact, between 1980 and 2004, the world leader in suicide attacks was the Tamil Tigers, a secular group drawn from a largely Hindu population. Moreover, at least a third of the suicide attacks in predominantly Muslim countries were carried out by secular groups, such as the Kurdistan Workers’ Party (PKK) in Turkey.[10] The leading authorities in this field, Robert Pape and James K. Feldman, studied every one of the 2,178 reported suicide attacks between 1980 and 2009. They find that,

“Islamic fundamentalism cannot account for the steep upward trajectory of the annual rates of suicide terrorism— from an average of three attacks per year in the 1980s to over 500 in 2007—since it is implausible… that the number of Islamic fundamentalists around the globe rose by a similar astronomical rate (over 16,000%). Further, the geographic concentration also casts doubt on the causal force of Islamic fundamentalism. If religious fanaticism or any ideology was driving the threat, we would expect a spread of more or less proportionately scattered attacks around the globe or, in the case of Islamic fundamentalism, at least spread randomly across the 1.4 billion Muslims who live in nearly every country in the world. However, we are observing nearly the opposite of random, scattered attacks that would fit the pattern of a “global jihad,” but instead tightly focused campaigns of suicide terrorism that are limited in space and time and so would appear related to specific circumstances.”[11]

Pape and Feldman also note that Islam cannot explain why important suicide terrorist campaigns in recent years have ended. For example, since Israeli combat forces left Lebanon in 2000, there had not been a single Lebanese suicide terrorist attack by the time Pape and Feldman published in 2010 – not even during Hezbollah’s war with Israel in 2006. Yet Hezbollah remained an Islamic fundamentalist group throughout that decade.[12] The bottom line, as they put it, is that it is military occupation, not Islam, that drives suicide bombing.

Well, even if Muslims aren’t all terrorists, they certainly all support terrorists, don’t they?

Some of the most detailed and reliable work on opinion polling is done by the US-based Pew Research Center. They found in 2013 that ‘Muslims around the world strongly reject violence in the name of Islam.’ Roughly 75% of Muslims reject suicide bombing and other forms of violence against civilians, and in most countries the prevailing view is that such acts are never justified as a means of defending Islam from its enemies.[13]

In the US, a 2011 survey found that 86% of Muslims say such tactics are rarely or never justified. A further 7% say suicide bombings are sometimes justified, and just 1% say they are often justified.[14] A 2009 study by the network of public opinion in predominantly Muslim countries reported that ‘very large majorities continue to renounce the use of attacks on civilians as a means of pursuing political goals’. This was despite respondents supporting the goal of groups like al-Qaeda to expel US forces from all Muslim countries, and approving of attacks on US troops in Muslim countries.[15] Of course, there are Muslims with reprehensible views, and there is stronger support in some countries for terrorism, including against civilians (40% in Palestine and 39% in Afghanistan, according to the Pew study), but several Muslim nations have been under western attack for decades; a hardening of attitudes should be expected. What matters is that being of the Islamic faith is not, by itself, a reliable predictor of attitudes to – or participation in – terrorist acts. So long as we continue to delude ourselves about the reasons behind terrorism, we are throwing more bodies on the pyre.

[1] David Cameron  “Threat level from international terrorism raised: PM press statement,” 29th August 2014, available at

[2] To derive this figure, I have taken two estimates from H. Kettani, “World Muslim Population: 1950 – 2020,” International Journal of Environmental Science and Development (IJESD), Vol. 1, No. 2, June 2010, and

[3] Europol “EU Terrorism Situation and Trend Report 2017”, pp. 11 & 49. The report notes that completely accurate figures are difficult to establish as the UK does not provide disaggregated data.

[4] Europol “TE-SAT 2014: EU Terrorism Situation and Trend Report,” available at

[5] American Deaths in Terrorist Attacks, 2016

[6] Mythili Sampathkumar “Majority of terrorists who have attacked America are not Muslim, new study finds,” Independent 23rd June 2017, available at

[7]  Deaths in 2015 with ICD10 code W28 (Contact with powered lawnmower). Data from Centers for Disease Control and Prevention, National Center for Health Statistics. Underlying Cause of Death 1999-2015 on CDC WONDER Online Database, released December, 2016. Data are from the Multiple Cause of Death Files, 1999-2015, as compiled from data provided by the 57 vital statistics jurisdictions through the Vital Statistics Cooperative Program. Accessed at

[8] Centers for Disease Control and Prevention

[9] Linda Qiu “Fact-checking a comparison of gun deaths and terrorism deaths,” 5th October 2015, available at

[10] Robert A. Pape, James K. Feldman (2010) “Cutting the Fuse, The Explosion of Global Suicide Terrorism and How to Stop It,” p. 20. See also Pape’s 2004 study, “Dying to Win: The Strategic Logic of Suicide Terrorism”

[11] Ibid. pp. 38-39.

[12] Ibid.

[13] Pew Research Center “The World’s Muslims: Religion, Politics and Society,” 30th April 2013, available at

[14] Ibid. “Appendix A: U.S. Muslims — Views on Religion and Society in a Global Context,” available at

[15] “Muslim Publics Oppose Al Qaeda’s Terrorism, But Agree With Its Goal of Driving US Forces Out,” 24th February 2009, available from  Two polls conducted in 2006 by Pew and Terror Free Tomorrow reported that ‘Strong opposition to terrorism was found among Muslims in seven out of ten countries polled by Pew. This is especially true in the Muslim populations of Indonesia, Pakistan and Turkey, where six in ten or more say that “suicide bombings and other forms of violence against civilian targets” are “never justified.” The TFT poll of Indonesia and Pakistan found even bigger numbers rejecting all attacks on civilians. Pew also found complete rejection of terrorism among very large majorities of Muslims living in Germany, Britain, Spain and France. Trend line data available for some countries also show a significant increase in those taking this position in Indonesia and a remarkable 23 point increase in Pakistan. Only Turkey showed a slight downward movement.’ ( “Large and Growing Numbers of Muslims Reject Terrorism, Bin Laden,” 30th June 2006, available at )