My article ‘ “Hitler had a valid argument against some Jews”: repertoires for the denial of antisemitism in Facebook discussion of a survey of attitudes to Jews and Israel’, which was published online in April this year, has now appeared in the August issue of Discourse, Context & Media. It explains the background to the antisemitism crisis that has now engulfed the Labour Party leadership, then analyses some of the ways in which Labour supporters deny the existence of antisemitism before looking at how the largest unofficial Labour Party Facebook group makes the problem worse by readily expelling those who challenge antisemitism but only expelling antisemites for extreme transgressions.
‘Centrists,’ Jeremy Corbyn’s former official spokesman tells us, ‘love being talked down to by people in power as it provides false reassurance that they must be better than them.’ The centrist, another Corbynite B-lister explains, ‘has the weary, lecturing tone of a frustrated parent absolutely nailed on.’ But why be so mealy-mouthed? As the man who came closer than anyone to embodying the spirit of the ongoing neo-Tankie renaissance memorably put it, ‘better a thousand honest fascists than some glistening sleaze who’s “neither left nor right”.’
Is it wise to alienate voters who don’t identify with the left or the right? Some have taken last year’s general election result to demonstrate that it is. The Hammer of the Moderates himself, Owen Jones, has argued that an appeal to moderation can make no electoral sense ‘at a time when more than 80% of the electorate voted for a left-led Labour party or the Brexiteer Tories.’ But given that Labour lost that election by a wider margin of seats than it did under Gordon Brown, that argument seems a little odd. Surely losing an election to the Conservatives after publicly abandoning the centre doesn’t indicate that publicly abandoning the centre was the right thing for Labour to do?
But I think I’ve got it figured out now. These people live their lives on Twitter, where the best way of getting your voice heard is to start a fight that others will want to join in. Moderation never gets much traction there. It’s not like in the outside world:
Data courtesy of the British Election Study.
This is the third and final part of my preliminary analysis of groups of voters defined by the choices they made in the 2015 general election, the 2016 European Union membership referendum, and the 2017 general election (cf. Stephen Bush’s nine voter groups), using an English subset of responses to the British Election Study’s post-election face-to-face survey. In the first part, I looked at the ten largest groups, from Conservative-Leave-Conservative to Conservative-Remain-Labour, both in terms of their size and in terms of their self-declared likelihood to vote for various parties in future. I found that Labour Remainers were not only more numerous than Labour Leavers but (on their own assessment) more likely to be poached, and that the smaller group of Conservative Remainers who had switched to voting Labour were quite likely to switch again. In the second part, I looked at six groups of voters who had in common that they could have voted in the 2015 general election but did not. Most of them did not vote in the 2016 referendum or the 2017 general election either, and only the minority who voted Remain in the 2016 referendum were more likely than not to have voted in the 2017 general election.
To finish up for now, here’s a single chart showing all voter groups which participated in the 2017 general election (weighted by demographic group and by 2017 vote). Each quarter of the chart below shows the members of the sample who voted for one of the four main parties. These voters are further subdivided into columns to show how they voted in the referendum and into coloured blocks to show who they voted for in 2015 (note that black covers both non-voting and voting outside the four main parties, which most often meant voting Green, as the data are from England only):
On Friday, I posted some analysis of groups of English voters defined by the combinations of choices they made in a succession of votes. That was the first instalment of a multi-part response to Stephen Bush’s recent article on why we should stop focusing so obsessively on people who voted Labour in 2015 and then voted to leave the European Union in 2016. I’d now like to take a look at those who didn’t vote at all in the 2015 general election.
Excluding those who did not vote because they were ineligible, there were 290 GE2015 non-voters in the dataset that I’m using: an English subset of the post-election 2017 face-to-face survey carried out as part of the long-running and hugely respected British Election Study. The 290 become 311 or more if we weight for demographic group, as I did for Friday’s analysis – which indicates that the non-voters were from demographic groups that were under-represented in the sample as a whole. (It’s only slightly less difficult to get non-voters to answer a survey than it is to get them into a polling booth, as we see from the fact that just 15% of the sample did not vote in an election with 66% turnout.) But because 290 is a small sample and weighting tends to magnify the effect of sampling error, I’ve used unweighted counts throughout this post (not that weighting made an appreciable difference to any of the patterns I will talk about below). The following alluvial diagram (created using the R package ggalluvial) tracks the voting behaviour of sampled 2015 non-voters post-2015:
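For readers who want to try this themselves, here is a minimal sketch of the ggalluvial approach. The data frame and the vote flows below are invented for illustration only; they are not the BES figures, and the real analysis uses different variable names.

```r
# Minimal alluvial-diagram sketch using ggalluvial (illustrative data only)
library(ggplot2)
library(ggalluvial)

# Hypothetical flows: how a group of 2015 non-voters behaved afterwards
flows <- data.frame(
  ref2016 = c("Did not vote", "Did not vote", "Leave", "Remain"),
  ge2017  = c("Did not vote", "Labour", "Conservative", "Labour"),
  n       = c(120, 40, 80, 50)
)

p <- ggplot(flows, aes(axis1 = ref2016, axis2 = ge2017, y = n)) +
  geom_alluvium(aes(fill = ref2016)) +   # the flows between the columns
  geom_stratum() +                       # the column blocks themselves
  geom_text(stat = "stratum", aes(label = after_stat(stratum))) +
  scale_x_discrete(limits = c("2016 referendum", "2017 general election")) +
  theme_minimal()
```

Each row of the data frame becomes one ribbon flowing between the two columns, with its width proportional to `n`.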
Like many, I read with interest Stephen Bush’s recent article on ‘The nine voter groups who are more important than Labour Leavers’. If Bush were a grant-awarding institution, there would be money available for researching those groups. Well, he isn’t, so there isn’t, but I like a challenge so I’m going to make a start anyway – using open data from the British Election Study (henceforth, BES). To be more specific, I’ll be using the BES 2017 face-to-face survey, which was conducted after the election and uses what should probably be considered a more genuinely random sample than the online waves.
My analysis of the local election vote in Barnet has now been quite widely discussed, including on pages one and six of last week’s Jewish Chronicle. But while the chart I created does tell the story of the election, it’s a little difficult to read because it tries to show the whole story, with rises and falls in vote share for all four main parties. This leaves the reader to work out that there’s really just one central narrative of shifts between the two largest of those four parties, while nothing really happens with the remainder.
For that reason, I’ve now produced a simpler chart which shows only the Labour to Tory swing:
Theresa May argues that her party’s success in re-taking control of Barnet Council was a result of Jewish voters ‘reject[ing] the vile antisemitism’ within the Labour Party. Do the numbers back her up? Yes. Just look at the chart above, which is based on ward-by-ward voting figures and the 2011 census. Labour picked up votes only in those parts of Barnet where the Jewish population was low; the more Jews there were within a ward, the more likely Labour was to lose votes there instead.
The red trend line falls more steeply than the blue trend line climbs, suggesting that not all Labour’s lost Jewish votes will have gone to the Tories. And in the wards with few Jewish voters, Labour tended to gain about as many votes as the Green Party lost, giving us a clue as to where the votes it picked up came from, and why. But the message here is clear, and much starker than in June, when the ex-Labour vote in Jewish areas seems to have gone to the Liberal Democrats rather than the Tories.
That message is that many left-leaning Jews now see Labour as an institutionally antisemitic organisation that must be kept from power even at the cost of voting for a party on the opposite side of the political spectrum.
This manuscript has been accepted for publication in the peer-reviewed academic journal, Discourse, Context & Media. By agreement with the publisher, it can be distributed on this website. For offline reading, a PDF copy is available for download (although if you wish to share it with others, please direct them to this page rather than sending the file directly). EDIT (13 April 2018): The version of record is now available online via ScienceDirect at https://doi.org/10.1016/j.dcm.2018.03.004 ahead of print publication. EDIT (4 August 2018): The article has appeared in the August issue of Discourse, Context & Media (vol. 24), and is available online at the same address (although now with correct pagination).
Author: Daniel Allington, University of Leicester
Journal: Discourse, Context & Media
Received at editorial office: 11 Dec 2017
Article revised: 16 Mar 2018
Article accepted for publication: 21 Mar 2018
Article available online via ScienceDirect: 12 April 2018
Article available in print: 21 July 2018
Bibliographic reference: Allington, D. (2018) ‘ “Hitler had a valid argument against some Jews”: Repertoires for the denial of antisemitism in Facebook discussion of a survey of attitudes to Jews and Israel’. Discourse, Context & Media 24: 129–136.
Discourse analytic research suggests that, in contemporary liberal democracies, complaints of racism are routinely rejected and prejudice may be both expressed and disavowed in the same breath. Historical and quantitative research has established that – both in democratic states and in those of the Soviet Bloc (while it existed) – antisemitism has long been related to or expressed in the form of statements about Israel or Zionism, permitting anti-Jewish attitudes to circulate under cover of political critique. This article looks at how the findings of a survey of anti-Jewish and anti-Israeli attitudes were rejected by users of three Facebook pages associated with the British Left. Through thematic discourse analysis, three recurrent repertoires are identified: firstly, what David Hirsh calls the ‘Livingstone Formulation’ (i.e. the argument that complaints of antisemitism are made in bad faith to protect Israel and/or attack the Left), secondly, accusations of flawed methodology similar to those with which UK Labour Party supporters routinely dismiss the findings of unfavourable opinion polls, and thirdly, the argument that, because certain classically antisemitic beliefs pertain to a supposed Jewish or ‘Zionist’ elite and not to Jews in general, they are not antisemitic. In one case, the latter repertoire facilitates virtually unopposed apologism for Adolf Hitler. Contextual evidence suggests that the dominance of such repertoires within one very large UK Labour Party-aligned group may be the result of action on the part of certain ‘admins’ or moderators. It is argued that awareness of the repertoires used to express and defend antisemitic attitudes should inform the design of quantitative research into the latter, and be taken account of in the formulation of policy measures aiming to restrict or counter hate speech (in social media and elsewhere).
Keywords: anti-Semitism; anti-Zionism; denial of racism; attitudes; Zionism; Israel; Jews; Labour Party; Facebook; social media
Last year, I did some analysis of how respondents to surveys carried out as part of the British Election Study placed themselves and the main British political parties on a left-right scale. This suggested that, despite what the election results might lead one to expect, there appeared to be no leftward shift amongst voters between the 2015 and 2017 general elections, although there was a strong leftward shift in their perceptions of the Labour Party. One thing I couldn’t explore using the type of analysis and visualisation I carried out there is whether the same people were identifying themselves and the two main parties with the Left, the Right, and the Centre, or whether it was different people but in similar numbers. Because the BES is a longitudinal study, repeatedly surveying the same individuals (so far as is possible), we can reasonably ask this question. But how can we answer it? One way is by using alluvial diagrams, a form of visualisation developed to show change over time. (If you want to know how to make your own, there’s a guide to creating alluvial diagrams with R in the longer version of this article.)
This alluvial diagram shows that people tended to give the same answers in 2015 and 2017, and that any movement tended to be balanced by approximately equal movement in the other direction, except that there was more movement from the Right to the Centre than from the Centre to the Right (despite which, the Right remained larger than the Left overall). This supports the view that there was no leftwards shift on the part of the electorate between 2015 and 2017. But what about the major parties?
I recently started to do some work with NSS (National Student Survey) data, which are available from the HEFCE website in the form of Excel workbooks. To get the data I wanted, I started copying and pasting, but I quickly realised how hard it was going to be to be sure that I hadn’t made any mistakes. (Full disclosure: it turns out that I did make some mistakes, e.g. I once left out an entire row because I hadn’t noticed that it wasn’t selected.) Using a programming language such as R to create a script to import data requires much more of an upfront investment of time than diving straight in and beginning to copy and paste, but the payoff is that once your script works, you can use it over and over again – which is why I now have several years’ worth of NSS data covering all courses and institutions, from which I can quite easily pull out whichever numbers I want using a dplyr filter statement (as long as I am prepared to take account of irregularities, e.g. in institutions’ names from one year to the next – which would also be necessary when doing things by point-and-click).
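To give a flavour of what such a filter statement looks like, here is a sketch with an invented data frame. The column names and values are hypothetical; the real NSS workbooks use different headers.

```r
# Sketch of a dplyr filter on NSS-style data (column names are invented)
library(dplyr)

nss <- data.frame(
  institution = c("University of the West of England",
                  "University of the West of England",
                  "Another University"),
  subject     = c("Media Studies", "History", "Media Studies"),
  year        = c(2016, 2016, 2016),
  agree_pct   = c(92, 88, 85)
)

# Keep only the rows for one subject at one institution
media_at_uwe <- nss %>%
  filter(institution == "University of the West of England",
         subject == "Media Studies")
```

Multiple conditions inside a single `filter()` call are combined with a logical AND, which is what makes pulling out one course at one institution a one-liner.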
For example, looking at how all institutions performed in my particular discipline with regard to the four NSS questions relating to teaching quality, I can see that Media Studies at the University of the West of England managed the quite remarkable feat of rising from 68th place in 2015 to 2nd place in 2016 before falling back to 53rd place in 2017. To visualise only these four questions in relation to this subject at this institution over the whole time period for which I have data, I can filter out everything relating to other disciplines and other institutions with a single statement, and then use ggplot to represent each of the four variables that I’m interested in with a different coloured line:
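The ggplot call behind a chart like that can be sketched as follows. The percentages below are invented for illustration, and the column names are my own rather than those in the NSS data.

```r
# Sketch: four NSS questions as coloured lines over time (invented data)
library(ggplot2)

scores <- data.frame(
  year      = rep(2014:2017, times = 4),
  question  = rep(c("Q1", "Q2", "Q3", "Q4"), each = 4),
  agree_pct = c(80, 85, 95, 82,
                78, 83, 93, 80,
                81, 86, 96, 84,
                79, 84, 94, 81)
)

p <- ggplot(scores, aes(x = year, y = agree_pct, colour = question)) +
  geom_line() +
  labs(x = "Year", y = "% agreeing", colour = "NSS question")
```

Mapping the question identifier to `colour` is what produces one differently coloured line per variable without any further code.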
How could such a dramatic rise and fall occur? Maybe someone who still works at UWE would be better placed to explain. But the general question of what drives student perceptions of teaching quality is one that I’m interested to explore as a researcher – and I’ll be posting thoughts and findings here as and when.
In the meantime, here’s my code, presented as an example of how the automation of error-prone tasks can take some of the uncertainty out of the research process. You probably aren’t interested in working with this particular dataset, but you may have other datasets that you would like to deal with in the same way. Yes, it looks complicated if you’re not used to scripting – but the code is actually quite simple, and I was able to build it up iteratively: adding statements, running the script as a whole, noticing what went wrong, and then fixing it, one step at a time. (The code is very heavily commented, to give a non-coder an idea of what those steps were and what sort of thinking is typically involved in taking a code-based rather than point-and-click-based approach to data importing, etc.)