In a powerful essay cheekily posted on the website of what may be the UK’s most obsessively corporate university, Suman Gupta bluntly asserts that ‘[t]here is no place for leaders in academia’ (2015, parag. 1). As he observes, once academics-turned-administrators begin ‘imposing some Great Order… by managing and strategising and propaganda, seeking compliance and exercising opaque executive prerogatives, they start killing off academic work’ (2015, parag. 2). With its recent series of questionable management initiatives, from the concentration of resources on bureaucratically selected ‘strategic research areas’ to the development of a (second) free MOOC platform on its paying students’ tab, Gupta’s employer must certainly have provided him with ample opportunity to judge the truth of this proposition. But the relevance of his critique is much wider than a single institution, as we see from the tragic case of Stefan Grimm: a highly successful medical researcher who committed suicide whilst being threatened over his failure to meet arbitrary funding targets (see Parr 2014). While the killing off of scholarly work does not invariably mean the killing off of scholarly workers, it is clear that, across the UK, the term ‘academic leadership’ is ‘now unequivocally taken [to mean] “management of academic workers and institutions from above”’, and those who practise it have come to be ‘regarded as being worth more than academics of any sort’ (Gupta 2015, parag. 5). In his last words to his colleagues, the late Prof. Grimm put it more forcefully, describing his employing institution in terms that at least some readers of this article may find resonant: as he saw it, it had become ‘a business with very few up in the hierarchy… profiteering and the rest of us… milked for money’, wherein the ‘formidable leaders’ that do the milking ‘treat us like shit’ (Grimm 2014, parags. 12, 10, 16, reproduced in Parr 2014). It hardly needs pointing out that there has never been an attempt to demonstrate that academic work benefits from ‘leadership’ in the sense described by Gupta and Grimm: top-down control by target-setting, HR-sanctioned procedural bullying, and ‘strategic vision’. The drive for ‘leadership’ is, rather, part of an ideologically motivated investment in management at the expense of labour, clearly seen in the ballooning of executive salaries, both inside and outside educational institutions, during an age of so-called ‘austerity’.
Last Sunday, I published an essay on this blog setting out what I saw as the problems with arguments for open access and with the specific form of open access that is now official policy in the UK (Allington, 2013). Despite the fact that it misread Paul Fyfe’s (2012) critique of certain tendencies as an endorsement, it received some lovely comments, and I was deeply honoured to have my arithmetic corrected by the co-creator of CWEB (Levy, 2013). However, it has been pointed out that the essay was rather long (people were kind enough not to say ‘rambling’). Here’s a shorter (although still not exactly short) version, which focuses on what’s happening now in the UK. If you’re not in the UK, I hope you’ll still find it of interest as a discussion of what you might want to try to prevent from happening where you are. The open access movement appeals to many different interests, and once a specific form of open access becomes official policy, at least some of those interests are bound to be disappointed. Casey Brienza has analysed the movement much more incisively than I did in my blog essay, so rather than reprise my arguments I shall simply quote hers before moving to a consideration of UK policy:
In the last two or three years, open access to academic journal articles has gone from being something that noisy idealists were unrealistically demanding to something that’s going to happen whether we like it or not – at least in the UK, and probably elsewhere as well. Not so long ago, I was in favour of it and doing what I could to put it into practice with regard to my own work. Now, it’s just another of those things that I must pragmatically accept, like the vice-chancellor’s high-level appointments. I feel like a man with a beard in a country where shaving has just been banned.
And all this has made me reflect. On open access: what’s it for? What did its advocates (me, for example) think it was going to facilitate? And now that it’s become mainstream, does it look as if it’s going to facilitate that thing we had in mind, or something else entirely? Quite recently, it would have been almost dangerous to think in such terms, because people were getting so cross – perhaps inevitably, as the conversation was largely taking place online, and it’s been argued that social media disseminate anger more effectively than any other emotion (Fan et al., 2013). But now that there’s no point in anyone’s getting cross – now that it’s all happening anyway, regardless of who’s in the vanguard and who’s a bourgeois reactionary – perhaps it’s becoming possible to see things a little more clearly. I must admit that I backed the wrong team: I was a supporter of one kind of open access, but it looks as if the argument for the other has carried the day. And now that the arguing is by the by, it all feels so different. The more I look back, the more I realise that open access had been proposed as the solution to a range of problems, some of which had very little to do with one another. The more I look forward, the more I realise that among those problems were some that might actually be exacerbated by the form of open access that has become official policy in the UK – and others that were never likely to be addressed by any form of open access (including the one in which I believed).
Be careful what you wish for, the saying goes. As a sort of penance, I have chosen to think the issues through not in an academic journal article but in an essay on this blog. Not quite the use for which I originally intended this blog, but a symbolically apt use just the same.
In a recent blog post, Tom Campbell (2013) pondered the end of what has come to be known, since Richard Florida (2002), as the ‘creative class’ (or as Florida himself might prefer, the ‘Creative Class’). For those of you that don’t know, this social group is supposed to consist of ‘people who add economic value through their creativity’, including various kinds of ‘knowledge workers, symbolic analysts, and professional and technical workers’ who ‘engage in work whose function is to “create meaningful new forms”’ (Florida, 2002, p. 68). Florida suggests that this class cannot be associated with the bourgeoisie of classical Marxist analysis because it is not defined by possession of property as Marx would have understood it: ‘Most members of the Creative Class [sic] do not own and control any significant property in the physical sense. Their property… is an intangible because it is literally in their heads’ (ibid.). The latter statement seems remarkable only if one takes a superficial reading of Marx to be the last word on class. In fact, it describes a general characteristic of skilled non-manual workers, including members of the old professions: people whose income derives not from capital they possess but from work they perform, yet whose work commands a relatively high price on the labour market because its performance depends upon scarce forms of expertise. This describes the cool, smart, and quite possibly collar-less white collar workers Florida lauds no more and no less than it does doctors and accountants – and teachers too, whose work is precisely to develop expertise in others. These people belong to what Tony Bennett and colleagues (2009) prosaically call the ‘professional-managerial class’, which is – after the distant elite of politicians, high-ranking executives, celebrities, and the super-rich – the dominant group in western societies today.
As we all know, the digital humanities are the next big thing. A couple of years ago, I gave a presentation at a digital humanities colloquium, explaining what I saw as the major reasons for this (Allington, 2011). We are working within an economic system in which owners of capital (funders) invest in research speculatively purchased in advance from the owners of the means of knowledge production (universities), with permanent employees of the latter (what North Americans call ‘faculty’) playing the role of brokers between the two (both as writers and as reviewers of grant applications) and managing the precariously employed sellers of labour (junior academics and support staff on temporary contracts) who actually get things done. Humanities research is traditionally cheap, which is bad from at least two points of view: funders want to save money by administering fewer, larger grants, while universities want to see every department generating research income on a par with that pulled in by STEM centres. The digital humanities come to the rescue by being so conveniently expensive: they appear not merely to profit from but to require such costly things as computer hardware, server space, and specialised technical support staff who – in a further benefit from the point of view of the ethically indifferent university – can be employed on fixed-term contracts, instantly disposed of when the period of funding comes to an end, and almost as instantly replaced once the next grant is landed. It didn’t have to be like this: computers can as easily reduce as increase the size of a research project. In the funding game, however, the goal is not quality, nor even efficiency, but only bigger and bigger contracts. This is the context within which the digital humanities have fashioned themselves from their less tiresomely glamorous predecessor, ‘humanities computing’.