Last Sunday, I published an essay on this blog setting out what I saw as the problems with arguments for open access and with the specific form of open access that is now official policy in the UK (Allington, 2013). Despite the fact that it misread Paul Fyfe’s (2012) critique of certain tendencies as an endorsement, it received some lovely comments, and I was deeply honoured to have my arithmetic corrected by the co-creator of CWEB (Levy, 2013). However, it has been pointed out that the essay was rather long (people were kind enough not to say ‘rambling’). Here’s a shorter (although still not exactly short) version, which focuses on what’s happening now in the UK. If you’re not in the UK, I hope you’ll still find it of interest as a discussion of what you might want to try to prevent from happening where you are. The open access movement appeals to many different interests, and once a specific form of open access becomes official policy, at least some of those interests are bound to be disappointed. Casey Brienza has analysed the movement much more incisively than I did in my blog essay, so rather than reprise my arguments I shall simply quote hers before moving to a consideration of UK policy:
Continue reading “Open access in the UK”
In the last two or three years, open access to academic journal articles has gone from being something that noisy idealists were unrealistically demanding to something that’s going to happen whether we like it or not – at least in the UK, and probably elsewhere as well. Not so long ago, I was in favour of it and doing what I could to put it into practice with regard to my own work. Now, it’s just another of those things that I must pragmatically accept, like the vice-chancellor’s high-level appointments. I feel like a man with a beard in a country where shaving has just been banned.
And all this has made me reflect. On open access: what’s it for? What did its advocates (me, for example) think it was going to facilitate? And now that it’s become mainstream, does it look as if it’s going to facilitate that thing we had in mind, or something else entirely? Quite recently, it would have been almost dangerous to think in such terms, because people were getting so cross – perhaps inevitably, as the conversation was largely taking place online, and it’s been argued that social media disseminate anger more effectively than any other emotion (Fan et al., 2013). But now that there’s no point in anyone’s getting cross – now that it’s all happening anyway, regardless of who’s in the vanguard and who’s a bourgeois reactionary – perhaps it’s becoming possible to see things a little more clearly. I must admit that I backed the wrong team: I was a supporter of one kind of open access, but it looks as if the argument for the other has carried the day. And now that the arguing is by the by, it all feels so different. The more I look back, the more I realise that open access had been proposed as the solution to a range of problems, some of which had very little to do with one another. The more I look forward, the more I realise that among those problems were some that might actually be exacerbated by the form of open access that has become official policy in the UK – and others that were never likely to be addressed by any form of open access (including the one in which I believed).
Be careful what you wish for, the saying goes. As a sort of penance, I have chosen to think the issues through not in an academic journal article but in an essay on this blog. Not quite the use I originally intended for the latter, but a symbolically apt one just the same.
Continue reading “On open access, and why it’s not the answer”
Replying, retweeting, and the acknowledgement of other people’s contributions to a conversation
I recently got involved in a discussion about the best way to acknowledge other people’s tweets while interacting on Twitter. It turned out that neither I nor the person I was talking to was absolutely sure of the implications of each of the various options available, and that while guidelines were available from Twitter itself (short version: use the buttons we made for you!), no-one seemed to have written an explanation of what each option might do to help or hinder a Twitter user in understanding and joining in a conversation that hasn’t involved him or her from the beginning. There are guides that recommend the opposite of what I – having looked into this weighty matter – consider to be the best approach, but I have no time for a flamewar right now (does anybody ever have time for a flamewar?), so I will just go ahead and explain what I think is right without attempting to refute the arguments of unnamed people who have (in my humblest of opinions) got it all wrong. The nub of the matter is twofold: whether it is better to reply to a tweet by clicking the ‘Reply’ button or by cleverly typing the tweeter’s screen name at the start of a new tweet, and the related question of whether it is better to redistribute another person’s tweet to one’s followers by clicking the ‘Retweet’ button or by cleverly typing the letters ‘RT’ or ‘MT’ into a new tweet, adding the screen name of the original tweeter, and then copy-pasting the text of the tweet (or at least some of it) into whatever’s left of the 140-character allowance. There is then the secondary question of what difference it makes to add an additional character (conventionally a dot) to the beginning of a tweet before a person’s screen name, and whether it makes the same difference to do so after clicking ‘Reply’.
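For the technically curious, here is a minimal sketch of what the options amount to under the hood, written in Python against the Twitter REST API via the tweepy library (which is how third-party clients did this sort of thing at the time of writing). The credentials, the screen name example_user, and the status ID are placeholders, not real values:

```python
import tweepy

# Placeholder credentials: substitute real OAuth tokens.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

original_id = 123456789  # hypothetical ID of the tweet being answered

# What the 'Reply' button does: the new tweet carries the
# in_reply_to_status_id field, so any client can reconstruct the
# conversation thread and show latecomers what was said before.
api.update_status(
    "@example_user Quite so.",
    in_reply_to_status_id=original_id,
)

# What cleverly typing the screen name into a fresh tweet does: the text
# looks identical, but there is no reply metadata, so the tweet is
# orphaned from the conversation it answers.
api.update_status("@example_user Quite so.")

# What the 'Retweet' button does: the original tweet is redistributed
# intact, with attribution handled by Twitter itself.
api.retweet(original_id)

# What a manual 'RT' does: just a new tweet quoting the old one as text,
# with no machine-readable link back to the original.
api.update_status("RT @example_user: the text of the original tweet")
```

The leading dot is a separate matter: as I understand it, a tweet that begins with a screen name is shown only to followers who also follow the named account, so prefixing any character (conventionally a dot) defeats that filter – and since the filter looks at the text rather than the reply metadata, it makes the same difference whether or not you clicked ‘Reply’ first.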
Continue reading “Twitter conversations, and why it’s just nicer to use the ‘reply’ and ‘retweet’ buttons”
In a recent blog post, Tom Campbell (2013) pondered the end of what has come to be known, since Richard Florida (2002), as the ‘creative class’ (or as Florida himself might prefer, the ‘Creative Class’). For those of you who don’t know, this social group is supposed to consist of ‘people who add economic value through their creativity’, including various kinds of ‘knowledge workers, symbolic analysts, and professional and technical workers’ who ‘engage in work whose function is to “create meaningful new forms”.’ (Florida, 2002, p. 68). Florida suggests that this class cannot be associated with the bourgeoisie of classical Marxist analysis because it is not defined by possession of property as Marx would have understood it: ‘Most members of the Creative Class [sic] do not own and control any significant property in the physical sense. Their property… is an intangible because it is literally in their heads.’ (ibid.) The latter statement seems remarkable only if one takes a superficial reading of Marx to be the last word on class. In fact, it describes a general characteristic of skilled non-manual workers, including members of the old professions: people whose income derives not from capital they possess but from work they perform, yet whose work commands a relatively high price on the labour market because its performance depends upon scarce forms of expertise. This describes the cool, smart, and quite possibly collar-less white-collar workers Florida lauds neither more nor less than it does doctors and accountants – and teachers too, whose work is precisely to develop expertise in others. These people belong to what Tony Bennett and colleagues (2009) prosaically call the ‘professional-managerial class’, which is – after the distant elite of politicians, high-ranking executives, celebrities, and the super-rich – the dominant group in western societies today.
Continue reading “The automation of intellectual labour: creative or otherwise, we’re all just workers in the end”
As we all know, the digital humanities are the next big thing. A couple of years ago, I gave a presentation at a digital humanities colloquium, explaining what I saw as the major reasons for this (Allington, 2011). We are working within an economic system in which owners of capital (funders) invest in research speculatively purchased in advance from the owners of the means of knowledge production (universities), with permanent employees of the latter (what North Americans call ‘faculty’) playing the role of brokers between the two (both as writers and as reviewers of grant applications) and managing the precariously employed sellers of labour (junior academics and support staff on temporary contracts) who actually get things done. Humanities research is traditionally cheap, which is bad from at least two points of view: funders want to save money by administering fewer, larger grants, while universities want to see every department generating research income on a par with that pulled in by STEM centres. The digital humanities come to the rescue by being so conveniently expensive: they appear not merely to profit from but to require such costly things as computer hardware, server space, and specialised technical support staff who – in a further benefit from the point of view of the ethically indifferent university – can be employed on fixed-term contracts, instantly disposed of when the period of funding comes to an end, and almost as instantly replaced once the next grant is landed. It didn’t have to be like this: computers can as easily reduce as increase the size of a research project. In the funding game, however, the goal is not quality, nor even efficiency, but only bigger and bigger contracts. This is the context within which the digital humanities have fashioned themselves from their less tiresomely glamorous predecessor, ‘humanities computing’.
Continue reading “The managerial humanities; or, Why the digital humanities don’t exist”
By and large, ‘theory’ enters into literary studies as a body of texts to be related to the texts that constitute ‘literature’. This process of relating is typically carried out through the production of ‘readings’ of specific works – unsurprisingly, since such production has (since the New Criticism) been enshrined as the central form of specifically literary research. There is little enthusiasm, on the whole, for asking whether a theory is coherent, or whether it is adequately supported by evidence, or whether it is consistent with other things that are known, or whether it explains observable facts more parsimoniously than other available theories. Asking these questions would not amount to a recognised form of literary research; persistent askers might even be accused of philosophy. Implicit in this system is the conception of a good theory as one that enables its user to produce a publishable reading. ‘Good theories’, in this sense, have been found just about everywhere, from linguistics to psychoanalysis and the speculative margins of cognitive science. As Jonathan Culler – one of the most prominent literary theorists to challenge this orthodoxy – argues,
Continue reading “On literary theory”