Last month at the Open University, I not-quite-livetweeted Tim Hutchings’s excellent talk on digital bibles. Last week at King’s College London, I found myself – for the first time ever! – being livetweeted (actually livetweeted, no time delays). I’d been liveblogged before, but this was different. So forgive my gauche enthusiasm, but I can’t get over the novelty. It also formed a tidy little record of what I spoke about – as opposed to what I thought I might speak about, or what I promised to speak about. Thanks are due to everyone, but especially to Simon Rowberry.
Last Thursday, I attended Tim Hutchings’s ‘CyberBibles’ seminar, organised by Francesca Benatti for the Digital Humanities Research Network at the Open University (this is the same seminar series within which Ann Hewings and I spoke about the teaching of corpus linguistics a couple of months ago; like Ann and me, Tim is more of a social scientist than a humanist, but nobody seems to have complained so far about this dilution of things digitally humanistic). If you weren’t there, you missed a treat. On one level, this was an extraordinarily in-depth study of electronic reading and its differences from the reading of print, using a highly specific case study. On another level, the Bible will always be at once one of the most interesting possible case studies in textual culture and something rather more than a case study, regardless of whether you’re interested in the digital, print, or manuscript eras. On yet another level… no, this is just silly. I don’t have to say why it was an interesting topic; that should be obvious. And in any case, this introductory preamble is in danger of overwhelming the entire blog article. Just read the rest, it won’t take long. It’s mostly tweets!
Date and location
The Open University, Hawley Crescent, Camden Campus, London NW1
Friday 15 November 2013
I’ve just received details of my forthcoming seminar, ‘Network analytic approaches to the production and propagation of literary and artistic value’, at the Centre for e-Research (CeRch) at King’s College London. It will take place at 6.15pm on Tuesday 1 October in the Anatomy Museum Space on the 6th floor of the King’s Building at the main KCL campus on the Strand. As you can see from the abstract, the focus will be on methodology and its theoretical implications (my approach emerges from Bourdieu’s sociology but employs social network analysis: two things that are often assumed to be in opposition). However, I’ll be illustrating everything with details from my empirical research on interactive fiction and a couple of other ongoing projects where I also look at relationships between cultural producers (early 20th century authors probably; contemporary visual artists possibly; maybe also something on electronic musicians). I may find time to talk about the specific digital tools that I’ve been using (for those who care about such things: Python 2.7, NetworkX, PyGraphviz).
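For anyone curious about what those tools look like in practice, here is a minimal sketch (not the seminar’s actual code, and with invented producer names) of the kind of thing NetworkX makes straightforward: building a small network of cultural producers from their collaborations and asking which of them is most central.

```python
# A toy collaboration network: an edge means two producers worked
# together. The names and ties are purely illustrative.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Author A", "Author B"),
    ("Author B", "Author C"),
    ("Author B", "Author D"),
    ("Author C", "Author D"),
])

# Degree centrality: the fraction of the other nodes each node is
# directly tied to. Author B is tied to all three others here.
centrality = nx.degree_centrality(G)
most_central = max(centrality, key=centrality.get)
print(most_central)  # Author B
```

In the real research, of course, the edges come from empirical data about relationships between producers, and degree centrality is only the simplest of the measures one might compute; the point is just that the graph-building and the analysis are a few lines each.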
In case anyone’s interested, the abstract is now available for Ann Hewings’s and my paper in the Digital Humanities in Practice series, ‘Corpus linguistics as distant reading?’ We’ll be presenting it to the Digital Humanities Thematic Research Network at the Open University in Milton Keynes, from 12.00 on 4 July. The event goes on until 14.00, but that includes lunch. Thanks to Francesca Benatti for inviting us and organising everything! Neither Ann nor I am a digital humanist, but Francesca assuredly is, so I shall trust her judgement that this is a good idea and look forward to some interesting discussion.
The Open University, Milton Keynes
Christodoulou Meeting Room 01
24th June 2013
The commonplace understanding of reading as an essentially private activity is challenged not only by the very vocal kinds of reading carried out in classrooms, literary festivals, or reading groups (book clubs) but also by the important role it has played in social and political conflict.
…that the furore over my ‘Managerial humanities’ blog article has blown over (on the subject of which: a big thank you to Digital Humanities Now for making it their Editors’ Choice for the week of 30 April-6 May, to The New Inquiry for including it on the Sunday Reading List for 19 May, and to the Cyborgology people at The Society Pages for featuring it in an In Their Words list for 19 May), I am glad to report that I can at last get back to speaking to people face-to-face instead of endlessly arguing on the internet.
As we all know, the digital humanities are the next big thing. A couple of years ago, I gave a presentation at a digital humanities colloquium, explaining what I saw as the major reasons for this (Allington, 2011). We are working within an economic system in which owners of capital (funders) invest in research speculatively purchased in advance from the owners of the means of knowledge production (universities), with permanent employees of the latter (what North Americans call ‘faculty’) playing the role of brokers between the two (both as writers and as reviewers of grant applications) and managing the precariously-employed sellers of labour (junior academics and support staff on temporary contracts) who actually get things done. Humanities research is traditionally cheap, which is bad from at least two points of view: funders want to save money by administering fewer, larger grants, while universities want to see every department generating research income on a par with that pulled in by STEM centres. The digital humanities come to the rescue by being so conveniently expensive: they appear not merely to profit from but to require such costly things as computer hardware, server space, and specialised technical support staff who – in a further benefit from the point of view of the ethically-indifferent university – can be employed on fixed-term contracts, instantly disposed of when the period of funding comes to an end, and almost as instantly replaced once the next grant is landed. It didn’t have to be like this: computers can as easily reduce as increase the size of a research project. In the funding game, however, the goal is not quality, nor even efficiency, but only bigger and bigger contracts. This is the context within which the digital humanities have fashioned themselves from their less tiresomely glamorous predecessor, ‘humanities computing’.
By and large, ‘theory’ enters into literary studies as a body of texts to be related to the texts that constitute ‘literature’. This process of relating is typically carried out through the production of ‘readings’ of specific works – unsurprisingly, since such production has (since the New Criticism) been enshrined as the central form of specifically literary research. There is little enthusiasm, on the whole, for asking whether a theory is coherent, or whether it is adequately supported by evidence, or whether it is consistent with other things that are known, or whether it explains observable facts more parsimoniously than other available theories. Asking these questions would not amount to a recognised form of literary research; persistent askers might even be accused of philosophy. Implicit in this system is the conception of a good theory as one that enables its user to produce a publishable reading. ‘Good theories’, in this sense, have been found just about everywhere, from linguistics to psychoanalysis and the speculative margins of cognitive science. As Jonathan Culler – one of the most prominent literary theorists to challenge this orthodoxy – argues…