Decoding Digital Humanities

I went to an interesting meeting yesterday of the London chapter of Decoding Digital Humanities, in the pleasant surroundings of the Jeremy Bentham pub near UCL. It called to mind Melissa Terras’s keynote at DH2010, in which she highlighted the story of the great man’s body being wheeled into UCL Senate meetings and recorded as ‘present not voting’.

We had an interesting discussion based around Alan Liu’s 2003 paper to the MLA, ‘The Humanities: A Technical Profession’, which focused on the institutional nature of the humanities. For example, it was noted – as it often has been in the past – that formal value systems are difficult to apply to humanities research (or perhaps humanities ‘scholarship’, the distinction being that scholarship is the process of aggregating knowledge over years, rather than the more task-oriented conception of ‘research’ undertaken to answer specific questions). It could be argued that the concept of ‘excellence’, for example, is far more easily applied to research based on high-quality, verifiable experimental data than to a monograph representing the outcome of months or years of interpretive work. This, I think, also raises the question of ‘repeatability’, which has come up in other contexts: it is a generally accepted tenet of scientific practice that for an experiment to be valid, it must be repeatable by other scientists in other labs under comparable conditions.

One very interesting aspect of this discussion was the idea of language as a tool in itself. Richard Lewis pointed out that in this very discussion we critiqued the semantic meaning of the word ‘science’, so the kind of uncritical assumption one might make about a tool’s functionality need not always apply. The present-day definition of ‘science’, of course, is contained within some government-ordained STEM framework; but it is worth noting, for example, the more comprehensive meaning of a word such as Wissenschaft, whose application in fields such as Classics and philology has been explored elsewhere by Greg Crane.

Arguably, one major factor that drove the sciences and humanities apart in the nineteenth century was the sheer complexity of the tools and infrastructure that the former adopted (more on this digression on Craig Bellamy’s 2Cultures blog). So the important questions – and, I think, one of the interesting issues to come out of Liu’s paper – are how we identify and categorize the components of e-infrastructure that a) we need for the humanities, b) we do *not* need for the humanities, and c) we would like to adapt, applying some kind of meaningful cost/benefit test of the adaptation process to (humanities) research questions. All this will no doubt come up at the forthcoming Supporting the Digital Humanities conference in Vienna.

Author: Stuart Dunn

I do various things, but mainly I am Professor of Spatial Humanities at King's College London. My interests include things computational, cartographic and archaeological.
