E-Research on Text and Images

This week I’ve been at a seminar at the British Academy, E-Research on Text and Images, organized by the e-Science for the Study of Ancient Documents (eSAD) project at Oxford’s Centre for the Study of Ancient Documents. This project focuses on a collaboration between medical image processing and papyrology. The principle is relatively straightforward: the Roman texts that the project is studying – the Vindolanda writing tablets – are very imperfectly preserved, either as ink strokes on the original material, or as incisions left on wooden writing tablets originally covered by wax, which was written on with a stylus. Images of these surfaces can be captured using stereoscopic imaging, which systematically compares the lighting from different angles, thus revealing inconsistencies in the surface that might otherwise be invisible. Project postdoc Ségolène Tarte outlined the approach in her paper, stressing that this is a formal, technical way of supporting the kind of thing that papyrologists would do anyway without recourse to technology. Good examples were given of how this approach can lead to different interpretations of individual words, which can have a significant impact on the overall interpretation of the text. One key example is the ‘Frisian Ox Sale’, where imaging supported a re-reading of a word previously interpreted as ‘BOVEM’ as in fact reading ‘QUEM’ – a re-reading which casts into doubt the document’s overall identification as a bill of sale for an ox (see Bowman et al.’s paper).

Next up, Simon Tanner of CCH presented work from his involvement in the Dead Sea Scrolls digitization project. This gave an interesting perspective on how documents are treated once they are recovered.
The noble aspirations of the Israel Antiquities Authority to make the Scrolls visible to the wider public inevitably came unstuck in places, partly due to imperfect understanding of conservation practices in the 1950s, but perhaps more so because of the rather unsystematic way in which the scroll fragments were classified and grouped at the time. This raised an issue which emerged as one of overarching importance: it is essential to document what we do, why we do it, and when we did it.

Documentation of process is critical not only to the integrity of conservation – which is too often, I think, seen as a purely physical activity – but, as we enter into deeper engagement between the humanities and digital technologies, it becomes a central part of the intellectual process too. Melissa Terras and Henriette Roued-Cunliffe picked up on this in their presentations on formal models of reading papyri and on supporting and documenting decision-making processes. A comment from the floor made the point that learned institutions like the Royal Society have historically called their publications ‘Transactions’ or ‘Proceedings’, names which reflect intellectual processes and the exchange of knowledge, rather than the kind of ‘here is my final scholarly outcome, which will go in the library and stay there forever’ that the current reward, credit and funding systems in the humanities require of us.

This led to a substantial last-minute rewriting of my own presentation on the following day, which tried to make the point that reconstructions of cultural heritage sites (and indeed artefacts) have in the past been consciously constructed as ‘finished’ entities, often with their own intellectual and/or political messages. However, I suspect that technology may be presenting us with certain opportunities to begin to express and encode the processes that lead us to reconstruct sites in different ways. One of these opportunities lies in motion capture and the recording of present-day spatiality. Robert Shoemaker’s paper immediately before mine focused on linking and integrating textual material, including material from the Old Bailey Online project. Whilst such texts may not face the kind of physical or conservational problems that papyrologists face with material such as Vindolanda, this was a nice reminder that reading a text can mean many different things, and that a quantitative understanding of the reading process is often the key to understanding the text itself.
Of course, all of this needs e-Infrastructure. The papers of Dot Porter on the TILE project and John Pybus on Oxford’s BVREH work both set out different approaches to how virtual research environments can support this kind of work.

Oxford’s Mike Brady – Co-I of eSAD, and an expert in medical imaging – summed it all up nicely when he noted that C. P. Snow’s Two Cultures should be rejected, and that the humanities and sciences face exactly the same kind of linear, interpretive problems. The challenge is how we document the processes that lead us to the answers.

Author: Stuart Dunn

I do various things, but mainly I am Professor of Spatial Humanities at King's College London. My interests include things computational, cartographic and archaeological.
