Digital Classicist: Developing an RTI System for Inscription Documentation in Museum Collections and the Field

In the first of this summer’s Digital Classicist Seminar Series, Kathryn Piquette and Charles Crowther of Oxford discussed Developing a Reflectance Transformation Imaging (RTI) System for Inscription Documentation in Museum Collections and the Field: Case studies on ancient Egyptian and Classical material. In a well-focused discussion of the activities of their AHRC DEDEFI project of (pretty much) this name, they presented the theory behind RTI and several case studies.

Kathryn began by setting out the limitations of existing approaches to documenting inscribed material. These include first-hand observation, which requires visits to archives, sites, museums and so on. Its advantage is that the observer can also handle the object, experiencing its texture, weight and the like; much information can be gathered from engaging first hand, but the costs are typically high and the logistics complex. Photography is relatively cheap and easy to disseminate as a surrogate, but the fixed light position one is stuck with often means that important features are missed. Squeeze making overcomes this problem, but one loses any sense of the material and gets no context. Tracing has similar limitations, with the added risk of other information being filtered out. Likewise, line drawings often miss erasures, tool marks and so forth, and on many occasions are not based on the original artefact anyway, which risks introducing errors. Digital photography has the advantage of being cheap and plentiful, and video can capture people engaging with objects. Laser scanning resolution varies, and some surfaces do not image well. 3D printing is currently in its infancy. The key point is that all such representations are partial, and all impose differing requirements when one comes to analyse and interpret inscribed surfaces. There is therefore a clear need for fuller documentation of such objects.

Shadow stereo has been used by this team in previous projects to analyse wooden Romano-British writing tablets. These tablets were written on a wax surface, the stylus leaving tiny scratches in the underlying wood. Because the tablets were often reused, the scratches can be made to reveal multiple writings when photographed in light from many directions. It is then possible to build algorithmic models highlighting transitions from light to shadow, revealing letterforms not visible to the naked eye. The RTI approach used in the current project is based on 76 lights on the inside of a dome placed over the object. This gives a very high-definition rendering of the object’s surface in 3D, exposed consistently by light from every angle. This ‘raking light photography’ combines multiple images taken with a 24.5 megapixel camera under light from different positions. The result gives a sense not only of the object’s surface but of its materiality: by selecting different lighting angles, one can pick out tool marks, scrape marks, fingerprints and other tiny alterations to the surface. There are various ways of enhancing the images, each suitable for identifying different kinds of feature. Importantly, the process as a whole is learnable by people without detailed knowledge of the algorithms underlying the image processing. Indeed, one advantage of this approach is that it is very quick and easy: 76 images can be taken in around five minutes. At present the dome cannot handle large inscriptions on stone, though highlight RTI allows more flexibility here. In one case study, RTI was used in conjunction with a flatbed scanner, giving better imaging of flat, text-bearing objects. The images produced by the team can be viewed using an open source RTI viewer, with an ingenious add-on developed by Leif Isaksen which allows the user to annotate and bookmark particular sections of images.
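For the technically curious, the way the many captures are combined can be sketched in code. A common formulation behind RTI (the Polynomial Texture Map of Malzbender and colleagues, which many RTI viewers use; the project’s exact pipeline was not specified in the talk, so treat this as an assumption) fits, for every pixel, a simple biquadratic function of the light direction by least squares. A minimal sketch in Python with NumPy, assuming aligned greyscale captures and known light positions:

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """Fit per-pixel Polynomial Texture Map (PTM) coefficients.

    images:     (N, H, W) array of aligned greyscale captures,
                one per light position.
    light_dirs: (N, 2) array of projected light directions (lu, lv).
    Returns an (H, W, 6) array of biquadratic coefficients."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix of the biquadratic PTM basis, one row per capture
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    N, H, W = images.shape
    b = images.reshape(N, -1)                     # (N, H*W) observations
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # (6, H*W) solution
    return coeffs.T.reshape(H, W, 6)

def relight(coeffs, lu, lv):
    """Synthesise the surface under a new (virtual) light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return coeffs @ basis                          # (H, W) image
```

Relighting from an arbitrary direction then just evaluates the fitted polynomial per pixel, which is why viewers can move the virtual light over the surface in real time.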

The project has looked at several case studies. Oxford’s primary interest has been in inscribed, text-bearing artefacts, Southampton’s in archaeological objects. This raises interesting questions about the application of a common technique in different areas: indeed, the good old methodological commons comes to mind. Kathryn and Charles discussed two Egyptian case studies. One was the Protodynastic Battlefield Palette. They showed how tool marks could be elicited from the object’s surface and various making processes inferred. One extremely interesting future approach would be to combine RTI with experimental archaeology: if a skilled and trained person were to create a comparable artefact, one could use RTI to compare the two surfaces. This could give us a deeper understanding of the kinds of experience involved in making an object such as the Battlefield Palette, and base that understanding on a rigorous, quantitative methodology.

It was suggested in the discussion that a YouTube video of the team scanning an artefact with their RTI dome would be a great aid to understanding the process. It struck me, in the light of Kathryn’s opening critique of the limitations of existing documentation, that this implicitly validates the importance of capturing people’s interaction with objects: RTI is another kind of interaction, and needs to be understood accordingly.

Another important question raised was how one cites work such as RTI. Using a screen grab in a journal article surely undermines the whole point. The annotation/bookmark facility would help, especially in online publications, but more thought needs to be given to how one could integrate information on materiality into schemas such as EpiDoc. Charlotte Roueche suggested that a tag indicating passages of text that had been read using this method would be valuable. The old question of rights also came up: one joy of a one-year exemplar project is that one does not have to tackle the administrative problems of publishing a whole collection digitally.

MiPP: Forming questions

The question about our MiPP project which I’m most often asked is ‘why?’ In fact this is the whole project’s fundamental research question. As motion capture technologies become cheaper, more widely available, less dependent on equipment in fixed locations such as studios, and less dependent on highly specialist technical expertise to set them up and use them, what benefits can these technologies bring outside their traditional application areas such as performance and medical practice? What new research can they support? In such a fundamentally interdisciplinary project there are inevitably several ‘whys’, but as someone who is, or at least once was, an archaeologist, archaeology is the ‘why’ that I keep coming back to. Matters became a lot clearer, I think, in a meeting we had yesterday with some of the Silchester archaeological team.

As I noted in my TAG presentation before Christmas, archaeology is really all about the material record: tracing what has survived in the soil, and building theories on top of that. Many of these theories concern what people did, and where and how they moved while they were doing it. During a capture session in Bedford last week (which alas I couldn’t attend), the team tried out various scenarios in the Animazoo mocap suits, using the 3D Silchester Round House created by Leon, Martin and others as a backdrop. They reconstructed in a practical way how certain everyday tasks might have been accomplished by the Iron Age inhabitants. As Mike Fulford pointed out yesterday, such reconstructions – which are not reconstructions in the normally accepted sense in archaeology, where the focus is usually on the visual, architectural and formal remediation of buildings (as already done excellently by the Silchester project) – can themselves be powerful stimuli for archaeological research questions. He cited a scene in Kevin Macdonald’s The Eagle in which soldiers are preparing for battle. The scene prompted the reflection that a Roman soldier would have found putting on his battle dress a time-consuming and laborious process, a fact which could in turn be pivotal to the interpretation of various aspects of Roman battles.

One aim of MiPP is to conceptualize theoretical scenarios such as this as visual data comprising digital motion traces. The e-research interest here is that those traces cannot really be called ‘data’, and cannot be useful in the particular application area of reconstructive archaeology, if their provenance is not described, or if they are not tagged systematically and stored as retrievable information objects. What we are talking about, in other words, is the mark-up of motion traces in a way that makes them reusable. Our colleagues in the digital humanities have been marking up texts for decades. The TEI has spawned several subsets for specific areas, such as EpiDoc for marking up epigraphic data, and mark-up languages for 3D modelling (e.g. VRML) are well developed. Why, then, should there not be a similar schema for motion traces? Especially against the background of a field such as archaeology, where there are already highly developed conventions for recording and presenting information, marking up quantitative representations of immaterial events should be easy. One example might be to assign levels of certainty to various activities, in much the same way that textual mark-up allows editors to grade the scribal or editorial certainty of sections of text. We could then say, for example, that ‘we have 100% certainty that there were activities to do with fire in this room because there is a hearth and charring, but only 50% certainty that the fire was used for ritual activity’. We could also develop a system for citing archaeological contexts in support of particular types of activity, in much the same way that the LEAP project cited Silchester’s data in support of a scholarly publication. It boils down to a fundamental principle of information science: an information object can only be useful when its provenance is known and documented. How this can be approached for motion traces of what might have happened at Silchester in the first century AD promises to be a fascinating case study.
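To make the idea concrete, here is one way such certainty-graded, context-citing mark-up might be generated. This is purely illustrative: the element and attribute names (‘activity’, ‘cert’, ‘evidence’) and the context identifiers are hypothetical, not drawn from TEI, EpiDoc or any published schema. A sketch in Python using the standard library’s ElementTree:

```python
import xml.etree.ElementTree as ET

# Hypothetical schema sketch: the element and attribute names ('activity',
# 'cert', 'evidence') and the context identifiers below are illustrative
# only, not part of TEI, EpiDoc or any published standard.
def activity_element(activity_type, context_ids, certainty):
    """Build a certainty-graded record of an inferred activity,
    citing the archaeological contexts that support it."""
    el = ET.Element("activity", attrib={"type": activity_type,
                                        "cert": str(certainty)})
    for cid in context_ids:
        # One <evidence> child per supporting excavation context
        ET.SubElement(el, "evidence", attrib={"context": cid})
    return el

# The hearth example from the text: certain fire use, possible ritual use
fire = activity_element("fire", ["context-1012", "context-1047"], 1.0)
ritual = activity_element("ritual-fire", ["context-1012"], 0.5)
print(ET.tostring(fire, encoding="unicode"))
```

The point of the sketch is the shape of the record, not the names: an activity carries a machine-readable certainty grade and explicit pointers back to the contexts that justify it, which is exactly the provenance that would make a motion trace citable.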