Corpse roads

In the last few years, I have been gathering information on early-modern ideas and folklore around so-called “corpse roads”, which date from before such things as metalled transport networks and the Enclosures. When access to consecrated burial grounds was deliberately limited by a Church wanting to preserve its authority (and burial fees), an inevitable consequence was that people had to transport their dead, sometimes over long distances, for interment. A great deal of superstition and “fake news” grew up around some of these routes, for example – as I shall be blogging shortly – the belief that any route taken by a bier party over private land automatically became a public right of way. They seem to have had a particular significance in rural communities in the North West of England, especially Cumbria.

The idea of the corpse road is certainly an old one. In A Midsummer Night’s Dream, Puck soliloquizes: Now it is the time of night/That the graves all gaping wide/Every one lets forth its sprite/In the church-way paths to glide.

In my view, corpse roads – although undoubtedly a magnet for the eccentric and the off-the-wall – are a testimony to the imaginative power of physical progress through the landscape at crux points in life (and death), and to the kinds of imperatives which drove connections through those landscapes. As Ingold might say, they are a very particular form of “taskscape”. I am interested in why they became important enough, at least to some people, for Shakespeare to write about them.

Here is a *very* early and initial dump of start and finish points of corpse roads that I’ve been able to identify, mostly in secondary literature. I hope to be able to rectify/georeference each entry more thoroughly as and when time allows.


(Not quite) moving mountains: recording volcanic landscapes in digital gazetteers

Digital gazetteers have been immensely successful as a means of linking and describing places online. GeoNames, for example, now contains 10,000,000 geographical names corresponding to over 7,500,000 unique features. However, as we will be outlining at the ICA Digital Technologies in Cartographic Heritage next month in relation to the Heritage Gazetteer of Cyprus project, one assumption which often underlies them is fixity: an assumption that a name, a place and, by extension, its location on the Earth’s surface are immutably linked. This allows gazetteers to be treated as authorities. For example, a gazetteer with descriptions fixed to locations can be used to associate postal codes with a layer of contemporary environmental data and describe relationships between them; or to create a look-up list for the provision of services. It can also be very valuable for research: where a digital edition of a text mentions places that are contained in a parallel gazetteer, those mentions can be linked to citations and external authorities, and to references in other texts.

However, physical geography changes. In the Aegean, where the African tectonic plate is subducting beneath the Eurasian plate to the north, this process has created the South Aegean Volcanic Arc, a band of active and dormant volcanic islands including Aegina, Methana, Milos, Santorini, Kolumbo, Kos, Nisyros and Yali. Each of these locations has a fixed modern aspect, and can be related to a record in a digital gazetteer. However, these islands have changed over the years as a result of historical volcanism, and representing that history adequately requires flexibility from a digital gazetteer.


The island of Thera. The volcanic dome of Mt. Profitis Elias is shown viewed from the north.

I recently helped refine the entry in the Pleiades gazetteer for the Santorini Archipelago. Pleiades assigns a URI to each location, and allows information to be associated with that location via the URI. Santorini provides a case study of how multiple Pleiades URIs, associated with different time periods, can trace the history of the archipelago’s volcanism. The five present-day islands frame two ancient calderas, the larger formed more recently in the great Late Bronze Age eruption, and the other formed very much earlier in the region’s history. Originally, it is most likely that a single island was present, which fragmented over the millennia in response to the eruptions. Working backwards, therefore, we begin with a URI for the islands as a whole: http://pleiades.stoa.org/places/808255902. This covers the entire entity of the ‘Santorini Archipelago’. We associate with it all the names that have pertained to the island group through history – Καλλιστη (Calliste; 550 BC – 330 BC); Hiera (550 BC – 330 BC) and Στρογγύλη (Strongyle; AD 1918 – AD 2000) – as well as the modern designation ‘Santorini Archipelago’ itself. These four names have been used, at different times, as either a collective term for all the islands or, in the case of Strongyle, for the (geologically) original single island. This URI-labelled entity has lower-level connections with the islands that were formed during the periods of historic volcanism: Therasia, Thera, Nea Kameni, Mikro Kameni, Palea Kameni, Caimeni and Aspronisi. Each, in turn, has its own URI.
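As an aside, this kind of time-scoped naming can be sketched in a few lines of code. The data structure and helper below are purely illustrative – they mimic, but are not, Pleiades’ actual name-record schema, and the date ranges are simplified from those given above:

```python
# Illustrative sketch only: a gazetteer entry whose names carry time
# spans, loosely modelled on (but not identical to) Pleiades name records.
archipelago = {
    "uri": "http://pleiades.stoa.org/places/808255902",
    "names": [
        # Negative years stand in for BC dates.
        {"romanized": "Calliste",  "start": -550, "end": -330},
        {"romanized": "Hiera",     "start": -550, "end": -330},
        {"romanized": "Strongyle", "start": 1918, "end": 2000},
    ],
}

def names_in_year(place, year):
    """Return the names attested for a place in a given year."""
    return [n["romanized"] for n in place["names"]
            if n["start"] <= year <= n["end"]]

print(names_in_year(archipelago, -400))  # ['Calliste', 'Hiera']
print(names_in_year(archipelago, 1950))  # ['Strongyle']
```

The point of the sketch is simply that a name becomes an attribute with a temporal extent, rather than a fixed label on the place itself.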


The Santorini Archipelago in Pleiades

Mikro Kameni and Caimeni are interesting cases, as they no longer exist on the modern map. Mikro Kameni is attested by the naval survey of Thomas Graves of HMS Volage (1851), and Caimeni by Thomaso Porcacchi in 1620. Both formed elements of what are now the Kameni islands, but because they have these distinct historical attestations, they are assigned URIs, with the time periods when they were known to exist according to the sources, even though they do not exist today.

This speaks to a wider issue with digital gazetteers and their role in the understanding of past landscapes. With the authority they confer on place-names, gazetteers might, if developed without reference to the nuances of landscape change over time, risk implicitly enforcing the view, no longer widely accepted, that places are immutable holders of history and historic events; where, in the words of Tilley in A Phenomenology of Landscape: Places, Paths and Monuments (1994), ‘space is directly equivalent to, and separate from time, the second primary abstracted scale according to which societal change could be documented and ‘measured’.’ (p. 9). The evolution of islands due to volcanism shows clearly the need for a critical framework that avoids such approaches to historical and archaeological spaces.

Reconstruction, visualization and frontier archaeology

Recently on holiday in the North East, I took in two Roman forts on the frontier of Hadrian’s Wall, Segedunum and Arbeia. Both have stories to tell, narratives, about the Roman occupation of Britain, and in the current period both have been curated in various ways by the curating authority, Tyne and Wear Museums, with ongoing archaeological research being undertaken by the fantastic WallQuest community archaeology project.

The public walkthrough reconstructions at both sites of what the buildings and their contents might have been like pose some interesting questions about the nature of historical/archaeological narratives, and how they can be elaborated. At Segedunum, there is a reconstruction of a bath house. Although the fort itself had such a structure, modern development means that the reconstruction is not in the same place, nor do its foundations relate directly to archaeological evidence. The features of the bath house are drawn from composite analysis of bath houses from throughout the Roman Empire. So what we have here is a narrative, but it is a generic narrative: stitched together, generalized, a mosaic of hundreds of disparate narratives. It can be only very loosely constrained by time (a bath house such as that at Segedunum would have had a lifespan of 250–300 years), and not tied to any one individual: we cannot tell the story of any one Roman officer or auxiliary soldier who used it.

Image
Reconstructed bath house at Segedunum

On the other hand, at Arbeia there are three sets of granaries, the visible foundations all nicely curated and accessible to the public. You can see the stone piers and columns that the granary floors were mounted on, to allow air movement to stop the grain rotting. Why three granaries for a fort of no more than 600 occupants? Because in the third century the Emperor Severus wanted to conquer the nearby Caledonii, and for his push up into Scotland he needed a secure supply base with plenty of grain.

Image
Granaries at Arbeia, reconstructed West Gatehouse in the background

This is an absolute narrative. It is constrained by actual events which are historical and documented. At the same fort is a reconstructed gateway, this time situated on actual foundations. This is an inferential narrative, with some of the gateway’s features again reconstructed from composite evidence from elsewhere (did it have two or three storeys? A shingled roof? We don’t know, but we infer). These narratives are supported by annotated scale models in the gateway structure which we, the paying public (actually Arbeia is free), can view and review at our leisure. This speaks to the nature of empirical, inferential and conjectural reconstruction detailed in a forthcoming book chapter by myself and Kirk Woolford (in a volume of contributions to the EVA conference, published by Springer).

Narratives are personal, but they can also be generic. In some ways this speaks back to the concept of the Deep Map (see older posts). The walkthrough reconstruction constitutes, I think, half a Deep Map. It provides a full sensory environment, but is not ‘scholarly’ in that it does not elucidate what it would have been like for a first- or second-century Roman soldier or auxiliary to experience the environment. Maybe the future of 3D visualization should be to integrate modelling, reconstruction, remediation and interpretation, to bring together available (and reputable) knowledge from whatever source about what that original sensory experience would have been – texts, inscriptions, writing tablets, environmental archaeology, experimental archaeology etc. In other words, visualization should no longer be seen as a means of making hypothetical visual representations of what the past might have been, but of integrating knowledge about the experience of the environment derived from all five senses, but using vision as the medium. It can never be a total representation incorporating all possible experiences under all possible environmental conditions, but then a map can never be a total representation of geography (except, possibly, in the world of Borges’s On Exactitude in Science).

Deep maps in Indy

I am here in a very hot and sunny Indianapolis trying to figure out what is meant by deep mapping, at an NEH Summer Institute at IUPUI hosted by the Polis Center. There follows a very high-level attempt to synthesize some thoughts from the first week.

Deep mapping – we think, although we’ll all probably have changed our minds by next Friday, if not well before – is about representing (or, as I am increasingly preferring to think, remediating) the things that the Ordnance Survey would, quite rightly, run a perfectly projected and triangulated mile from mapping at all. Fuzziness. Experience. Emotion. What it means to move through a landscape at a particular time in a particular way. Or, as Ingold might say, to negotiate a taskscape. Communicating these things meaningfully as stories or arguments. There has been lots of fascinating back and forth about this all week, although – and this is the idea, at least – next week we move beyond the purely abstract and grapple with what it means to actually design one.

If we’re to define the meaning we’re hoping to build here, it’s clear that we need to rethink some pretty basic terms. E.g. we talk instinctively about ‘reading’ maps, but I have always wondered how well that noun and that verb really go together. We assume that ‘deep mapping’ for the humanities – a concept which we assume will be at least partly online – has to stem from GIS, and that a ‘deep map’, whatever we might end up calling it, will be some kind of paradigm shift beyond ‘conventional’ computer mapping. But the ‘depth’ of a map is surely a function of how much knowledge – knowledge rather than information – is added to the base layer, where that information comes from, and how it is structured. The amazing HGIS projects we’ve seen this week give us the framework we need to think in, but the concept of ‘information’ therein should surely be seen as a starting point. The lack of very basic kinds of such information in popular mapping applications has been highlighted, and perhaps serves to illustrate this point. In 2008, Mary Spence, President of the British Cartographic Society, argued in a lecture:

Corporate cartographers are demolishing thousands of years of history, not to mention Britain’s remarkable geography, at a stroke by not including them on [GPS] maps which millions of us now use every day. We’re in danger of losing what makes maps so unique, giving us a feel for a place even if we’ve never been there.

To put it another way, are ‘thin maps’ really all that ‘thin’ when they are produced and curated properly according to accepted technical and scholarly standards? Maps are objects of emotion in a way that texts are not (which is not to deny the emotional power of text; it is simply to recognize that it is a different kind of power). Read Mike Parker’s 2009 Map Addict for an affectionate and quirky tour of the emotional power of OS maps (although anyone with archaeological tendencies will have to grit their teeth when he burbles about the mystic power of ley lines and the cosmic significance of the layout of Milton Keynes). According to Spence, a map of somewhere we have never been ties together our own experiences of place, whether absolute (i.e. georeferenced) or abstract, along with our expectations and our needs. If this is true for the lay audiences of, say, the Ordnance Survey, isn’t the vision of a deep map articulated this past week some sort of scholarly equivalent? We can use an OS map to make a guess, an inference or an interpretation (much discussion this week has, directly or indirectly, focused on these three things and their role in scholarly approaches). What we cannot do with an OS map is annotate or embed it with any of these. The defining function of a deep map, for me, is the ability to do this, as well as the ability to structure the outputs in a formal way (RDF is looking really quite promising, I think – if you treat different mapped objects in the subject-predicate-object framework, that overcomes a lot of the problems of linearity and scale that we’ve battled with this week). The different levels of ephemerality that this would mean categorising (or, heaven help us, quantifying) are probably a story for another post, but a deep map should be able to convey the experience of moving through the landscape being described.
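A minimal sketch of what treating mapped objects as subject-predicate-object triples might look like, using plain tuples rather than a real RDF library; every URI-like name and predicate here is invented for illustration:

```python
# Hypothetical triples describing mapped objects and the 'deep'
# annotations attached to them; the vocabulary is made up, not a
# real ontology.
triples = [
    ("ex:church_way_path", "ex:crosses",   "ex:open_fell"),
    ("ex:church_way_path", "ex:evokes",    "ex:solemnity"),
    ("ex:open_fell",       "ex:locatedIn", "ex:cumbria"),
]

def objects_of(subject, predicate, graph):
    """Follow one predicate out of a subject node."""
    return [o for s, p, o in graph if s == subject and p == predicate]

# Layering experiential annotation onto a mapped object is then a query:
print(objects_of("ex:church_way_path", "ex:evokes", triples))
# ['ex:solemnity']
```

The attraction of this shape is that an emotion or an experience becomes just another node in the graph, sitting alongside the georeferenced ones rather than being squeezed into a map layer.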

There are other questions which bringing such a map into the unforgiving world of scholarly publication would undoubtedly entail. Must a map be replicable? Must someone else be able to come along and map the same thing in the same way, or at least according to their own subjective experience(s)? In a live link-up, the UCLA team behind the Roman Forum project demonstrated their work, and argued that the visual is replicable and – relatively easily – publishable, but of course other sensory experiences are not. We saw, for example, a visualisation of how far an orator’s voice could carry. The visualisation looks wonderful, and the quantitative methodology even more so, but to be meaningful as an instrument in the history of Roman oratory, one would have to consider so many subjective variables – the volume of the orator’s voice (of course), the ambient noise, the local weather conditions (especially wind). There are even less knowable factors, such as how well individuals in the crowd could hear, whether they had any hearing impairments etc. This is not to carp – after all, we made (or tried to make) a virtue of addressing and constraining such evidential parameters in the MiPP project, and our outputs certainly looked nothing like as spectacular as UCLA’s virtual Rome – but a deep map must be able to cope with those constraints.

To stand any chance of mapping them, we need to treat such ephemera as objects, and object-orientation seemed to be where our – or at least my – thinking was going last week. And then roll out the RDF…

CAA1 – The Digital Humanities and Archaeology Venn Diagram

The question ‘what is the digital humanities?’ is hardly new; nor is discussion of the various epistemologies of which the digital humanities are made. However, the relationship which archaeology has with the digital humanities – whatever the epistemology of either – has been curiously lacking. Perhaps this is because archaeology has such strong and independent digital traditions, and such a set of well-understood quantitative methods, that close analysis of those traditions – familiar to readers of Humanist, say – seems redundant. However, at the excellent CAA International conference in Southampton last week, there was a dedicated round-table session on the ‘Digital Humanities/Archaeology Venn Diagram’, in which I was a participant. This session highlighted that the situation is far more nuanced and complex than it first seems. As is so often the case with the digital humanities.

A Venn diagram, of course, assumes two or more discrete groups of objects, where some objects contain the attributes of only one group, and others share attributes of multiple groups. So – assuming that one can draw a Venn loop big enough to contain the digital humanities – what objects do they share with archaeology? As I have not been the first to point out, the digital humanities are mainly concerned with methods. This, indeed, was the basis of Short and McCarty’s famous diagram. The full title of CAA – Computer Applications and Quantitative Methods in Archaeology – suggests that a methodological focus is one such object shared by both groups. However, unlike the digital humanities, archaeology is concerned with a well-defined set of questions. Most, if not all, of these questions derive from ‘what happened in the past?’. Invariably the answers lie, in turn, in a certain class of material; indeed, we refer collectively to this class as ‘material culture’. And digital methods are a means that we use to the end of getting at the knowledge that comes from interpretation of material culture.

The digital humanities have a much broader shared heritage which, as well as being methodological, is also primarily textual. This is illustrated by the main print publication in the field being called Literary and Linguistic Computing. It is not, I think, insignificant as an indication of how things have moved on that a much more recently founded journal (2007) has the less content-specific title Digital Humanities Quarterly. This, I suspect, is related to the reason why digitisation so often falls between the cracks in the priorities of funding agencies: there is a perception that the world of printed text is so vast that trying to add to the corpus incrementally would be like painting the Forth Bridge with a toothbrush (although this doesn’t affect my general view that the biggest enemy of mass digitisation today is not FEC or public spending cuts, but the Mauer im Kopf that forms notions of data ownership and IPR). The digital humanities are facing a tension, as they always have, between the variable availability of digital material and the broad access to content that the word ‘humanities’ implies in any porting over to the ‘digital’. As Stuart Jeffrey’s talk in the session made clear, the questions facing archaeology are more about what data archaeologists throw away: the emergence of Twitter, for example, gives an illusion of ephemerality, but every tweet adds to the increasing cloud of noise on the internet; and those charged with preserving the archaeological record in digital form must decide where the noise ends and the record begins.

There is also the question of what digital methods *do* to our data. Most scholars who call themselves ‘digital humanists’ would reject the notion that textual analysis, which begins with semantic and/or stylometric mark-up, is a purely quantitative exercise; qualitative aspects of reading and analysis arise from, and challenge, the additional knowledge which is imparted to a text in the course of encoding by an expert. However, as a baseline, it is exactly this kind of quantitative reading of primary material which archaeology – going back to the early 1990s – characterized as reductionist and positivist. Outside the shared zone of the Venn diagram, then, must sit the notions of positivism and reductionism: they present fundamentally different challenges for archaeological material than they do for other kinds of primary resource, certainly including text, but also, I suspect, for other kinds of ‘humanist’ material as well.

A final point which emerged from the session is the disciplinary nature(s) of archaeology and the digital humanities themselves. I would like to pose the question of why the former is often expressed as a singular noun whereas the latter is a plural. Plurality in ‘the humanities’ is taken implicitly. It conjures up notions of a holistic liberal arts education in the human condition, taking in the fruits of all the arts and sciences in which humankind has excelled over the centuries. But some humanities are surely more digital than others. Some branches of learning, such as corpus linguistics, lend themselves to quantitative analysis of their material. Others tend towards the qualitative, and need to be prefixed by correspondingly different kinds of ‘digital’. Others are still more interpretive, with their practitioners actively resisting ‘number crunching’. Therefore, instead of being satisfied with ‘The Digital Humanities’ as an awkward collective noun, maybe we could free ourselves of the restrictions of nomenclature by recognizing that we can’t impose homogeneity, and nor should we try to. Maybe we could even extend this logic, and start thinking in terms of ‘digital archaeologies’: branches of archaeology which require (e.g.) archiving, communication, the semantic web, UGC and so on; and some which don’t require any. I don’t doubt that the richness and variety of the conference last week is the strongest argument possible for this.

Digital Ghosts

Here’s a preview of my upcoming talk at the Turing Festival in Edinburgh.

Credit: Motion in Place Platform Project

3D imaging is prevalent in archaeology and cultural heritage. From the Roman forum to Cape Town harbour, from the crypts of Black Sea churches to the castles of Aberdeenshire, 3D computer graphic models of ancient buildings and ancient spaces can be explored, manipulated and flown through from our desktops. At the same time, however, it is a basic fact of archaeological practice that understanding human movement constitutes a fundamental part of the interpretive process, and of any interpretation of a site’s use in the past. Yet most of these digital reconstructions, and the ones we see in archaeological TV programmes, in museums, in cultural heritage sites and even in Hollywood movies, tend to focus on the architecture, the features and the physical surroundings. It is almost paradoxical that the major thing missing from many of our attempts to reconstruct the human past digitally is humans. This can be traced to obvious factors of preservation and interpretation: buildings survive, people don’t. However, this has not stopped advances in 3D modelling, computer graphics and web services supporting 3D images from drawing archaeologists and custodians of cultural heritage further and further into the 3D world, reconstructing ancient 3D environments in greater and greater detail. But the people are still left behind. This talk will reflect on the Motion in Place Platform (MiPP) project, which seeks to use motion capture hardware and data to test human responses and actions within VR environments and their real-world equivalents. Taking domestic spaces – roundhouses of the Southern British Iron Age – as a case study, the project used motion capture to compare human reaction and perception in buildings reconstructed at 1:1 scale with ‘virtual’ buildings projected on to screens.
This talk will outline the experiment, what might be learned from it, and how populating our 3D views of the past with ‘digital ghosts’ can also inform them, and make them more useful for drawing inferences about the past.

End of project MiPP workshop

I was at the closing MiPP project workshop in Sussex last week. Due to a concatenation of various circumstances, I had to take a large broomstick, which will be used in next week’s motion capture exercises at Butser Farm and in Sussex, on a set of trains from Reading, via the EVA London 2011 conference in Central London, to the workshop in Falmer, Sussex. Given this thing is six feet tall and required its own train seat (see picture), I got a variety of looks from my fellow passengers, especially on the Underground, ranging from suspicion to pity to humour. I imagined how one might have handled a conversation: ‘There’s a logical explanation. Yes, it’s going to be used as a prop in an experiment to test the environment of Iron Age round houses in cyberspace versus the real thing in the present day.’ ‘Oh yes? And that’s your idea of a logical explanation, is it?’

Of course I could have really freaked people out by getting off the train at Gatwick Airport and wandering around the terminal, asking for directions to the runway.

As with the entire MiPP project, the workshop was highly interdisciplinary. A varied set of presentations included ones from Bernard Frischer of the University of Virginia, on digital representation of sculpture, and from colleagues at Southampton on the fantastic PATINA project, all of which coalesced around questions of process, and how we represent it. Tom Frankland’s presentation on studying archaeological processes, including such offsite considerations as the difference between note taking in the lab and in the field, filled in numerous gaps in documentation that our work at Silchester last summer left open.

When I got to my feet on day two to present, I veered slightly off my promised topic (as with most presentations I have ever given) and elected instead to reflect on the nature of remediated archaeological objects. I would suggest that there is a three-way continuum on which any digital object representing an archaeological artefact or process may be plotted: the empirical, the interpretive and the conjectural. An empirical statement, of the kind Dr Peter Reynolds, the founder of Butser Farm, would have approved, might state that ‘the inner ring of this round house comprised twelve upright posts, because we can discern twelve post holes in ring formation’. An interpretative conclusion might be built on top of this, stating that, because ceramic sherds were found in the post holes, cooking and/or eating took place near this inner ring. This could in turn lead to a conjecture that a particular kind of meat was cooked in a particular way at this location, based not on interpretation or empirical evidence immediately to hand, but on the general context of the environment, and on what is known more broadly about Iron Age domestic practice.

More on all this next week, after capture sessions at Butser.

Digital Classicist: Developing a RTI System for Inscription Documentation in Museum Collections and the Field

In the first of this summer’s Digital Classicist Seminar Series, Kathryn Piquette and Charles Crowther of Oxford discussed Developing a Reflectance Transformation Imaging (RTI) System for Inscription Documentation in Museum Collections and the Field: Case studies on ancient Egyptian and Classical material. In a well-focused discussion of the activities of their AHRC DEDEFI project of (pretty much) this name, they presented the theory behind RTI and several case studies.

Kathryn began by setting out the limitations of existing approaches to documenting inscribed material. First-hand observation requires visits to archives, sites, museums etc.; its advantages are that the observer can also handle the object, experiencing texture, weight and so on. Much information can be gathered from engaging first hand, but the costs are typically high and the logistics complex. Photography is relatively cheap, and easy to disseminate as a surrogate, but the fixed light position one is stuck with often means important features are missed. Squeeze making overcomes this problem, but you lose any sense of the material, and do not get any context. Tracing has similar limitations, with the added risk of other information being filtered out. Likewise, line drawings often miss erasures, tool marks etc., and are on many occasions not based on the original artefact anyway, which risks introducing errors. Digital photography has the advantage of being cheap and plentiful, and video can capture people engaging with objects. Laser scanning resolution is variable, and some surfaces do not image well. 3D printing is currently in its infancy. The key point is that all such representations are partial, and all impose differing requirements when one comes to analyse and interpret inscribed surfaces. There is therefore a clear need for fuller documentation of such objects.

Shadow stereo has been used by this team in previous projects to analyse wooden Romano-British writing tablets. These tablets were written on wax, leaving tiny scratches in the underlying wood. Often reused, the scratches can be made to reveal multiple writings when photographed in light from many directions. It is then possible to build algorithmic models highlighting transitions from light to shadow, revealing letterforms not visible to the naked eye. The RTI approach used in the current project was based on 76 lights on the inside of a dome placed over the object. This gives a very high definition rendering of the object’s surface in 3D, exposed consistently by light from every angle. This ‘raking light photography’ combines multiple captures, each taken with a 24.5 megapixel camera under a different lighting position. The result gives a sense not only of the object’s surface, but of its materiality: by selecting different lighting angles, one can pick out tool marks, scrape marks, fingerprints and other tiny alterations to the surface. There are various ways of enhancing the images, each suitable for identifying different kinds of feature. Importantly, the process as a whole is learnable by people without detailed knowledge of the algorithms underlying the image processing. Indeed, one advantage of this approach is that it is very quick and easy – 76 images can be taken in around five minutes. At present, the process cannot handle large inscriptions on stone, but highlight RTI allows more flexibility here. In one case study, RTI was used in conjunction with a flatbed scanner, giving better imaging of flat, text-bearing objects. The images produced by the team can be viewed using an open source RTI viewer, with an ingenious add-on developed by Leif Isaksen which allows the user to annotate and bookmark particular sections of images.
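The way multiple lighting directions can reveal surface relief can be illustrated with a toy photometric-stereo calculation. This is a simplification for illustration, not the project’s actual pipeline, and all of the light directions and intensities below are invented:

```python
import numpy as np

def surface_normal(light_dirs, intensities):
    """Recover a unit surface normal for one pixel from its brightness
    under several known light directions (Lambertian assumption)."""
    L = np.asarray(light_dirs, dtype=float)    # (k, 3) unit light vectors
    i = np.asarray(intensities, dtype=float)   # (k,) observed brightness
    g, *_ = np.linalg.lstsq(L, i, rcond=None)  # g = albedo * normal
    return g / np.linalg.norm(g)

# A flat patch facing straight up, lit from three invented directions:
# brightness is the dot product of the normal with each light vector.
true_n = np.array([0.0, 0.0, 1.0])
lights = np.array([[0.5, 0.0, 0.866],
                   [0.0, 0.5, 0.866],
                   [-0.5, 0.0, 0.866]])
obs = lights @ true_n
print(surface_normal(lights, obs))  # ≈ [0. 0. 1.]
```

With 76 known light positions rather than three, the same least-squares idea becomes heavily over-determined, which is part of what makes dome captures so robust to noise on any single image.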

The project has looked at several case studies. Oxford’s primary interest has been in inscribed, text-bearing artefacts; Southampton’s in archaeological objects. This raises interesting questions about the application of a common technique in different areas: indeed, the good old methodological commons comes to mind. Kathryn and Charles discussed two Egyptian case studies, one of which was the Protodynastic Battlefield Palette. They showed how tool marks could be elicited from the object’s surface, and various making processes inferred. One extremely interesting future approach would be to combine RTI with experimental archaeology: if a skilled and trained person were to create a comparable artefact, one could use RTI to compare the two surfaces. This could give us deeper understanding of the kinds of experience involved in making an object such as the Battlefield Palette, and base that understanding on a rigorous, quantitative methodology.

It was suggested in the discussion that a YouTube video of the team scanning an artefact with their RTI dome would be a great aid to understanding the process. It struck me, in the light of Kathryn’s opening critique of the limitations of existing documentation, that this implicitly validates the importance of capturing people’s interaction with objects: RTI is another kind of interaction, and needs to be understood accordingly.

Another important question raised was how one cites work such as RTI. Using a screen grab in a journal article surely undermines the whole point. The annotation/bookmark facility would help, especially in online publications, but more thought needs to be given to how one could integrate information on materiality into schema such as EpiDoc. Charlotte Roueche suggested that some tag indicating passages of text that had been read using this method would be valuable. The old question of rights also came up: one joy of a one-year exemplar project is that one does not have to tackle the administrative problems of publishing a whole collection digitally.

MiPP: Forming questions

The question about our MiPP project that I’m most often asked is ‘why?’ In fact, this is the whole project’s fundamental research question. As motion capture technologies become cheaper, more widely available, less dependent on equipment in fixed locations such as studios, and less dependent on highly specialist technical expertise to set them up and use them, what benefits can these technologies bring outside their traditional application areas such as performance and medical practice? What new research can they support? In such a fundamentally interdisciplinary project, there are inevitably several ‘whys’, but as someone who is, or at least once was, an archaeologist, archaeology is the ‘why’ that I keep coming back to. Matters became a lot clearer, I think, in a meeting we had yesterday with some of the Silchester archaeological team.

As I noted in my TAG presentation before Christmas, archaeology is really all about the material record: tracing what has survived in the soil, and building theories on top of that. Many of these theories concern what people did, and where and how they moved while they were doing it. During a capture session in Bedford last week (which alas I couldn’t attend), the team tried out various scenarios in the Animazoo mocap suits, using the 3D Silchester Round House created by Leon, Martin and others as a backdrop. They reconstructed in a practical way how certain everyday tasks might have been accomplished by the Iron Age inhabitants. As Mike Fulford pointed out yesterday, such reconstructions – which are not reconstructions in the normally accepted sense in archaeology, where the focus is usually on the visual, architectural and formal remediation of buildings (as excellently done already by the Silchester project) – can themselves be powerful stimuli for archaeological research questions. He cited a scene in Kevin Macdonald’s The Eagle, where soldiers are preparing for battle. This scene prompted the reflection that a Roman soldier would have found putting on his battle dress a time-consuming and laborious process, a fact which could in turn be pivotal to the interpretation of events surrounding various aspects of Roman battles.

One aim of MiPP is to conceptualize theoretical scenarios such as this as visual data comprising digital motion traces. The e-research interest in this is that those traces cannot really be called ‘data’, and cannot be useful in the particular application area of reconstructive archaeology, if their provenance is not described, or if they are not tagged systematically and stored as retrievable information objects. What we are talking about, in other words, is the mark-up of motion traces in a way that makes them reusable. Our colleagues in the digital humanities have been marking up texts for decades. The TEI has spawned several subsets for specific areas, such as EpiDoc for marking up epigraphic data, and mark-up languages for 3D modelling (e.g. VRML) are well developed. Why then should there not be a similar schema for motion traces? Especially against the background of a field such as archaeology, where there are already highly developed information recording and presentation conventions, marking up quantitative representations of immaterial events should be achievable. One example might be to assign levels of certainty to various activities, in much the same way that textual mark-up allows editors to grade the scribal or editorial certainty of sections of text. We could then say, for example, that ‘we have 100% certainty that there were activities to do with fire in this room because there is a hearth and charring, but only 50% certainty that the fire was used for ritual activity’. We could also develop a system for citing archaeological contexts in support of particular types of activity, in much the same way that the LEAP project cited Silchester’s data in support of a scholarly publication. It boils down to a fundamental principle of information science: an information object can only be useful when its provenance is known and documented. How this can be approached for motion traces of what might have happened at Silchester in the first century AD promises to be a fascinating case study.
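To make the certainty-grading idea concrete, here is a minimal sketch of what such mark-up might look like, generated with Python’s standard library. Every element and attribute name below (motionTrace, activity, evidence, cert, and the source identifier) is invented for illustration, loosely modelled on the TEI’s cert attribute – no such schema for motion traces yet exists:

```python
import xml.etree.ElementTree as ET

# Hypothetical mark-up for a motion trace, mirroring the example in the
# text: fire-related activity is certain (hearth, charring), while
# ritual use of that fire is only 50% certain.
trace = ET.Element("motionTrace", source="mocap-session-bedford", site="Silchester")
fire = ET.SubElement(trace, "activity", type="fire-related", cert="1.0")
ET.SubElement(fire, "evidence", context="hearth")
ET.SubElement(fire, "evidence", context="charring")
ET.SubElement(trace, "activity", type="ritual-use-of-fire", cert="0.5")

xml = ET.tostring(trace, encoding="unicode")
print(xml)
```

The point of the sketch is not the particular vocabulary but the principle: once motion traces carry provenance (source) and graded certainty (cert) as structured attributes, they become retrievable, citable information objects rather than raw capture files.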