More on Ancient Itineraries

Image derived from Gerrit Dou, Astronomer by Candlelight, late 1650s. Oil on panel. The J. Paul Getty Museum

I have recently re-posted here the call for members for the new Institute programme, “Ancient Itineraries”, funded by the Getty Foundation as part of its Digital Art History initiative and led by DDH in collaboration with KCL’s Classics department and Humlab at Umeå. The programme seeks to map out some possible futures for digital art history, which we will do by convening two meetings of international experts, one in London and one in Athens. The posting has generated some discussion, both on the listservs to which we’ve posted and privately. “What’s the relationship”, asked one member of the community, “between this and the Getty Provenance Index and other initiatives in this area, such as Linked Pasts?”. Another asked whether we are seeking professors, ECRs, faculty, or curators. These are good questions, and I seek to answer them, in general terms, here.

In practical terms, the project is funded by the Getty Foundation, the philanthropic arm of the Getty Trust. The Getty Provenance Index is a resource developed and managed by the Getty Research Institute (GRI). (Briefly, there are four programs that rest under the umbrella of the Getty Trust: the GRI, the Getty Foundation, the Getty Conservation Institute, and the Getty Museum. Each program works independently and has a particular mandate under the broader mission of the Getty Trust.)

A word should be devoted to the Proof of Concept (PoC). A key strand of the project will be a collaboration with King’s Digital Lab to develop an exemplum of the kind of service or resource that the art history community might find useful. The programme gives us unprecedented space to explore the methodological utility of digital methods to art history, so it is not only logical, but incumbent upon us, to try to operationalize those methods in a practical manner. The PoC will therefore be one of the main outputs of the project. However, it must be stressed that the emphasis of the project, and the bulk of its efforts, will go into defining the question(s) that make such work important. What are the most significant challenges which art history (without the “digital” prefix) currently faces, and which can be tackled using digital tools and services?

There are many excellent examples of tools, services and infrastructure which already address a variety of scholarly challenges in this space. Pelagios – which enables the semantic annotation of text, aggregates data from different sources, and provides a platform for linking them together – is an obvious example. So is the Pleiades gazetteer, which provides stable URIs for individual places in the ancient world, and frames much current thinking about representing the idea of (ancient) place on the WWW. Arachne, an initiative of the DAI, is another: it links art/archaeological objects and their descriptions using catalogue metadata. These infrastructures, and the communities behind them, actually *do* things with datasets. They combine datasets, inspiring new questions and answering old ones. From a technical point of view, our project will not be remotely of the scale or ambition of these. Rather, our motivation is to survey and reflect on key initiatives and technologies in this space, discuss their impact, and explore their relationship – or possible relationships – with methods and theories long practiced by art historians who have little or no connection with such tools.
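To make the “stable URIs” point concrete: a Pleiades place URI is both a citation and, in practice, a machine-readable record. The sketch below is purely my own illustration, not part of any of these projects: it fetches the JSON serialization Pleiades publishes alongside each place page (field names such as title and reprPoint are assumptions based on that serialization), then asserts a minimal Pelagios-style link from a hypothetical museum object to the place, using the widely adopted CIDOC CRM vocabulary.

```python
import requests
from rdflib import Graph, Literal, Namespace, URIRef

# 1. Resolve a stable Pleiades URI to machine-readable data. Appending /json
#    to a place URI returns its JSON serialization; the fields read here
#    (title, reprPoint) are assumptions based on that serialization.
PLACE = "https://pleiades.stoa.org/places/423025"  # Roma
place = requests.get(PLACE + "/json", timeout=30).json()
print(place.get("title"), place.get("reprPoint"))  # reprPoint = [longitude, latitude]

# 2. Link a museum object to that place. The object URI is invented for
#    illustration; the property comes from CIDOC CRM.
CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
g = Graph()
g.bind("crm", CRM)
obj = URIRef("http://example.org/objects/relief-42")  # hypothetical record
g.add((obj, CRM["P53_has_former_or_current_location"], URIRef(PLACE)))
g.add((obj, CRM["P3_has_note"], Literal("Marble relief; findspot Rome (illustrative)")))
print(g.serialize(format="turtle"))
```

Once records in two collections point at the same place URI, the link between them comes for free – which is precisely the kind of thing these infrastructures “do” with datasets.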

What of the chronological scope? As we intimate in the call, digital gazetteers, visualization, and the use of Semantic Web technologies to link datasets such as catalogue records have a long pedigree of being applied within the sphere of the Classical world, very broadly conceived as the orbit of Greece and Rome between the fifth century BCE and the fourth century CE. At least in part, this can be traced back to the deep impact of Greco-Roman traditions on Western culture and society, and its manifestation in the present day – a topic explored by the current exhibition at KCL, “Modern Classicisms and The Classical Now”. Because of this, those traditions came into early contact with scientific cartography (Ortelius first mapped the Roman empire in the 1580s) and the formal information structures of (Western) museum catalogues. The great interest in the art and culture of this period continues to the present day, and helps explain the intense intellectual interest it holds for scholars of the digital humanities – resulting in the rich seam of projects and infrastructures fleetingly outlined above.

Our motivation in Ancient Itineraries is to ask what the wider field of art history can learn from this, and vice versa. Many of the questions of space, trajectory and reception that we might apply to the work of Phidias, for example, might apply to the work of other sculptors, and later traditions. Ancient Itineraries will seek to take the myriad digital tools that we have for exploring Phidias’s world and work, and take them into theirs. The programme will give us space to review what the art-historical strands of digital classics are, and what they have already contributed to the wider area. However, we will also ask what technology can and cannot achieve, and explore its wider application. Therefore, while art-historical Classicists are certainly welcome and may stand to gain most, those with interests in the art of other periods can certainly contribute – so long as they are sure they could benefit from deepening their historical and/or critical understanding of that tradition using digital praxes.

Call for members: Major new Institute opens at King’s College London with Getty Foundation support

The Project

The 18-month Institute in Digital Art History is led by King’s College London’s Department of Digital Humanities (DDH) and Department of Classics, in collaboration with HumLab at the University of Umeå, with grant support provided by the Getty Foundation as part of its Digital Art History initiative.

It will convene two international meetings where Members of the Institute will survey, analyse and debate the current state of digital art history, and map out its future research agenda. It will also design and develop a Proof of Concept (PoC) to help deliver this agenda. The source code for this PoC will be made available online, and will form the basis for further discussions, development of research questions and project proposals after the end of the programme.

To achieve these aims we will bring together leading experts in the field to offer a multi-vocal and interdisciplinary perspective on three areas of pressing concern to digital art history:

● Provenance, the meta-information about ancient art objects,

● Geographies, the paths those objects take through time and space, and

● Visualization, the methods used to render art objects and collections in visual media.

Current Digital Humanities (DH) research in this area has a strong focus on Linked Open Data (LOD), and so we will begin our exploration there, focusing on LOD approaches to the art of the ancient Mediterranean world. This geographical emphasis will be continued in the second meeting, to be held in Athens. The Mediterranean has received much attention from both the Digital Classics and DH communities, and is thus rich in resources and content. The programme will, therefore, bring together two existing scholarly fields and seek to improve and facilitate dialogue between them.

We will assign Members to groups according to the three areas of focus above. These groups will be tasked with producing a detailed research specification, detailing the most important next steps for that part of the field, how current methods can best be employed to take them, and what new research questions the participants see emerging.

The meetings will follow a similar format, with initial participant presentations and introductions followed by collaborative programme development and design activities within the research groups, including scoping of relevant aspects of the PoC. This will be followed by further discussion and collaborative writing which will form the basis of the event’s report. Each day will conclude with a plenary feedback session, where participants will share and discuss short reports on their activities. All of the sessions will be filmed for archival and note-taking purposes, and professional facilitators will assist in the process at various points.

The scholarly outputs, along with the research specifications for the PoC, will provide tangible foci for a robust, vibrant and sustainable research network, comprising the Institute participants as a core, but extending across the emerging international and interdisciplinary landscape of digital art history. At the same time, the programme will provide participants with support and space for developing their own personal academic agendas and profiles. In particular, Members will be encouraged, and offered collegial support, in developing publications, both single- and co-authored, following their own research interests and those related to the Institute.


The Project Team

The core team comprises Dr Stuart Dunn (DDH), Professor Graeme Earl (DDH) and Dr Will Wootton (Classics) at King’s College London, and Dr Anna Foka of HumLab, Umeå University.

They are supported by an Advisory Board consisting of international independent experts in the fields of art history, Digital Humanities and LOD. These are: Professor Tula Giannini (Chair; Pratt Institute, New York), Dr Gabriel Bodard (Institute of Classical Studies), Professor Barbara Borg (University of Exeter), Dr Arianna Ciula (King’s Digital Laboratory), Professor Donna Kurtz (University of Oxford), and Dr Michael Squire (King’s College London).


Call for participation
We are now pleased to invite applications to participate as Members in the programme. Applications are invited from art historians and professional curators who (or whose institutions) have a proven and established record in using digital methods, have already committed resources, or have a firm interest in developing their research agendas in art history, archaeology, museum studies, and LOD. You should also be prepared to contribute to the design of the PoC (e.g. providing data or tools, defining requirements), which will be developed in the timeframe of the project by experts at King’s Digital Lab.

Membership is open to advanced doctoral students (provided they can demonstrate close alignment of their thesis with the aims of the programme), Faculty members at any level in all relevant fields, and GLAM curation professionals.

Participation will primarily take the form of attending the Institute’s two meetings:

King’s College London: 3rd – 14th September 2018

Swedish Institute at Athens: 1st – 12th April 2019

We anticipate offering up to eighteen places on the programme. All travel and accommodation expenses to London and Athens will be covered. Membership is dependent upon commitment to attend both events for the full duration.

Potential applicants are welcome to contact the programme director with any questions: stuart.dunn@kcl.ac.uk.

To apply, please submit a single A4 PDF document set out as follows. Please ensure your application includes your name, email address, institutional affiliation, and street address.


Applicant Statement (ONE page)
This should state what you would bring to the programme, the nature of your current work and involvement in digital art history, and what you believe you could gain as a Member of the Institute. There is no need to indicate which of the three areas you are most interested in (although you may if you wish); we will use your submission to create the groups, considering both complementary expertise and the ability of some members to act as translators between the three areas.

Applicant CV (TWO pages)
This section should provide a two-page CV, including your five most relevant publications (including digital resources if applicable).

Institutional support (ONE page)
We are keen for the ideas generated in the programme to be taken up and developed by the community after the period of funding has finished. Therefore, please use this section to provide answers to the following questions relating to your institution and its capacity:

1. Does your institution provide specialist Research Software Development or other IT support for DH/LOD projects?

2. Is there a specialist DH unit or centre?

3. Do you, or your institution, hold or host any relevant data collections, physical collections, or archives?

4. Does your institution have hardware capacity for developing digital projects (e.g. specialist scanning equipment), or digital infrastructure facilities?

5. How will you transfer knowledge, expertise, contacts and tools gained through your participation to your institution?

6. Will your institution a) be able to contribute to the programme in any way, or b) offer you any practical support in developing any research of your own which arises from the programme? If so, give details.

7. What metrics will you apply to evaluate the impact of the Ancient Itineraries programme a) on your own professional activities and b) on your institution?

Selection and timeline
All proposals will be reviewed by the Advisory Board, and members will be selected on the basis of their recommendations.

Please email the documents specified above as a single PDF document to stuart.dunn@kcl.ac.uk by Friday 1st June 2018, 16:00 (British Summer Time). We will be unable to consider any applications received after this. Please use the subject line “Ancient Itineraries” in your email. 

Applicants will be notified of the outcomes on or before 19th June 2018.

Privacy statement

All data you submit with your application will be stored securely on King’s College London’s electronic systems. It will not be shared, except in strict confidence with Advisory Board members for the purposes of evaluation. Furthermore, your name, contact details and country of residence will be shared, in similar confidence, with the Getty Foundation to ensure compliance with US law and any applicable US sanctions. Further information on KCL’s data protection and compliance policies may be found here: https://www.kcl.ac.uk/terms/privacy.aspx; and information on the Getty Foundation’s privacy policies may be found here: http://www.getty.edu/legal/privacy.html.

Your information will not be used for any other purpose, or shared any further, and will be destroyed when the member selection process is completed.

If you have any queries in relation to how your rights are upheld, please contact us at digitalhumanities@kcl.ac.uk, or KCL’s Information Compliance team at info-compliance@kcl.ac.uk.

Research questions, abstract problems – a round table on Citizen Science

I recently participated in a round-table discussion entitled “Impossible Partnerships”, organized by The Cultural Capital Exchange at the Royal Institution, on the theme of Citizen Science; the Impossible Partnerships of the title being those between the academy and the wider public. It is always interesting to attend citizen science events – I get so caught up in the humanities crowdsourcing world (such as it is) that it’s good to revisit the intellectual field from which it came in the first place. This is one of those blog posts whose main aim is to organize my own notes and straighten my own thinking after the event, so don’t read on if you are expecting deep or profound insights.


Crucible of knowledge: the Royal Institution’s famous lecture theatre

Galaxy Zoo of course featured heavily. This remains one of the poster-child citizen science projects, because it gets the basics right. It looks good, it works, it reaches out to build relationships with new communities (including the humanities), and it is particularly good at taking what works and configuring it to function in those new communities. We figured that one of the common factors that keeps it working across different areas is its success in tapping into the intrinsic motivations of people who are interested in the content – citizen scientists are interested in science. There is also an element of altruism involved, giving one’s time and effort for the greater good – but one point I think we agreed on is that it is far, far easier to classify the kinds of task involved than the people undertaking them. This was our rationale in our 2012 scoping study of humanities crowdsourcing.

A key distinction was made between projects which aggregate or process data, and those which generate new data. Galaxy Zoo is mainly about taking empirical content and aggregating it, in contrast, say, to a project that seeks to gather public observations of butterfly or bird populations. This could be a really interesting distinction for humanities crowdsourcing too, but one which becomes problematic where one type of question leads to the other. What if content is processed/digitized through transcription (for example), and this seeds ideas which lead to amateur scholars generating blog posts, articles, discussions, ideas, books and so on? Does this sort of thing happen in citizen science? (Genuine question – maybe it does.) So this is one of those key distinctions between citizen science and citizen humanities. The raw material of the former is often natural phenomena – bird populations, raw imagery of galaxies, protein sequences – but in the latter it can be digital material that “citizen humanists” have created from whatever source.

Another key question which came up several times during the afternoon was the nature of science itself, and how citizen science relates to it. A professional scientist will begin an experiment with several possible hypotheses, then test them against the data. Citizen scientists do not necessarily organize their thinking in this way. This raises the question: can the frameworks and research questions of a project be co-produced with public audiences? Or do they have to be determined by a central team of professionals, and farmed out to wider audiences? This is certainly the implication of Jeff Howe’s original framing of crowdsourcing:

“All these companies grew up in the Internet age and were designed to take advantage of the networked world. … [I]t doesn’t matter where the laborers are – they might be down the block, they might be in Indonesia – as long as they are connected to the network.

Technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. … The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing.”

So is it the case that citizen science is about abstract research problems – “are golden finches as common in area X now as they were five years ago?” – rather than concrete research questions – “why has the population of golden finches declined over the last five years?”

For me, the main takeaway was our recognition that citizen science and “conventional” science are not, and should not try to be, the same thing, and should not have the same goals. The important thing in citizen science is not to focus on the “conventional” scientific outcomes of good, methodologically sound and peer-reviewable research – that is, at most, an incidental benefit – but on the relationships between professional academic scientists and non-scientists it creates, and how these can help build a more scientifically literate population. The same should go for the citizen humanities. We can all count bird populations, we can all classify galaxies, we can all transcribe handwritten text, but the most profitable goal for citizen science/humanities is a more collaborative social understanding of why doing so matters.

CAA1 – The Digital Humanities and Archaeology Venn Diagram

The question ‘what is the digital humanities’ is hardly new; nor is discussion of the various epistemologies of which the digital humanities are made. However, the relationship which archaeology has with the digital humanities – whatever the epistemology of either – has been curiously lacking. Perhaps this is because archaeology has such strong and independent digital traditions, and such a set of well-understood quantitative methods, that close analysis of those traditions – familiar to readers of Humanist, say – seems redundant. However, at the excellent CAA International conference in Southampton last week, there was a dedicated round-table session on the ‘Digital Humanities/Archaeology Venn Diagram’, in which I was a participant. This session highlighted that the situation is far more nuanced and complex than it first seems. As is so often the case with digital humanities.

A Venn Diagram, of course, assumes two or more discrete groups of objects, where some objects contain the attributes of only one group, and others share attributes of multiple groups. So – assuming that one can draw a Venn loop big enough to contain the digital humanities – what objects do they share with archaeology? As I have not been the first to point out, the digital humanities are mainly concerned with methods. This, indeed, was the basis of Short and McCarty’s famous diagram. The full title of CAA – Computer Applications and Quantitative Methods in Archaeology – suggests that a methodological focus is one such object shared by both groups. However, unlike the digital humanities, archaeology is concerned with a well-defined set of questions. Most, if not all, of these questions derive from ‘what happened in the past?’. Invariably the answers lie, in turn, in a certain class of material; indeed, we refer collectively to this class as ‘material culture’. And digital methods are a means we use to the end of getting at the knowledge that comes from interpretation of material culture.

The digital humanities have a much broader shared heritage which, as well as being methodological, is also primarily textual. This is illustrated by the main print publication in the field being called Literary and Linguistic Computing. It is not, I think, insignificant as an indication of how things have moved on that a much more recently founded journal (2007) has the less content-specific title Digital Humanities Quarterly. This, I suspect, is related to the reason why digitisation so often falls between the cracks in the priorities of funding agencies: there is a perception that the world of printed text is so vast that trying to add to the corpus incrementally would be like painting the Forth Bridge with a toothbrush (although this doesn’t affect my general view that the biggest enemy of mass digitisation today is not FEC or public spending cuts, but the Mauer im Kopf formed by notions of data ownership and IPR). The digital humanities are facing a tension, as they always have, between the variable availability of digital material and the breadth of access to content that the word ‘humanities’ implies in any porting over to the ‘digital’. As Stuart Jeffrey’s talk in the session made clear, the questions facing archaeology are more about what data archaeologists throw away: the emergence of Twitter, for example, gives an illusion of ephemerality, but every tweet adds to the increasing cloud of noise on the internet; and those charged with preserving the archaeological record in digital form must decide where the noise ends and the record begins.

There is also the question of what digital methods *do* to our data. Most scholars who call themselves ‘digital humanists’ would reject the notion that textual analysis which begins with semantic and/or stylometric mark-up is a purely quantitative exercise, arguing instead that qualitative aspects of reading and analysis arise from, and challenge, the additional knowledge imparted to a text in the course of encoding by an expert. However, as a baseline, it is exactly this kind of quantitative reading of primary material which archaeology – going back to the early 1990s – characterized as reductionist and positivist. Outside the shared zone of the Venn diagram, then, must be considered the notions of positivism and reductionism: they present fundamentally different challenges to archaeological material than they do to other kinds of primary resource, certainly including text, but also, I suspect, other kinds of ‘humanist’ material as well.

A final point which emerged from the session is the disciplinary nature(s) of archaeology and the digital humanities themselves. I would like to pose the question of why the former is often expressed as a singular noun whereas the latter is a plural. Plurality in ‘the humanities’ is taken implicitly. It conjures up notions of a holistic liberal arts education in the human condition, taking in the fruits of all the arts and sciences in which humankind has excelled over the centuries. But some humanities are surely more digital than others. Some branches of learning, such as corpus linguistics, lend themselves to quantitative analysis of their material. Others tend towards the qualitative, and need to be prefixed by correspondingly different kinds of ‘digital’. Others are still more interpretive, with their practitioners actively resisting ‘number crunching’. Therefore, instead of being satisfied with ‘The Digital Humanities’ as an awkward collective noun, maybe we could look to free ourselves of the restrictions of nomenclature by recognizing that we can’t impose homogeneity, and nor should we try to. Maybe we could even extend this logic, and start thinking in terms of ‘digital archaeologies’: branches of archaeology which require (e.g.) archiving, communication, the semantic web, UGC and so on; and some which don’t require any. I don’t doubt that the richness and variety of the conference last week is the strongest argument possible for this.

CeRch seminar: Webometric Analyses of Social Web Texts: case studies Twitter and YouTube

Herewith a slightly belated report of the recent talk in the CeRch seminar series given by Professor Mike Thelwall of Wolverhampton University. Mike’s talk, Webometric Analyses of Social Web Texts: case studies Twitter and YouTube, concerned getting useful information out of social media, primarily by social science means: information, specifically, about the sentiment of the communications on those platforms. His group produces software for text-based information analysis, making it easy to gather and process large-scale data, focusing on Twitter, YouTube (especially the textual comments), the web in general via the Technorati blog search engine, and Bing. This shows how a website is positioned on the web, and gives insights as to how its users are interacting with it.

In sentiment analysis, a computer program reads text and predicts whether it is positive or negative in flavour, and how strongly that positivity or negativity is expressed. This is immensely useful in market research, and is widely employed by big corporations. It also goes to the heart of why social media work – they function well with human emotions – and sentiment analysis tracks what role those emotions play in social media. The sentiment analysis engine is designed for text that is not written with good grammar. At its heart is a list of 2,489 terms which are either normally positive or negative. Each has a ‘normal’ value, and ratings from -2 to -5. Mike was asked if it could be adapted to slang words, which often develop, and sometimes recede, rapidly. Experience is that it copes well with changing language over time – new words don’t have a big impact in the immediate term. However, the engine does not appear to work with sarcastic statements which, linguistically, might have diction opposite to their meaning, as with (for example) ‘typical British understatement’. This means that it does not work very well for news fora, where comments are often sarcastic and/or ironic (e.g. ‘David Cameron must be very happy that I have lost my job’). There is a need for contextual knowledge – e.g. ‘This book has a brilliant cover’ means ‘this is a terrible book’ in the context of the phrase ‘don’t judge a book by its cover’. Automating the analysis of such contextual minutiae would be a gigantic task, and the project is not attempting to do so.
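To illustrate the lexicon-based approach in miniature – this is my own sketch with an invented toy lexicon, not SentiStrength’s far richer term list and rules – a text’s positive and negative strengths can be driven by the strongest-scoring terms it contains:

```python
import re

# Toy lexicon: each term carries a signed strength value (invented for this
# sketch; the real engine's list runs to thousands of terms).
LEXICON = {
    "love": 3, "brilliant": 4, "good": 2, "happy": 3,
    "hate": -4, "terrible": -4, "bad": -2, "awful": -3,
}

def sentiment(text: str) -> tuple[int, int]:
    """Return (positive, negative) strengths on SentiStrength-like scales
    of 1..5 and -1..-5, taking the strongest matching term in each direction."""
    words = re.findall(r"[a-z]+", text.lower())
    scores = [LEXICON.get(w, 0) for w in words]
    pos = max([s for s in scores if s > 0], default=1)   # 1 = no positive content
    neg = min([s for s in scores if s < 0], default=-1)  # -1 = no negative content
    return pos, neg

print(sentiment("This book has a brilliant cover"))  # (4, -1)
print(sentiment("I hate Mondays"))                   # (1, -4)
```

The first example shows exactly the failure mode discussed above: the lexicon sees only “brilliant”, not the idiom that inverts it.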

Mike also discussed the Cyberemotions project. This looked at peaks of individual words on Twitter – e.g. ‘Chile’, when the earthquake struck in February 2010. As might be expected, positivity decreased, but negativity increased by only 9%: it was suggested that this might have been to do with praise for the response of the emergency services, or good wishes to the Chilean people. Also, the very transience of social media means that people might not need to express sentiment one way or another. For example, simply mentioning the earthquake and its context would be enough to convey the message the writer needed to convey. Mike also talked about the sentiment engine’s analysis of YouTube. As a whole, most YouTube comments are positive; however, those individual videos which provoke many responses are frequently viewed negatively.

Try the sentiment engine (http://sentistrength.wlv.ac.uk). One wonders if it might be useful in XML/RDF projects such as SAWS, or indeed for book reviews on publications such as http://www.arts-humanities.net.

Digital Classicist: Classical studies facing digital research infrastructures: from practice to requirements

Apologies are due to Agiati Bernardou: I am a couple of weeks late posting my discussion of her paper in the Digital Classicist Seminar Series, Classical studies facing digital research infrastructures: from practice to requirements. Agiati is from the Digital Curation Unit (DCU), part of the “Athena” Research Centre, and her talk focused in the main on the preparatory phase of DARIAH, the European Arts and Humanities Research Infrastructure project. She began by outlining her own research background in Classics, which contained very little computing (it surely can’t be coincidence that the digital humanities is so full of former and practicing archaeologists and classicists).

DARIAH is a technical and conceptual project, with the aim of providing a research infrastructure for the Arts and Humanities across Europe. In practice, it is an umbrella for other projects, involving a big effort in the areas of law and finance, as well as technical infrastructure. A key part of this is to ensure that scholars in the arts and humanities are supported at each stage of the research lifecycle. This means ensuring that the requirements at each stage are understood. The DCU was part of the technical workpackage in DARIAH, and was tasked with doing this. Its approach was to develop a conceptual framework to map user requirements, using an abstract model to represent the information practices within humanities research.

This included an empirical study of scholarly research activity. The main form of data collection was interviews with humanities scholars; the design of the study included transcription, coding and analysis of recordings of these interviews. Context was provided by a good deal of previous work in this area, in the form of user studies of information browsing behaviour. In the 1980s, this carried the assumption that most humanists were ‘lone scholars’, with little interest in, or need for, collaborative practices. This, however, gave way to an increasingly self-critical awareness of how humanists work, highlighting practices such as annotation, which *might* be for the consumption of the lone scholar, but which might equally be a means for communicating interpretation and thinking. This in turn led to a consideration of scholarly primitives – low-level, basic things humanists do both all the time and, often, at the same time. Agiati cited the types of information retrieval behaviour identified by D. Ellis, as revisited for the humanities by John Unsworth: discovering, associating, comparing, referring, sampling, illustrating and representing.

The DCU’s aim was to produce a map of who does what and how. If one has a research goal, for example to produce a commentary on Homer, what are the scholarly activities one would need to achieve that, and what processes do those activities involve? To this end, Agiati highlighted the following aspects that need to be mapped: Actor (researcher), Research Activity, Research Goal, Information Object, Tool/Service, Format and Resource Type. The properties that link these include hasType, Creates, partOf, Searches, refersTo and Scholarly Activity.
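To illustrate in my own, much-simplified terms (the DCU’s actual model is considerably richer), the entities and linking properties above lend themselves to a simple typed-graph representation; the commentary-on-Homer example follows the talk, while the specific activities and the “performs” link are my own assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str
    kind: str  # Actor, ResearchGoal, ResearchActivity, InformationObject, ...

researcher = Entity("classicist", "Actor")
goal       = Entity("produce a commentary on Homer", "ResearchGoal")
collate    = Entity("collate manuscript readings", "ResearchActivity")
apparatus  = Entity("critical apparatus", "InformationObject")

# Linking properties drawn from the talk (partOf, creates, refersTo), plus an
# assumed 'performs' link between Actor and ResearchActivity.
triples = [
    (collate, "partOf", goal),
    (researcher, "performs", collate),
    (collate, "creates", apparatus),
    (apparatus, "refersTo", Entity("Iliad, Book 1", "InformationObject")),
]

for s, p, o in triples:
    print(f"{s.name} ({s.kind}) --{p}--> {o.name} ({o.kind})")
```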

A meaningful map of these processes must include meaningful descriptions of information types. DARIAH therefore has to embrace multiple interconnected objects that need to be identified, represented, and managed, so they can be curated and reached throughout the digital research lifecycle. In this regard, there is a distinction that is second nature to most archaeologists: between the visual representation of information, and hands-on access to objects.

The main interest of Agiati’s paper for me was the possibilities the DCU’s approach holds for specific research problems. One could easily see, for example, how the www.arts-humanities.net Methods Taxonomy could be better represented as a set of processes rather than as a static group of abstract entities, as it is at the moment. And if one could specify the properties for a particular purpose, the approach would be even more useful: for example, one could test the efficacy of augmented reality by mapping the ways scholars engage with and use AR environments.

End of project MiPP workshop

I was at the closing MiPP project workshop in Sussex last week. Due to a concatenation of various circumstances, I had to take a large broomstick – which will be used in next week’s motion capture exercises at Butser Farm and in Sussex – on a set of trains from Reading, via the EVA London 2011 conference in Central London, to the workshop in Falmer, Sussex. Given this thing is six feet tall and required its own train seat (see picture), I got a variety of looks from my fellow passengers, especially on the Underground, ranging from suspicion to pity to humour. I imagined how one might have handled a conversation: ‘There’s a logical explanation. Yes, it’s going to be used as a prop in an experiment to test the environment of Iron Age round houses in cyberspace versus the real thing in the present day.’ ‘Oh yes? And that’s your idea of a logical explanation, is it?’

Of course I could have really freaked people out by getting off the train at Gatwick Airport and wandering around the terminal, asking for directions to the runway.

As with the entire MiPP project, the workshop was highly interdisciplinary. A varied set of presentations included ones from Bernard Frischer of the University of Virginia, on the digital representation of sculpture, and from colleagues at Southampton on the fantastic PATINA project, all of which coalesced around questions of process, and how we represent it. Tom Frankland’s presentation on studying archaeological processes, including such offsite considerations as the difference between note-taking in the lab and in the field, filled in numerous gaps in the documentation that our work at Silchester last summer left.

When I got to my feet on day two to present, I veered slightly off my promised topic (as with most presentations I have ever given) and elected instead to reflect on the nature of remediated archaeological objects. I would suggest that there is a three-way continuum on which any digital object representing an archaeological artefact or process may be plotted: the empirical, the interpretive and the conjectural. An empirical statement, of the kind Dr Peter Reynolds, the founder of Butser Farm, would have approved, might state that ‘the inner ring of this round house comprised twelve upright posts, because we can discern twelve post holes in ring formation’. An interpretative conclusion might be built on top of this, stating that, because ceramic sherds were found in the post holes, cooking and/or eating took place near this inner ring. This could in turn lead to a conjecture that a particular kind of meat was cooked in a particular way at this location, based not on interpretation or empirical evidence immediately to hand, but on the general context of the environment, and on what is known more broadly about Iron Age domestic practice.
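For what it is worth, the continuum is straightforward to operationalize when recording digital objects. The sketch below is entirely my own illustration (not MiPP’s data model): it tags statements about the round house with their evidential basis, so that a rendering could, say, draw empirical elements solidly and hedge the rest.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Basis(Enum):
    EMPIRICAL = "empirical"        # grounded in directly observed evidence
    INTERPRETIVE = "interpretive"  # inference built on empirical findings
    CONJECTURAL = "conjectural"    # drawn from wider context, not local evidence

@dataclass
class Statement:
    claim: str
    basis: Basis
    supports: Optional[list["Statement"]] = None  # what this claim builds on

posts = Statement("inner ring comprised twelve upright posts "
                  "(twelve post holes in ring formation)", Basis.EMPIRICAL)
cooking = Statement("cooking/eating took place near the inner ring "
                    "(ceramic sherds in the post holes)", Basis.INTERPRETIVE, [posts])
meat = Statement("a particular meat was cooked here in a particular way",
                 Basis.CONJECTURAL, [cooking])

for s in (posts, cooking, meat):
    print(f"[{s.basis.value:>12}] {s.claim}")
```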

More on all this next week, after capture sessions at Butser.