Questions about placenames

The CHALICE project is led by EDINA at the University of Edinburgh and, along with ourselves at CeRch, partnered by Edinburgh’s Language Technology Group and the Centre for Data Digitization and Analysis in Belfast. The aim is to compile a Linked Data gazetteer of placenames in RDF, derived by using the Edinburgh geoparser to automatically extract geospatial entities from the publications of the English Place-Name Survey. The division of labour: EDINA leads and coordinates, LTG does the heavy lifting, CDDA does the digitization, and we at CeRch look at the medium- and long-term implementation of the gazetteer by developing use cases with real research projects.
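
To make the ‘Linked Data gazetteer’ part concrete, here is a minimal sketch (plain Python, no RDF library) of how a single gazetteer entry might be expressed as triples. The URIs, predicates and coordinates are invented for illustration; they are not CHALICE’s actual vocabulary.

```python
# One hypothetical EPNS-derived gazetteer entry as (subject, predicate, object)
# triples. Everything under example.org is made up; rdfs:label and the W3C
# geo vocabulary are real, commonly used predicates.
PLACE = "http://example.org/chalice/place/silchester"  # hypothetical URI

triples = [
    (PLACE, "http://www.w3.org/2000/01/rdf-schema#label", '"Silchester"'),
    (PLACE, "http://example.org/chalice/attestedAs", '"Silcestre"'),  # historic form
    (PLACE, "http://www.w3.org/2003/01/geo/wgs84_pos#lat", '"51.357"'),
    (PLACE, "http://www.w3.org/2003/01/geo/wgs84_pos#long", '"-1.083"'),
]

def to_ntriples(triples):
    """Serialize (s, p, o) tuples as N-Triples lines; quoted objects are literals."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else f"<{o}>"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

The point of the exercise is that once historic attestations and modern coordinates hang off a stable URI, anyone else’s data can link to the same place without negotiation.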

So far, it seems that there might be an interesting link-up with CCH’s Prosopography of Anglo-Saxon England project. This contains no historical sources as such; rather, it is a collection of references and ‘signposts’, whose aim is to build up lists of individuals who appear throughout the Anglo-Saxon sources. This includes a list of *modern* placenames associated with different references throughout the sources; one useful application might be to connect the modern toponyms in EPNS with this, allowing searching from a separate collection using historic (i.e. Anglo-Saxon) variants. It seems to me – and I will be happy to be corrected by anyone with more philological credentials than myself – that the Anglo-Saxon material is probably the richest and most interesting seam of material for the kind of coordination that a CHALICE-type gazetteer can bring. Or maybe I am being unduly influenced by recently reading Archaeology, Place-Names and History: An Essay on Problems of Co-ordination by F. T. Wainwright (1962), recommended (and indeed lent) to me by Jo Walsh, CHALICE’s project manager (and reviewed by the same here).

So last Friday, we all met at the EPNS’s premises in Nottingham (or rather the premises of the University of Nottingham’s Institute for Name Studies, which hosts them) for a JISC-funded kickabout on the subject. My sketchy summary of the discussion follows.

  • How can we develop gazetteers suitable for wider use? Probably by using standards which others do not have to bust a gut to follow, and which provide stability.
  • The Getty Thesaurus is an example of a stable gazetteer. Geonames is more problematic: it lacks stability in terms of content, *but* by publishing stable URIs it at least documents and exposes that instability. Cf. for example the sameas.org concept definition service, which was mentioned a few times: it seems that, as an abstract entity on the linked data web, the instance of ‘Stuart Dunn’ that I am pretty sure refers to me in fact belongs to several different higher-level entities. Whether that is due to the Web’s instability or my own, I’m not sure.
  • It was noted that while the concept is constant, URLs can become inappropriate – e.g. the Vision of Britain website has data from Estonia.
  • OS research has looked at issues such as namespace hosting – this has important implications for going beyond geographical areas (such as England).
  • Different people produce different things: are these different resources, or can they be brought together as a single resource? Theoretically they can, but much of the EPNS’s material is on paper. There is nothing in the structure that would forbid it, but it is not digital.
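
On the sameas.org point: at heart, such a co-reference service bundles pairwise owl:sameAs assertions into equivalence sets, which is exactly how one ‘Stuart Dunn’ can end up belonging to several higher-level entities. A toy sketch, with made-up URIs:

```python
# Bundle pairwise sameAs assertions into equivalence sets using union-find.
# All URIs below are illustrative; the geonames ID is not checked against
# the real service.
def bundle(pairs):
    """Return frozensets of co-referent URIs, given (a, b) sameAs pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)

    groups = {}
    for x in parent:
        groups.setdefault(find(x), set()).add(x)
    return [frozenset(g) for g in groups.values()]

pairs = [
    ("http://dbpedia.org/resource/Edinburgh", "http://sws.geonames.org/2650225/"),
    ("http://sws.geonames.org/2650225/", "http://example.org/places/edinburgh"),
    ("http://example.org/people/stuart-dunn", "http://example.org/staff/sdunn"),
]
for group in bundle(pairs):
    print(sorted(group))
```

The instability the bullet describes shows up here too: add or retract one sameAs pair and the bundles silently merge or split.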

A definitive report of the meeting will be produced in due course.

Grey Literature

There’s an interesting discussion going on on the Forum on Information Standards in Heritage (FISH) list, on which I have been lurking and keeping one eye, concerned with the theme of standardizing grey literature. For the non-cognoscenti, grey literature records are reports about heritage objects and activities (at least in this context, but the term is known in other areas too), especially archaeological excavations, which are not widely available and therefore not widely read. The thread was started by the data standards section of English Heritage, with the aim of establishing how the heritage community might go about standardizing the reporting process, and thereby making the grey literature more accessible. Numerous approaches have been discussed – such as Catherine Hardman of the ADS mentioning the A&H e-Science programme-funded Archaeotools project, which uses natural language processing to index grey lit. on the basis of what, where and when entities after it has been deposited – although this depends, of course, on the resource being digital in the first place (or having been digitized), which it may not have been. The virtues of paper record keeping have been aired in the discussion, and clearly no one is suggesting doing away with it.
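
To illustrate the kind of what/where/when indexing just described: the real Archaeotools pipeline uses trained NLP models, so the word lists and pattern below are crude stand-ins, but the shape of the output is the point.

```python
# Toy what/where/when indexer for a grey-literature report. The gazetteer
# sets and the period regex are invented for illustration only.
import re

WHERE = {"silchester", "reading", "hampshire"}
WHAT = {"coin", "kiln", "ditch", "pottery"}
WHEN_PATTERN = re.compile(r"\b(iron age|roman|medieval|\d{3,4}\s*(?:bc|ad))\b")

def index_report(text):
    """Return the what/where/when entities found in a report's text."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "what": sorted(tokens & WHAT),
        "where": sorted(tokens & WHERE),
        "when": sorted(set(WHEN_PATTERN.findall(text.lower()))),
    }

report = "Excavation of a Roman kiln and pottery scatter at Silchester, Hampshire."
print(index_report(report))
```

Even this crude version makes the dependency on digitization obvious: there is nothing here to run until the report exists as text.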

My own thoughts: the word ‘literature’ of course implies a fundamentally non-digital way of doing things. Leif Isaksen has raised, on the list, the importance of ‘grey data’, and I think this raises fundamental issues of *how the reports are compiled*, or rather how they could or should be. As I and others have discussed elsewhere, the process of gathering data in archaeology and heritage is faced with many new digital opportunities: the good old VERA project at Reading being a case in point. In many cases, perhaps some elements of grey literature – and only some, before anyone writes any angry comments: those reports which document projects that are already gathering significant amounts of data digitally – should be seen as documentation and interpretation of that data, drawing perhaps on some of the good practice models of the old AHDS. This would enable the depositor to ‘tie’ the report to whatever format the data may be in – photos, GIS/GPS points, spreadsheets, etc. ‘Standardization’, such as it is, could then be drawn from schema types such as RDF. In such cases, why go through the fundamentally ‘literary’ process of compiling a grey literature report, when something much more lightweight could and should be possible in the digital age?

“Reading” maps

Went for a nice walk at the weekend from Reading along the Kennet and Avon canal to Aldermaston. Took the two relevant 1:50000 OS Landrangers, despite the fact that the whole route lies on the towpath, thus making getting lost difficult. A very pleasant stroll, apart from an exchange of Anglo-Saxon pleasantries with a careering lycra lout on a tensile graphite framed, Boudica razor tyred, Effoff Mk II Superbike. Despite following such a topographically well-defined route on the OS, however, I still noticed a strong, and entirely irrational, impulse to follow the route on the sheet. I found myself obsessively checking the path at every bridge, lock and level crossing. This is an issue addressed by Mike Parker in his recent book, Map Addict (Harper Collins 2010). This wonderful tome wanders with glorious inconsistency around the obsessions and experiences of Parker, a self-confessed ‘consummate map junkie’; and one stop upon the journey is a discussion of gender-specific map-reading, in the process debunking the myth that women cannot read maps. The distinction Parker draws is between a ‘male’ impulse to plan routes, measure distances and note waymarkers, versus a ‘female’ impulse to navigate by semantics and points of (personal) significance. This is just one issue in so-called ‘cognitive spatial literacy’ (see, e.g. this paper), which is likely to become more and more important not just in research as ‘virtual world’ tools become more prevalent, but also in how research is done. It’s critical to note that there are certain assumptions in such ‘datascapes’, and one important way of characterizing these is how we perceive the data we are observing. On the other hand, one can’t tar all spatial digital representations with this brush (a point made very eloquently by Parker in the chapter entitled ‘Pratnav’); such assumptions have always been there, even in the venerable Ordnance Survey.
To give one example, reprising my post from February about battle sites: an article in the current issue of Sheetlines, the journal of the Charles Close Society, notes the location in OS mapping of several historic battles, but in doing so draws attention to the fact that these are represented as authoritative points, when in reality they probably weren’t. In other words it invites a kind of ‘spatial reading’ that the subject might not justify.

AHeSSC atque Vale

A slight hiatus on the blog front. Not, for once, due to idleness or indolence (at least not entirely), but more due to a faulty laptop and extended absence from the office.

Last Friday saw the final projects meeting of the Arts and Humanities e-Science Initiative, held in the pleasant surroundings of the Anatomy Theatre and Museum at King’s College London. This was a great opportunity for a final reflection on what the seven projects have achieved, and where things might go beyond the end of the funding (AHeSSC itself finishes at the end of this month). I was somewhat taken with the term ‘digital craftsmanship’, which implies some concept of ‘making’. Certainly in the first four presentations – from Medieval Warfare on the Grid, eSAD, e-Curator and Archaeotools, all of which have a historical or archaeological interest of some kind – one can detect commonalities of the ‘making’ variety: making a hypothesis based on Agent-based Modelling, building an interpretation using an interpretation support ontology, forming questions of what, why or when around distributed datasets, and so on. And the three that followed – e-Dance, Purcell Plus and MusicSpace – are similarly concerned with digital creativity.
It occurred to me that it is useful to think of these different kinds of ‘making’ on one side, and on the other the intellectual and/or interpretive things you can do with what is made: reception studies; digital repatriation of cultural artefacts – providing a digital replica of an artefact removed from another country to that country (no one is claiming that any of the modes of making are perfect, or would please everyone); reading and understanding texts; visualising and de-constructing interpretive processes… And in the middle you have the difficult things that enable and hinder mapping from one side to the other: the absence of mass digitization programmes that steer engagement with digital content in ways that are (or can be) totally at odds with what is interesting, or what it might be intellectually desirable to do; copyright (ugh, don’t go there, say especially those concerned with music); and the fact that most engagements with these technologies are driven by individual research questions and the success (or otherwise) of individual project grants, and not by overarching research paradigms. This, I think, has been both the upside and the downside of the Initiative: it has – wonderfully – fulfilled its aim, set out in 2005, to be driven by humanities and arts research questions. The problem now is that it is only driven by humanities and arts research questions. Which raises the question of how this work can be sustained when there is no Initiative to support it.
What will ride to the rescue? The Digital Economy? The problem with the digital economy is that it is going through an analogue recession: this means that when our paymasters say they want us to collaborate, it is not because they like collaboration; it is because they think it will bring in the folding stuff. Not a long-term model. Perhaps we should just accept that this will be a very, very long and slow process: even though the realisation that e-science is NOT just Grid has come about in less than five years, sustaining and growing the kind of fantastic, ground-breaking research that the Initiative has been able to support in the seven projects, six workshops and three demonstrators will take a long time. As was said several times in the workshop, it will take engagement with the research councils, a recognition (not least by them) that the benefits will not all come in the short term, and a readiness to capitalize on highly relevant concepts such as Linked Data.

It’s by no means all doom and gloom. In a very upbeat summing up, Dave De Roure noted that the Digital Humanities have been around considerably longer than e-Science, and may yet outlast it (notwithstanding the recent trenchant analysis of Melissa Terras in her keynote at Digital Humanities 2010). The work of the projects has been, by any standard, world leading in the field, and the opportunities which have been created – and which have been exploited by our colleagues – are surely unquestionable. And as Dave pointed out, we have been able to look well beyond so-called ‘acceleration of research’ – doing things faster, cheaper and bigger – and instead done new things, and done them better. And I think there is a lesson about what kind of support a programme of this type needs, which is equally interesting. In 2006 we, AHeSSC, were commissioned to provide helpdesk-type support, but I think it is probably fair to say that something a little more sophisticated was needed and – hopefully – provided.

MiPP (1)

And so begins our Motion in Place Platform project, an AHRC DEDEFI grant that CeRch has with colleagues in Sussex and Bedford. The idea is to assess how performance documentation technologies can be used to capture and describe the archaeological research process. The aim is to reconsider and reconceptualize how archaeology is done, and to look at different approaches to the 3D reconstruction and understanding of heritage sites. Thanks to the kind permission of Professor Michael Fulford at the University of Reading, we are able to use the marvellous Silchester Roman Town excavation in Hampshire as a test bed. Silchester is a wonderful panorama of Iron Age and Imperial Roman occupation, leading to complete abandonment and thus fantastic preservation of the stratigraphies – but a big and complicated dig, which poses some daunting challenges for our project.

Last week, Matt Earley and Alex Chasmar from Animazoo were on site testing the kit for complete unknowns, such as: can ultrasonic motion trackers actually work outdoors, near a big and noisy generator? The answer, fortunately, is yes (if they couldn’t, we would have had a problem). The tests went extremely well, the only remaining variable being whether we get a strong wind (likely, in such an exposed spot).

MoCap tests

DARIAH workshop in Athens

Last week I was in Athens for a workshop organized by the DARIAH project, entitled ‘Scholarly activity and information process’.

This was principally about understanding the processes of research that an e-infrastructure – whatever that might be – underpins and supports. Numerous perspectives emerged on how such a process might be conceptualized; but I think the common theme that emerged was definition, and how we define the things we are talking about. For example, a starting point for much of the workshop was John Unsworth’s conception of the ‘scholarly primitive’: building blocks of research which, in a 2000 paper, he defined as ‘Discovering, Annotating, Comparing, Referring, Sampling, Illustrating and Representing’. Seamus Ross’s critique of this, however, conceived of these more as processes; whereas a primitive should be seen as something which engages more fundamentally with generating knowledge from ‘primary data’ (itself a thing which used to have a widely accepted definition but, I would suggest, is much harder to pin down in the digital age). One example he gave was ‘question forming’ – which, of course, is not a primitive aspect of research confined to the digital milieu. Simon Mahony from UCL developed this idea with a perspective on the titles of projects his students come up with – titles which rarely include an actual question defining how the work they do will bring new perspectives.

For me, it was interesting that this question of ‘what is the building block of [digital] humanities research?’ reflects so closely the discussion in the last year or so on e-science fundamentals. Both areas – digital humanities and e-science – share, I think, an implicit desire to show that they are fully professionalized academic disciplines, which I have no problem with (despite my own suspicion that academic disciplines are themselves basically nineteenth-century concoctions to make Oxbridge colleges look tidier). But the problem is always one of language and description. This applies to research methods as well as to objects of research. César González-Pérez’s very interesting presentation on methods, for example, introduced the idea of the ‘method fragment’: a particular way of approaching or manipulating information, which can be defined consistently, and linked to others in a non-linear way to describe an overarching workflow. (The non-linear bit is, I think, crucial.) This agrees well with the position set out in a forthcoming paper in the proceedings of AHM2009 by myself, Sheila Anderson and Tobias Blanke, which utilizes Short and McCarty’s famous ‘Methodological Commons’ for the digital humanities. However, again we come back to the problem of definition and language. It is convenient and logical for us, as developers and providers of a research e-infrastructure, to conceive of the research process in such a way, but we also have to remember that an historian about to embark on a research project does not go to their bookshelf, take down the Big Bumper Book of History Research Methods, select one, and stick to it for two years. Even if such a book existed, and if it were fully comprehensive, footnoted, and agreed by the history research community (the economic history community? Or political history? Or social history? Since when have academics ever agreed about such things anyway?), they would select, choose, modify, ignore, change, make it up as they go along… and if an e-infrastructure gets in the way of that, it is doomed. I was also glad to have the opportunity to get off my chest a problem I have with the word ‘tool’ to describe a software application, interface etc.: a hammer is not likely to give me ideas and thoughts on better and better ways to knock in nails. However, a piece of research software might – if it is any good – give me pause to think about how I approach data, and to think computationally about the knowledge I could generate by analyzing it. As Alexandra Bounia made brilliantly clear in her presentation – which invited us to think what research we would do if we were putting together a museum exhibition, and how we would do it and why – we are talking here about a whole lot more than acquiring, storing and distributing data. An obvious point maybe, but one that is too important not to be made explicitly in such discussions.
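
The ‘method fragment’ idea – fragments defined independently and composed into a workflow that is a graph rather than a fixed sequence – can be sketched very simply. The fragment names below are my own invented examples, not from González-Pérez’s presentation.

```python
# Model method fragments as a dependency graph and recover one valid
# ordering. The graph, not any single ordering, is the workflow.
from graphlib import TopologicalSorter

# each fragment maps to the fragments it depends on
workflow = {
    "annotate sources": {"digitize corpus"},
    "compare variants": {"annotate sources"},
    "map placenames": {"digitize corpus"},
    "publish dataset": {"compare variants", "map placenames"},
}

order = list(TopologicalSorter(workflow).static_order())
print(order)  # one of several valid orderings
```

That several orderings are valid is the non-linear point: the researcher can map placenames before or after comparing variants, and the workflow description does not care.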

Digital Classicist 2010 summer seminar programme

Institute of Classical Studies

Meetings are on Fridays at 16:30 in room STB9 (Stewart House) Senate House, Malet Street, London WC1E 7HU

ALL WELCOME

Seminars will be followed by refreshments

  • Jun 4 Leif Isaksen (Southampton) Reading Between the Lines: unearthing structure in Ptolemy’s Geography
  • Jun 11 Hafed Walda (King’s College London) and Charles Lequesne
    (RPS Group) Towards a National Inventory for Libyan Archaeology
  • Jun 18 Timothy Hill (King’s College London) After Prosopography? Data modelling, models of history, and new directions for
    a scholarly genre.
  • Jun 25 Matteo Romanello (King’s College London) Towards a
    Tool for the Automatic Extraction of Canonical References
  • Jul 2 Mona Hess (University College London) 3D Colour Imaging
    For Cultural Heritage Artefacts
  • Jul 16 Annemarie La Pensée (National Conservation Centre) and
    Françoise Rutland (World Museum Liverpool) Non-contact 3D laser scanning as a tool to aid identification and interpretation of archaeological artefacts: the case of a Middle Bronze Age Hittite Dice
  • Jul 23 Mike Priddy (King’s College London) On-demand Virtual
    Research Environments: a case study from the Humanities
  • Jul 30 Monica Berti (Torino) and Marco Büchler (Leipzig)
    Fragmentary Texts and Digital Collections of Fragmentary Authors
  • Aug 6 Kathryn Piquette (University College London) Material
    Mediates Meaning: Exploring the artefactuality of writing utilising qualitative data analysis software
  • Aug 13 Linda Spinazzè (Venice) Musisque Deoque. Developing new
    features: manuscripts tracing on the net
For more information on individual seminars and updates on the programme, see http://www.digitalclassicist.org/wip/

Launch of IRT and IRCyr

The workshop brought out many fascinating aspects of Libyan archaeology. As well as contributions from Libyan colleagues, presenters included Paul Bennett, Will Wootton from KCL, and Chris Blandford, who is preparing a World Heritage Management Plan for Cyrene. Also from KCL, Charlotte Roueche and Hafed Walda launched the Inscriptions of Roman Tripolitania and Inscriptions of Roman Cyrenaica websites. These are digital repositories of the corpora of inscriptions gathered in Libya in the 1950s by Joyce Reynolds, with the inscriptions marked up in EpiDoc XML. What is most significant is that by using EpiDoc and describing the information within them using this standard, they provide the core for future work that others can build on in the digital sphere. It strikes me – a notion only reinforced by conversations with others here, including Charlotte, Hafed and Will – that the majority of the challenges will be non-technical. In the humanities we see publication as the final, finished thing (a view constantly reinforced by those in charge of pay, promotion and tenure), yet one of the most interesting aspects of the kind of work we do lies somewhere in between doing interpretive research and publishing it. How do we represent the interpretive process in the publication, while preserving its integrity and authority? This question is absolutely fundamental if we are to get meaningful knowledge by applying computational approaches in the humanities. And it seems rather beautifully ironic that it should arise so prominently and urgently in the field of epigraphy – inscriptions would seem to be the ultimate ‘final publication’, but they are not: walking round Leptis and Sabratha for the first time brought home the re-use and re-contextualization that an inscription can undergo before it comes to our attention, but before it was ‘published’ – it can be altered, erased, the block it is inscribed on used for something else entirely, and so on.
And the use of EpiDoc shows that we can continue building knowledge about inscriptions and their associations long after they have been deciphered, translated and pinpointed on the map.
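
To see why EpiDoc markup supports that kind of ongoing knowledge-building: even a tiny TEI/EpiDoc-style fragment can be queried programmatically long after publication. The inscription below is invented for illustration, not drawn from IRT or IRCyr, and the markup is simplified.

```python
# Pull the people, places and dates out of a TEI/EpiDoc-style fragment.
# TEI elements live in the TEI namespace, so tags must be qualified.
import xml.etree.ElementTree as ET

TEI = "{http://www.tei-c.org/ns/1.0}"
fragment = """
<div xmlns="http://www.tei-c.org/ns/1.0" type="edition">
  <ab>
    <persName>Septimius Severus</persName> restored the
    <placeName>Leptis Magna</placeName> forum in
    <date when="0203">AD 203</date>.
  </ab>
</div>
"""

root = ET.fromstring(fragment)
people = [e.text for e in root.iter(TEI + "persName")]
places = [e.text for e in root.iter(TEI + "placeName")]
dates = [e.get("when") for e in root.iter(TEI + "date")]
print(people, places, dates)
```

Because the names and dates are marked up rather than buried in running prose, they can be linked to gazetteers and prosopographies – exactly the kind of coordination discussed above.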

Returning cultural artefacts

This week has seen some extraordinary events here in Tripoli, at the centre of which are three extraordinary people: Ken Oliver, Jeanne Hugo and Liz Barnett. In the last few years, these three have come to possess artefacts from Libya’s Roman past of great historical value. The appropriate time having arrived, all three have taken the decision – some would say it is a momentous one – to return these objects to their original location in Libya. Various news media have picked up on this, including the BBC, and no doubt others will follow suit. Some years of negotiation and logistical planning have led up to this week’s handover of the artefacts to the Libyan authorities in Tripoli, supported by numerous people and organizations both in the UK and in Libya. The handover itself was marked last week by a ceremony in the Museum of Libya presided over by Mr. Saleh A. Abdalah, the Chairman of the Department of Antiquities. This was followed by an exhibition of the artefacts in the Museum’s soaring atrium, and a day-long archaeological workshop to which I and others from KCL contributed.

The artefacts include the ‘Belgammel Ram’, which was returned by Ken. It is a bronze alloy battering ram from the prow of an ancient warship, found on the seabed near the mouth of the Wadi Belgammel (‘River of Lice’) in the 1960s. The ram is a magnificent, beautiful piece of metalwork engineering: one can readily see how it must have captured the imagination when it was discovered. Extensive scientific analysis was carried out on it in the UK prior to its return to Tripoli, and the ceremony included the presentation of reports on this work to Mr. Abdalah by Paul Bennett of the Society for Libyan Studies.

The other artefacts, mainly domestic objects, are being returned by Liz and Jeanne, whose family have owned them since the 1950s when their father was a headmaster at a British army base in Libya. They include an assemblage of coins, mainly from the Christian period, but with at least five Islamic examples, some fragments of Roman glass including a blown flask with a separately attached neck and handle, terracotta figurines including two wrestlers, a gaming counter, miscellaneous beads, several small oil lamps, a larger lamp in the shape of a follower of Bacchus, and several chunks of floor plaster with mosaics attached. All of these will now go to the museum at Leptis Magna, the site from whence they originally came.

The Museum of Tripoli itself is well worth a visit – we had a chance for a good look round in the hours before the ceremony – housed in a palatial building of the early twentieth century. It comprises two floors beneath three great polychrome domes, blue and white glass on the inside, blinding gold on the outside. Outside on the street, posters proclaimed the return of the artefacts. The collection is a varied cross-section of Libyan society and history, starting logically enough with a prehistoric room, proceeding chronologically across two floors to rooms devoted to contemporary Libya, and also to its future. Whole rooms on the ground floor are given to Leptis Magna and Sabratha, with statuary the main focus. Interesting to be reminded of the contrast between the solid soldier forms, and the more feminine draped figures… the old Mars and Venus thing, I guess. Much use is made of The Digital in this museum. Interactive 3D displays, some of them based on projecting lasers into vapour, monophonic chambers where you can stand under a sort of plastic umbrella and be the sole audience of a commentary (in Arabic) on the display in front of you, and a CAVE-like room with a floor on to which is projected an image of water, which moves as you walk across it. Very nice.

The ceremony with which the artefacts were received this week, and the many conversations we have had about them, have given us a chance to reflect on the whole vexed issue of the return of cultural and archaeological objects to their ‘source countries’. Outside Libya, objects are detached from their human and cultural contexts. There must be many people out there – Libyans and non-Libyans – who have come to own such artefacts over the years. Hopefully this week’s events will raise wider awareness that there is both a practical model and a powerful intellectual argument for their return to Libya, and their public display there.

Sabratha
Statue at Sabratha