Call for members: Major new Institute opens at King’s College London with Getty Foundation support

The Project

The 18-month Institute in Digital Art History is led by King’s College London’s Department of Digital Humanities (DDH) and Department of Classics, in collaboration with HumLab at the University of Umeå, with grant support provided by the Getty Foundation as part of its Digital Art History initiative.

It will convene two international meetings where Members of the Institute will survey, analyse and debate the current state of digital art history, and map out its future research agenda. It will also design and develop a Proof of Concept (PoC) to help deliver this agenda. The source code for this PoC will be made available online, and will form the basis for further discussions, development of research questions and project proposals after the end of the programme.

To achieve these aims we will bring together leading experts in the field to offer a multi-vocal and interdisciplinary perspective on three areas of pressing concern to digital art history:

● Provenance, the meta-information about ancient art objects,

● Geographies, the paths those objects take through time and space, and

● Visualization, the methods used to render art objects and collections in visual media.

Current Digital Humanities (DH) research in this area has a strong focus on Linked Open Data (LOD), and so our exploration will begin there. The programme's geographical emphasis on the art of the ancient Mediterranean world will carry through to the second meeting, to be held in Athens. The Mediterranean has received much attention from both the Digital Classics and DH communities, and is thus rich in resources and content. The programme will therefore bring together two existing scholarly fields and seek to improve and facilitate dialogue between them.

We will assign Members to groups according to the three areas of focus above. Each group will be tasked with producing a detailed research specification, setting out the most important next steps for that part of the field, how current methods can best be employed to take them, and what new research questions the participants see emerging.

The meetings will follow a similar format: initial participant presentations and introductions, followed by collaborative programme development and design activities within the research groups, including scoping of relevant aspects of the PoC. Further discussion and collaborative writing will then form the basis of each event's report. Each day will conclude with a plenary feedback session, where participants will share and discuss short reports on their activities. All of the sessions will be filmed for archival and note-taking purposes, and professional facilitators will assist in the process at various points.

The scholarly outputs, along with the research specifications for the PoC, will provide tangible foci for a robust, vibrant and sustainable research network, comprising the Institute participants as a core but extending across the emerging international and interdisciplinary landscape of digital art history. At the same time, the programme will provide participants with support and space for developing their own academic agendas and profiles. In particular, Members will be encouraged, and offered collegial support, to develop publications, both single- and co-authored, following their own research interests and those arising from the Institute.


The Project Team

The core team comprises Dr Stuart Dunn (DDH), Professor Graeme Earl (DDH) and Dr Will Wootton (Classics) at King's College London, and Dr Anna Foka of HumLab, Umeå University.

They are supported by an Advisory Board consisting of international independent experts in the fields of art history, Digital Humanities and LOD. These are: Professor Tula Giannini (Chair; Pratt Institute, New York), Dr Gabriel Bodard (Institute of Classical Studies), Professor Barbara Borg (University of Exeter), Dr Arianna Ciula (King’s Digital Laboratory), Professor Donna Kurtz (University of Oxford), and Dr Michael Squire (King’s College London).


Call for participation
We are now pleased to invite applications to participate as Members in the programme. Applications are invited from art historians and professional curators who (or whose institutions) have a proven and established record in using digital methods, have already committed resources, or have a firm interest in developing their research agendas in art history, archaeology, museum studies, and LOD. You should also be prepared to contribute to the design of the PoC (e.g. providing data or tools, defining requirements), which will be developed in the timeframe of the project by experts at King’s Digital Lab.

Membership is open to advanced doctoral students (provided they can demonstrate close alignment of their thesis with the aims of the programme), Faculty members at any level in all relevant fields, and GLAM curation professionals.

Participation will primarily take the form of attending the Institute’s two meetings:

King’s College London: 3rd – 14th September 2018

Swedish Institute at Athens: 1st – 12th April 2019

We anticipate offering up to eighteen places on the programme. All travel and accommodation expenses to London and Athens will be covered. Membership is dependent upon commitment to attend both events for the full duration.

Potential applicants are welcome to contact the programme director with any questions: stuart.dunn@kcl.ac.uk.

To apply, please submit a single A4 PDF document set out as follows. Please ensure your application includes your name, email address, institutional affiliation, and street address.


Applicant Statement (ONE page)
This should state what you would bring to the programme, the nature of your current work and involvement in digital art history, and what you believe you could gain as a Member of the Institute. There is no need to indicate which of the three areas you are most interested in (although you may if you wish); we will use your submission to create the groups, considering both complementary expertise and the ability of some members to act as translators between the three areas.

Applicant CV (TWO pages)
This should be a two-page CV, listing your five most relevant publications (including digital resources, if applicable).

Institutional support (ONE page)
We are keen for the ideas generated in the programme to be taken up and developed by the community after the period of funding has finished. Therefore, please use this section to provide answers to the following questions relating to your institution and its capacity:

1. Does your institution provide specialist Research Software Development or other IT support for DH/LOD projects?

2. Is there a specialist DH unit or centre?

3. Do you, or your institution, hold or host any relevant data collections, physical collections, or archives?

4. Does your institution have hardware capacity for developing digital projects (e.g. specialist scanning equipment), or digital infrastructure facilities?

5. How will you transfer knowledge, expertise, contacts and tools gained through your participation to your institution?

6. Will your institution a) be able to contribute to the programme in any way, or b) offer you any practical support in developing any research of your own which arises from the programme? If so, give details.

7. What metrics will you apply to evaluate the impact of the Ancient Itineraries programme a) on your own professional activities and b) on your institution?

Selection and timeline
All proposals will be reviewed by the Advisory Board, and members will be selected on the basis of their recommendations.

Please email the documents specified above as a single PDF document to stuart.dunn@kcl.ac.uk by Friday 1st June 2018, 16:00 (British Summer Time). We will be unable to consider any applications received after this. Please use the subject line “Ancient Itineraries” in your email. 

Applicants will be notified of the outcomes on or before 19th June 2018.

Privacy statement

All data you submit with your application will be stored securely on King's College London's electronic systems. It will not be shared, except in strict confidence with Advisory Board members for the purposes of evaluation. Furthermore, your name, contact details and country of residence will be shared, in similar confidence, with the Getty Foundation to ensure compliance with US law and any applicable US sanctions. Further information on KCL's data protection and compliance policies may be found here: https://www.kcl.ac.uk/terms/privacy.aspx; and information on the Getty Foundation's privacy policies may be found here: http://www.getty.edu/legal/privacy.html.

Your information will not be used for any other purpose, or shared any further, and will be destroyed when the member selection process is completed.

If you have any queries in relation to how your rights are upheld, please contact us at digitalhumanities@kcl.ac.uk, or KCL's Information Compliance team at info-compliance@kcl.ac.uk.

(Not quite) moving mountains: recording volcanic landscapes in digital gazetteers

Digital gazetteers have been immensely successful as a means of linking and describing places online. GeoNames, for example, now contains over 10,000,000 geographical names corresponding to more than 7,500,000 unique features. However, as we will be outlining at the ICA Digital Technologies in Cartographic Heritage workshop next month in relation to the Heritage Gazetteer of Cyprus project, one assumption which often underlies them is fixity: an assumption that a name, a place and, by extension, its location on the Earth's surface are immutably linked. This allows gazetteers to be treated as authorities. For example, a gazetteer with descriptions fixed to locations can be used to associate postal codes with a layer of contemporary environmental data and describe relationships between them, or to create a look-up list for the provision of services. It can also be very valuable for research: where a digital edition of a text mentions places that are contained in a parallel gazetteer, those mentions can be linked to citations and external authorities, and on to references in other texts.
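To illustrate the 'authority' role in its simplest form, here is a minimal sketch in Python of a gazetteer lookup linking a place-name mention in a text to a stable record (the entry shown is illustrative, not drawn from GeoNames' actual service):

```python
# A minimal sketch of a gazetteer as an authority: a text mention resolves
# to one stable record. The entry below is illustrative only.
gazetteer = {
    "Athens": {
        "uri": "https://www.geonames.org/264371/",  # illustrative GeoNames-style URI
        "lat": 37.98, "lon": 23.73,
    },
}

def resolve(mention):
    """Link a place-name found in a digital edition to its authority record."""
    return gazetteer.get(mention)

print(resolve("Athens")["uri"])
```

The fixity assumption is visible even in this toy example: one key, one record, one location on the Earth's surface.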

However, physical geography changes. In the Aegean, where the African tectonic plate is subducting beneath the Eurasian plate to the north, this process has created the South Aegean Volcanic Arc, a band of active and dormant volcanic islands including Aegina, Methana, Milos, Santorini, Kolumbo, Kos, Nisyros and Yali. Each of these locations has a fixed modern aspect, and can be related to a record in a digital gazetteer. However, these islands have changed over the years as a result of historical volcanism, and representing that history adequately requires the flexibility of a digital gazetteer.


The island of Thera. The volcanic dome of Mt. Profitis Elias is shown viewed from the north.

I recently helped refine the entry in the Pleiades gazetteer for the Santorini Archipelago. Pleiades assigns a URI to each location, and allows information to be associated with that location via the URI. Santorini provides a case study of how multiple Pleiades URIs, associated with different time periods, can trace the history of the archipelago's volcanism. The five present-day islands frame two ancient calderas, the larger formed more recently in the great Late Bronze Age eruption, and the other formed very much earlier in the region's history. Originally, it is most likely that a single island was present, which fragmented over the millennia in response to the eruptions. Working backwards, therefore, we begin with a URI for the islands as a whole: http://pleiades.stoa.org/places/808255902. This covers the entire entity of the 'Santorini Archipelago'. We associate with it all the names that have pertained to the island group through history – Καλλιστη (Calliste; 550 BC – 330 BC), Hiera (550 BC – 330 BC) and Στρογγύλη (Strongyle; AD 1918 – AD 2000) – as well as the modern designation 'Santorini Archipelago' itself. These four names have been used, at different times, either as a collective term for all the islands or, in the case of Strongyle, for the (geologically) original single island. This URI-labelled entity has lower-level connections with the islands that were formed during the periods of historic volcanism: Therasia, Thera, Nea Kameni, Mikro Kameni, Palea Kameni, Caimeni and Aspronisi. Each, in turn, has its own URI.
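To make the structure concrete, here is a minimal sketch (my own illustration, not Pleiades' actual data model) of how a single URI-identified place can carry multiple attested names, each scoped to a time period:

```python
# A minimal sketch of a time-scoped gazetteer entry; not Pleiades' data model.
from dataclasses import dataclass, field

@dataclass
class Name:
    romanized: str
    attested: str | None = None   # original script, if any
    start: int = 0                # negative years = BC
    end: int = 0

@dataclass
class Place:
    uri: str
    title: str
    names: list[Name] = field(default_factory=list)

santorini = Place(
    uri="http://pleiades.stoa.org/places/808255902",
    title="Santorini Archipelago",
    names=[
        Name("Calliste", "Καλλιστη", -550, -330),
        Name("Hiera", None, -550, -330),
        Name("Strongyle", "Στρογγύλη", 1918, 2000),
    ],
)

def names_in(place, year):
    """Which names were in use in a given year?"""
    return [n.romanized for n in place.names if n.start <= year <= n.end]

print(names_in(santorini, -400))   # ['Calliste', 'Hiera']
```

The same pattern extends downwards: each constituent island is a Place with its own URI, linked to the archipelago-level entity.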


The Santorini Archipelago in Pleiades

Mikro Kameni and Caimeni are interesting cases, as they no longer exist on the modern map. Mikro Kameni is attested by the naval survey of Thomas Graves of HMS Volage (1851), and Caimeni by Thomaso Porcacchi in 1620. Both formed elements of what are now the Kameni islands, but because they have these distinct historical attestations, they are assigned URIs of their own, together with the time periods in which the sources attest their existence, even though they do not exist today.

This speaks to a wider issue with digital gazetteers and their role in the understanding of past landscapes. With the authority they confer on place-names, gazetteers developed without reference to the nuances of landscape change over time risk implicitly enforcing the view, no longer widely accepted, that places are immutable holders of history and historic events; where, in the words of Tilley in A Phenomenology of Landscape: Places, Paths and Monuments (1994), 'space is directly equivalent to, and separate from time, the second primary abstracted scale according to which societal change could be documented and "measured"' (p. 9). The evolution of islands due to volcanism shows clearly the need for a critical framework that avoids such approaches to historical and archaeological spaces.

Digital Classicist: Classical studies facing digital research infrastructures: from practice to requirements

Apologies are due to Agiatis Bernardou: I am a couple of weeks late posting my discussion of her paper in the Digital Classicist Seminar Series, Classical studies facing digital research infrastructures: from practice to requirements. Agiatis is from the Digital Curation Unit (DCU), part of the "Athena" Research Centre, and her talk focused in the main on the preparatory phase of DARIAH, the European Arts and Humanities research infrastructure project. She began by outlining her own research background in Classics, which contained very little computing (it surely cannot be a coincidence that the digital humanities is so full of former and practising archaeologists and classicists).

DARIAH is a technical and conceptual project, with the aim of providing a research infrastructure for the Arts and Humanities across Europe. In practice, it is an umbrella for other projects, involving a big effort in the areas of law and finance as well as technical infrastructure. A key part of this is to ensure that scholars in the arts and humanities are supported at each stage of the research lifecycle, which means ensuring that the requirements at each stage are understood. The DCU was part of the technical workpackage in DARIAH, and was tasked with doing this. Its approach was to develop a conceptual framework to map user requirements, using an abstract model to represent the information practices within humanities research.

This included an empirical study of scholarly research activity. The main form of data collection was interviews with humanities scholars; the design of the study included transcription, coding and analysis of recordings of these interviews. Context was provided by a good deal of previous work in this area, in the form of user studies of information browsing behaviour. In the 1980s, this carried the assumption that most humanists were 'lone scholars', with little interest in, or need for, collaborative practices. This, however, gave way to an increasingly self-critical awareness of how humanists work, highlighting practices such as annotation, which might be for the consumption of the lone scholar, but which might equally be a means of communicating interpretation and thinking. This in turn led to a consideration of scholarly primitives – low-level, basic things humanists do both all the time and, often, at the same time. Agiatis cited the types of information retrieval behaviour identified by D. Ellis, as revisited for the humanities by John Unsworth: discovering, associating, comparing, referring, sampling, illustrating and representing.

The DCU's aim was to produce a map of who does what and how. If one has a research goal, for example to produce a commentary on Homer, what are the scholarly activities one would need to achieve it, and what processes do those activities involve? To this end, Agiatis highlighted the following aspects that need to be mapped: Actor (researcher), Research activity, Research goal, Information object, Tool/service, Format and Resource type. The properties that link these include hasType, Creates, partOf, Searches, refersTo and Scholarly Activity.
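A minimal sketch of how such a map might be represented, using a few of the properties named above (this is my own reading of the model, not the DCU's implementation):

```python
# A minimal sketch of the mapping: typed entities joined by named properties.
# Entity kinds and example statements are illustrative, not the DCU's data.
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    kind: str   # e.g. 'Actor', 'ResearchGoal', 'ScholarlyActivity', 'InformationObject'
    name: str

# (subject, property, object) statements using properties named above
statements = [
    (Entity("ScholarlyActivity", "comparing manuscripts"), "partOf",
     Entity("ResearchGoal", "commentary on Homer")),
    (Entity("Actor", "researcher"), "Creates",
     Entity("InformationObject", "collation notes")),
    (Entity("ScholarlyActivity", "comparing manuscripts"), "refersTo",
     Entity("InformationObject", "digital edition")),
]

def activities_for(goal):
    """List the scholarly activities mapped as part of a research goal."""
    return [s.name for s, p, o in statements if p == "partOf" and o.name == goal]

print(activities_for("commentary on Homer"))   # ['comparing manuscripts']
```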

A meaningful map of these processes must include meaningful descriptions of information types. DARIAH therefore has to embrace multiple interconnected objects that need to be identified, represented and managed, so they can be curated and reached throughout the digital research lifecycle. In this regard, there is a distinction, second nature to most archaeologists, between the visual representation of information and hands-on access to objects.

The main interest of Agiatis's paper for me was the possibilities the DCU's approach holds for specific research problems. One could easily see, for example, how the www.arts-humanities.net methods taxonomy could be better represented as a set of processes rather than as a static group of abstract entities, as it is at the moment. And if one could specify the properties of a particular purpose, the approach would be even more useful: for example, one could test the efficacy of augmented reality by mapping the ways scholars engage with and use AR environments.

Digital Classicist: Aggregating Classical Datasets with Linked Data

Last week's Digital Classicist seminar concerned the question of Linked Data and its application to data about inscriptions. In his paper, Aggregating Classical Datasets with Linked Data, David Scott of the Edinburgh Parallel Computing Centre described the Supporting Productive Queries for Research (SPQR) project, a collaboration between EPCC and CeRch at KCL. The concept is that inscriptions contain many different kinds of information: personal names (gods, emperors, officials etc.), places, concepts, and so on. When epigraphers and historians wish to use inscriptions for historical research, they take a reflexive and extremely unpredictable approach to building links – both implicit and explicit – between these kinds of information. SPQR's long-term aim is to facilitate such searches across datasets, making it easier for classicists and epigraphers to establish links between inscriptions. SPQR is using as case studies the Heidelberger Gesamtverzeichnis, the Inscriptions of Aphrodisias, and the Inscriptions of Roman Tripolitania (the latter being the subject of a use case I undertook for the TEXTvre project last year).

There have been a number of challenges in the preparation of the data. Epigraphers, of course, are not computer scientists, and therefore do not prepare their data in such a way as to make it machine-readable. The data can therefore be fuzzy, incomplete, uncertain, and implicit or open to interpretation. Nor are epigraphers going to sit down and write programs to do their analysis. Epigraphers have highly interactive workflows that are difficult to predict, both methodologically and in terms of research questions: answering one question about inscriptions too often leads on to other questions of which the original workflow took no account. Epigraphic data, moreover, is distributed and has diverse representations. It can appear in Excel or Word, or in a relational database. It might be available via static or interactive webpages, or one might have to download a file. But there are overlaps in the content, in terms of e.g. places and persons, which might be separate or contemporaneous.

The SPQR approach is based on URIs: each subject and each relationship is given a URI, and each object is a URI or a literal. For example, the subject could be http://insaph.kcl.ac.uk/iaph2007/iAph10002/, the relationship a URI meaning 'material is', and the object the literal 'White marble'. This approach allows the user to build pathways of interpretation through sub-object units of the data.
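In RDF terms this is a simple triple. Here is a minimal sketch using the rdflib Python library (the namespace and material property are hypothetical stand-ins, not SPQR's actual vocabulary):

```python
# A minimal sketch of the triple pattern described above, using rdflib.
# The SPQR namespace and 'material' property below are hypothetical.
from rdflib import Graph, URIRef, Literal, Namespace

SPQR = Namespace("http://example.org/spqr/")   # hypothetical vocabulary

g = Graph()
inscription = URIRef("http://insaph.kcl.ac.uk/iaph2007/iAph10002/")
g.add((inscription, SPQR.material, Literal("White marble")))

# Find every inscription recorded as white marble
for subject in g.subjects(SPQR.material, Literal("White marble")):
    print(subject)
```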

SPQR is looking at inscriptions marked up in EpiDoc. In EpiDoc, one might find information on provenance; descriptions including date and language; edited texts; translations; findspots; and the material from which the inscriptions themselves were made. As my use case for IRT showed, the flexibility afforded by EpiDoc is of great value to digital epigraphers, but that flexibility can also count against consistent markup: an object's material, for example, can be encoded in more than one way – different representations of the same thing. SPQR is therefore re-encoding the EpiDoc using uniform descriptions. The EpiDoc resources also contain references to findspots: the name is given as ancientFindspot and modernFindspot (ancient findspots refer to the Barrington Atlas; modern names to GeoNames). This is an example of data being linked together: reference sets containing both ancient and modern places are queried simultaneously.

SPQR builds on the Linking and Querying Ancient Texts project, which used a relational database approach. The data – essentially, the same three datasets being used by SPQR – is stored as tables. Each row describes a particular inscription, and the columns contain attribute information such as date, place etc. In order to search across these, the user has to have all the tables available, or write an SQL query. This is not straightforward, since it relies on the data being consistently encoded and, as noted above, epigraphers using EpiDoc do not always encode things consistently.
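A minimal sketch of what such re-encoding might involve, assuming Python and lxml; the XPath variants listed are hypothetical stand-ins for the kinds of inconsistency described, not SPQR's actual rules:

```python
# A minimal sketch of normalising variant EpiDoc (TEI) markup: find an
# inscription's material wherever it was encoded, so it can be re-emitted
# uniformly. Not SPQR's actual code.
from lxml import etree

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

# Hypothetical variants: different corpora nest or attach <material> differently.
MATERIAL_PATHS = [
    ".//tei:support/tei:material",
    ".//tei:supportDesc//tei:material",
    ".//tei:support[@material]",   # material recorded as an attribute
]

def extract_material(xml_path):
    """Return the first material value found, however it was encoded."""
    tree = etree.parse(xml_path)
    for xpath in MATERIAL_PATHS:
        for node in tree.xpath(xpath, namespaces=TEI_NS):
            value = node.get("material") or (node.text or "").strip()
            if value:
                return value
    return None
```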

The visual interface being used by SPQR is Gruff. This uses a straightforward colour coding approach, where the literals are yellow, the objects are grey, and the predicates represented as arrows of different colours, depending on the type of predicate.

SPQR Gruff interface

The talk was followed by a wide-ranging discussion, which mostly centred on the nature of the things to be linked. There seemed to be a high-level consensus that more needed to be done on the terminology behind the objects we are linking: if we are not careful, there is a danger that we will end up trying to represent the whole world (which perhaps would echo the big visions of some early adopters of CRM models a few years ago). As will no doubt be picked up in Charlotte Roueche and Charlotte Tupman's presentation next week (which alas I will not be able to attend), all this comes down to defining units of information. EpiDoc, as a disciplined and rigorous mark-up schema, gives us the basis for this, but there need to be very strict guidelines for its application in any given corpus.

Digital Classicist: Developing a RTI System for Inscription Documentation in Museum Collections and the Field

In the first of this summer's Digital Classicist Seminar Series, Kathryn Piquette and Charles Crowther of Oxford discussed Developing a Reflectance Transformation Imaging (RTI) System for Inscription Documentation in Museum Collections and the Field: Case studies on ancient Egyptian and Classical material. In a well-focused discussion of the activities of their AHRC DEDEFI project of (pretty much) this name, they presented the theory behind RTI and several case studies.

Kathryn began by setting out the limitations of existing approaches to documenting inscribed material. These include first-hand observation, requiring visits to archives, sites, museums etc. Its advantages are that the observer can also handle the object, experiencing texture, weight and so on; much information can be gathered from engaging first hand, but the costs are typically high and the logistics complex. Photography is relatively cheap and easy to disseminate as a surrogate, but the fixed light position one is stuck with often means important features are missed. Squeeze-making overcomes this problem, but you lose any sense of the material and do not get any context. Tracing has similar limitations, with the added risk of other information being filtered out. Likewise, line drawings often miss erasures, tool marks etc., and are on many occasions not based on the original artefact anyway, which risks introducing errors. Digital photography has the advantage of being cheap and plentiful, and video can capture people engaging with objects. Laser scanning resolution is changeable, and some surfaces do not image well. 3D printing is currently in its infancy. The key point is that all such representations are partial, and all impose differing requirements when one comes to analyse and interpret inscribed surfaces. There is therefore a clear need for fuller documentation of such objects.

Shadow stereo has been used by this team in previous projects to analyse wooden Romano-British writing tablets. These tablets were written on wax, the writing leaving tiny scratches in the underlying wood. Because the tablets were often reused, the scratches can be made to reveal multiple writings when photographed in light from many directions. It is possible then to build algorithmic models highlighting transitions from light to shadow, revealing letterforms not visible to the naked eye.

The RTI approach used in the current project was based on 76 lights on the inside of a dome placed over the object. This gives a very high-definition rendering of the object's surface in 3D, exposed consistently by light from every angle. This 'raking light photography' combines multiple captures taken with a 24.5 megapixel camera under different light positions. It gives a sense not only of the object's surface but of its materiality: by selecting different lighting angles, one can pick out tool marks, scrape marks, fingerprints and other tiny alterations to the surface. There are various ways of enhancing the images, each suited to identifying different kinds of feature. Importantly, the process as a whole is learnable by people without detailed knowledge of the algorithms underlying the image processing. Indeed, one advantage of this approach is that it is very quick and easy – 76 images can be taken in around five minutes. At present, the process cannot handle large inscriptions on stone, though highlight RTI allows more flexibility there. In one case study, RTI was used in conjunction with a flatbed scanner, giving better imaging of flat text-bearing objects. The images produced by the team can be viewed using an open-source RTI viewer, with an ingenious add-on developed by Leif Isaksen which allows the user to annotate and bookmark particular sections of images.
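To give a sense of the algorithmic core, here is a minimal sketch of the per-pixel Polynomial Texture Mapping (PTM) fitting that RTI viewers of this kind rely on, assuming numpy; it illustrates the general technique, not the project's actual pipeline:

```python
# A minimal sketch of PTM-style RTI fitting (not the project's actual code).
# Each pixel's brightness under N known light directions is fitted with a
# 6-term biquadratic polynomial; relighting evaluates it for a new direction.
import numpy as np

def ptm_basis(lu, lv):
    """Polynomial Texture Mapping basis for projected light direction (lu, lv)."""
    return np.stack([lu * lu, lv * lv, lu * lv, lu, lv, np.ones_like(lu)], axis=-1)

def fit_ptm(images, light_dirs):
    """images: (N, H, W) captures; light_dirs: (N, 2) projected light vectors.
    Returns (H, W, 6) per-pixel coefficients via least squares."""
    n, h, w = images.shape
    basis = ptm_basis(light_dirs[:, 0], light_dirs[:, 1])    # (N, 6)
    pixels = images.reshape(n, -1)                           # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(basis, pixels, rcond=None)  # (6, H*W)
    return coeffs.T.reshape(h, w, 6)

def relight(coeffs, lu, lv):
    """Render the surface under a new light direction (lu, lv)."""
    return coeffs @ ptm_basis(np.float64(lu), np.float64(lv))
```

Choosing an extreme, raking (lu, lv) in relight is the digital analogue of dragging a lamp around the dome: specular detail and tool marks fall out of the fitted coefficients rather than any single photograph.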

The project has looked at several case studies. Oxford's primary interest has been in inscribed, text-bearing artefacts; Southampton's in archaeological objects. This raises interesting questions about the application of a common technique in different areas: indeed, the good old methodological commons comes to mind. Kathryn and Charles discussed two Egyptian case studies. One was the Protodynastic Battlefield Palette: they showed how tool marks could be elicited from the object's surface, and various making processes inferred. One extremely interesting future approach would be to combine RTI with experimental archaeology: if a skilled and trained person were to create a comparable artefact, one could use RTI to compare the two surfaces. This could give us a deeper understanding of the kinds of experience involved in making an object such as the Battlefield Palette, and base that understanding on a rigorous, quantitative methodology.

It was suggested in the discussion that a YouTube video of the team scanning an artefact with their RTI dome would be a great aid to understanding the process. It struck me, in the light of Kathryn’s opening critique of the limitations of existing documentation, that this implicitly validates the importance of capturing people’s interaction with objects: RTI is another kind of interaction, and needs to be understood accordingly.

Another important question raised was how one cites work such as RTI. Using a screen grab in a journal article surely undermines the whole point. The annotation/bookmark facility would help, especially in online publications, but more thought needs to be given to how one could integrate information on materiality into schema such as EpiDoc. Charlotte Roueche suggested that some tag indicating passages of text that had been read using this method would be valuable. The old question of rights also came up: one joy of a one-year exemplar project is that one does not have to tackle the administrative problems of publishing a whole collection digitally.