Call for members: Major new Institute opens at King’s College London with Getty Foundation support

The Project

The 18-month Institute in Digital Art History is led by King’s College London’s Department of Digital Humanities (DDH) and Department of Classics, in collaboration with HumLab at the University of Umeå, with grant support provided by the Getty Foundation as part of its Digital Art History initiative.

It will convene two international meetings where Members of the Institute will survey, analyse and debate the current state of digital art history, and map out its future research agenda. It will also design and develop a Proof of Concept (PoC) to help deliver this agenda. The source code for this PoC will be made available online, and will form the basis for further discussions, development of research questions and project proposals after the end of the programme.

To achieve these aims we will bring together leading experts in the field to offer a multi-vocal and interdisciplinary perspective on three areas of pressing concern to digital art history:

● Provenance, the meta-information about ancient art objects,

● Geographies, the paths those objects take through time and space, and

● Visualization, the methods used to render art objects and collections in visual media.

Current Digital Humanities (DH) research in this area has a strong focus on Linked Open Data (LOD), and so we will begin our exploration there. Geographically, the first meeting will emphasise the art of the ancient Mediterranean world, an emphasis that will continue in the second meeting, to be held in Athens. The Mediterranean has received much attention from both the Digital Classics and DH communities, and is thus rich in resources and content. The programme will therefore bring together two existing scholarly fields and seek to improve and facilitate dialogue between them.

We will assign Members to groups according to the three areas of focus above. Each group will be tasked with producing a research specification detailing the most important next steps for that part of the field, how current methods can best be employed to take them, and what new research questions the participants see emerging.

The meetings will follow a similar format, with initial participant presentations and introductions followed by collaborative programme development and design activities within the research groups, including scoping of relevant aspects of the PoC. This will be followed by further discussion and collaborative writing which will form the basis of the event’s report. Each day will conclude with a plenary feedback session, where participants will share and discuss short reports on their activities. All of the sessions will be filmed for archival and note-taking purposes, and professional facilitators will assist in the process at various points.

The scholarly outputs, along with the research specifications for the PoC, will provide tangible foci for a robust, vibrant and sustainable research network, comprising the Institute participants as a core but extending across the emerging international and interdisciplinary landscape of digital art history. At the same time, the programme will provide participants with support and space for developing their own academic agendas and profiles. In particular, Members will be encouraged, and offered collegial support, to develop publications, both single- and co-authored, following their own research interests and those related to the Institute.

 

The Project Team

The core team comprises Dr Stuart Dunn (DDH), Professor Graeme Earl (DDH) and Dr Will Wootton (Classics) at King’s College London, and Dr Anna Foka of HumLab, Umeå University.

They are supported by an Advisory Board consisting of international independent experts in the fields of art history, Digital Humanities and LOD. These are: Professor Tula Giannini (Chair; Pratt Institute, New York), Dr Gabriel Bodard (Institute of Classical Studies), Professor Barbara Borg (University of Exeter), Dr Arianna Ciula (King’s Digital Lab), Professor Donna Kurtz (University of Oxford), and Dr Michael Squire (King’s College London).

 

Call for participation
We are now pleased to invite applications to participate as Members in the programme. Applications are invited from art historians and professional curators who (or whose institutions) have a proven and established record in using digital methods, have already committed resources, or have a firm interest in developing their research agendas in art history, archaeology, museum studies, and LOD. You should also be prepared to contribute to the design of the PoC (e.g. providing data or tools, defining requirements), which will be developed in the timeframe of the project by experts at King’s Digital Lab.

Membership is open to advanced doctoral students (provided they can demonstrate close alignment of their thesis with the aims of the programme), Faculty members at any level in all relevant fields, and GLAM curation professionals.

Participation will primarily take the form of attending the Institute’s two meetings:

King’s College London: 3rd–14th September 2018

Swedish Institute at Athens: 1st–12th April 2019

We anticipate offering up to eighteen places on the programme. All travel and accommodation expenses to London and Athens will be covered. Membership is dependent upon commitment to attend both events for the full duration.

Potential applicants are welcome to contact the programme director with any questions: stuart.dunn@kcl.ac.uk.

To apply, please submit a single A4 PDF document set out as follows. Please ensure your application includes your name, email address, institutional affiliation, and street address.


Applicant Statement (ONE page)
This should state what you would bring to the programme, the nature of your current work and involvement in digital art history, and what you believe you could gain as a Member of the Institute. There is no need to indicate which of the three areas you are most interested in (although you may if you wish); we will use your submission to create the groups, considering both complementary expertise and the ability of some members to act as translators between the three areas.

Applicant CV (TWO pages)
Provide a two-page CV, listing your five most relevant publications (including digital resources where applicable).

Institutional support (ONE page)
We are keen for the ideas generated in the programme to be taken up and developed by the community after the period of funding has finished. Therefore, please use this section to provide answers to the following questions relating to your institution and its capacity:

1. Does your institution provide specialist Research Software Development or other IT support for DH/LOD projects?

2. Is there a specialist DH unit or centre?

3. Do you, or your institution, hold or host any relevant data collections, physical collections, or archives?

4. Does your institution have hardware capacity for developing digital projects (e.g. specialist scanning equipment), or digital infrastructure facilities?

5. How will you transfer knowledge, expertise, contacts and tools gained through your participation to your institution?

6. Will your institution a) be able to contribute to the programme in any way, or b) offer you any practical support in developing any research of your own which arises from the programme? If so, give details.

7. What metrics will you apply to evaluate the impact of the Ancient Itineraries programme a) on your own professional activities and b) on your institution?

Selection and timeline
All proposals will be reviewed by the Advisory Board, and members will be selected on the basis of their recommendations.

Please email the documents specified above as a single PDF document to stuart.dunn@kcl.ac.uk by Friday 1st June 2018, 16:00 (British Summer Time). We will be unable to consider applications received after this deadline. Please use the subject line “Ancient Itineraries” in your email.

Applicants will be notified of the outcomes on or before 19th June 2018.

Privacy statement

All data you submit with your application will be stored securely on King’s College London’s electronic systems. It will not be shared, except in strict confidence with Advisory Board members for the purposes of evaluation. Furthermore, your name, contact details and country of residence will be shared, in similar confidence, with the Getty Foundation to ensure compliance with US law and any applicable US sanctions. Further information on KCL’s data protection and compliance policies may be found here: https://www.kcl.ac.uk/terms/privacy.aspx; and information on the Getty Foundation’s privacy policies may be found here: http://www.getty.edu/legal/privacy.html.

Your information will not be used for any other purpose, or shared any further, and will be destroyed when the member selection process is completed.

If you have any queries in relation to how your rights are upheld, please contact us at digitalhumanities@kcl.ac.uk, or KCL’s Information Compliance team at info-compliance@kcl.ac.uk.

Corpse roads

In the last few years, I have been gathering information on early-modern ideas and folklore around so-called “corpse roads”, which date from before such things as metalled transport networks and the Enclosures. When access to consecrated burial grounds was deliberately limited by a Church wanting to preserve its authority (and burial fees), an inevitable consequence was that people had to transport their dead, sometimes over long distances, for interment. A great deal of superstition and “fake news” grew up around some of these routes, for example – as I shall be blogging shortly – the belief that any route taken by a bier party over private land automatically became a public right of way. They seem to have had a particular significance in rural communities in the North West of England, especially Cumbria.

The idea of the corpse road is certainly an old one. In A Midsummer Night’s Dream, Puck soliloquizes: “Now it is the time of night / That the graves, all gaping wide, / Every one lets forth his sprite / In the church-way paths to glide.”

In my view, corpse roads – although undoubtedly a magnet for the eccentric and the off-the-wall – are a testimony to the imaginative power of physical progress through the landscape at crux points in life (and death), and to the kinds of imperatives which drove connections through those landscapes. As Ingold might say, they are a very particular form of “taskscape”. I am interested in why they became important enough, at least to some people, for Shakespeare to write about them.

Here is a *very* early and initial dump of the start and finish points of corpse roads that I’ve been able to identify, mostly in secondary literature. I hope to be able to rectify/georeference each entry more thoroughly as and when time allows.
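In the meantime, for anyone who wants to experiment with the list, here is a minimal Python sketch of the sort of georeferencing I have in mind. The filename and column names (corpse_roads.csv, with route, start_place and end_place columns) are hypothetical, and geopy’s free Nominatim geocoder is just one option; a real workflow would need manual checking of every match.

```python
# Minimal sketch: georeference corpse-road endpoints from a CSV.
# The filename and column names are hypothetical; geopy's Nominatim
# geocoder is one free option among several. Biasing the query towards
# Cumbria is an assumption based on where most routes seem to cluster.
import csv
import time

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="corpse-roads-sketch")

def geocode(place):
    """Return (lat, lon) for a place name, or None if no match is found."""
    location = geolocator.geocode(f"{place}, Cumbria, UK")
    return (location.latitude, location.longitude) if location else None

with open("corpse_roads.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["route"], geocode(row["start_place"]), geocode(row["end_place"]))
        time.sleep(1)  # stay within Nominatim's one-request-per-second limit
```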

 

Research questions, abstract problems – a round table on Citizen Science

I recently participated in a round-table discussion entitled “Impossible Partnerships”, organized by The Cultural Capital Exchange at the Royal Institution, on the theme of Citizen Science; the “impossible partnerships” of the title being those between the academy and the wider public. It is always interesting to attend citizen science events – I get so caught up in the humanities crowdsourcing world (such as it is) that it’s good to revisit the intellectual field that it came from in the first place. This is one of those blog posts whose main aim is to organize my own notes and straighten my own thinking after the event, so don’t read on if you are expecting deep or profound insights.


Crucible of knowledge: the Royal Institution’s famous lecture theatre

Galaxy Zoo of course featured heavily. This remains one of the poster-child citizen science projects, because it gets the basics right. It looks good, it works, it reaches out to build relationships with new communities (including the humanities), and it is particularly good at taking what works and configuring it to function in those new communities. We figured that one of the common factors that keeps it working across different areas is its success in tapping into the intrinsic motivations of people who are interested in the content – citizen scientists are interested in science. There is also an element of altruism involved, giving one’s time and effort for the greater good – but one point I think we agreed on is that it is far, far easier to classify the kinds of task involved than the people undertaking them. This was our rationale in our 2012 scoping study of humanities crowdsourcing.

A key distinction was made between projects which aggregate or process data and those which generate new data. Galaxy Zoo is mainly about taking empirical content and aggregating it, in contrast, say, to a project that seeks to gather public observations of butterfly or bird populations. This could be a really interesting distinction for humanities crowdsourcing too, but one which becomes problematic where one type of question leads to the other. What if content is processed/digitized through transcription (for example), and this seeds ideas which lead to amateur scholars generating blog posts, articles, discussions, ideas, books and so on? Does this sort of thing happen in citizen science? (Genuine question – maybe it does.) This is one of the key distinctions between citizen science and the citizen humanities. The raw material of the former is often natural phenomena – bird populations, raw imagery of galaxies, protein sequences – but in the latter it can be digital material that “citizen humanists” have themselves created from whatever source.

Another key question which came up several times during the afternoon was the nature of science itself, and how citizen science relates to it. A professional scientist will begin an experiment with several possible hypotheses, then test them against the data. Citizen scientists do not necessarily organize their thinking in this way. This raises the question: can the frameworks and research questions of a project be co-produced with public audiences? Or do they have to be determined by a central team of professionals, and farmed out to wider audiences? This is certainly the implication of Jeff Howe’s original framing of crowdsourcing:

“All these companies grew up in the Internet age and were designed to take advantage of the networked world. … [I]t doesn’t matter where the laborers are – they might be down the block, they might be in Indonesia – as long as they are connected to the network.

Technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. … The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing.”

So is it the case that citizen science is about abstract research problems (“are golden finches as common in area X now as they were five years ago?”) rather than concrete research questions (“why has the population of golden finches declined over the last five years?”)?

For me, the main takeaway was our recognition that citizen science and “conventional” science are not, and should not try to be, the same thing, and should not have the same goals. The important thing in citizen science is not to focus on the “conventional” scientific outcomes of good, methodologically sound and peer-reviewable research – that is, at most, an incidental benefit – but on the relationships it creates between professional academic scientists and non-scientists, and how these can help build a more scientifically literate population. The same should go for the citizen humanities. We can all count bird populations, we can all classify galaxies, we can all transcribe handwritten text, but the most profitable goal for citizen science/humanities is a more collaborative social understanding of why doing so matters.

Sourcing GIS data

Where does one get GIS data for teaching purposes? This is the sort of question one might ask on Twitter. However, while, like many, I have learned to overcome, or at least creatively ignore, the constraints of 140 characters, it can’t really be done for a question this broad, or with this many attendant sub-issues. That said, this post was finally edged into existence by a Twitter follow from “Canadian GIS & Geomatics Resources” (@CanadianGIS). So many thanks to them for the unintended prod. The website linked from this account states:

I am sure that almost any geomatics professional would agree that a major part of any GIS are the data sets involved. The data can be in the form of vectors, rasters, aerial photography or statistical tabular data and most often the data component can be very costly or labor intensive.

Too true. And as the university term ends, reviewing the issue from the point of view of teaching seems apposite.

First, of course, students need to know what a shapefile actually is. The shapefile is the building block of GIS: the dataset format in which individual map layers live. Points, lines, polygons: Cartesian geography is what makes the world go round – or at least the digital world, if we accept the oft-quoted statistic that 80% of all online material is in some way georeferenced. I have made various efforts to establish the veracity, or otherwise, of this statistic, and if anyone has any leads, I would be most grateful if you would share them with me by email or, better still, in the comments section here. Surely it can’t be any less than that now, with the emergence of mobile computing and the saturation of the 4G smartphone market. Anyway…
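To make that concrete, here is a minimal sketch of what opening and inspecting a shapefile looks like in code, using the geopandas library; the path is hypothetical, and any downloaded shapefile, such as those from the sources discussed below, would do.

```python
# Minimal sketch: open a shapefile and inspect its contents with geopandas.
# The path is hypothetical; substitute any downloaded shapefile.
import geopandas as gpd

layer = gpd.read_file("data/roads.shp")

print(layer.crs)                 # the coordinate reference system of the layer
print(layer.geom_type.unique())  # Point, LineString and/or Polygon geometries
print(len(layer))                # number of features in the layer
print(layer.head())              # attribute table for the first few features
```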

In my postgraduate course on digital mapping, part of a Digital Humanities MA programme, I have used the Ordnance Survey Open Data resources; Geofabrik, an on-demand batch download service for OpenStreetMap data; Web Feature Service data from Westminster City Council; and continental coastline data from the European Environment Agency. The first two in particular are useful, as they provide different perspectives: central national mapping versus open-source/crowdsourced geodata respectively. But given the expediency required of teaching a module, their main virtues are that they’re free, (fairly) reliable and malleable, and can be delivered straight to the student’s machine or classroom PC (infrastructure problems aside – but that’s a different matter) and uploaded into a package such as QGIS. I also use some shapefiles, specifically point files, that I created myself. Students should also be encouraged to consider how (and where) the data comes from. This seems to me the most important aspect of the geospatial within the Digital Humanities. The data is out there and it can be downloaded, but to understand what it actually *is*, what it actually means, you have to create it. That can mean writing Python scripts to extract toponyms, considering how place is represented in a text, or poring over Google Earth to identify latitude/longitude references for archaeological features.
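As an illustration of the first of those tasks, here is a minimal sketch of toponym extraction using spaCy’s off-the-shelf named-entity recogniser. The input filename is hypothetical, and a real project would need gazetteer look-up and disambiguation on top of this.

```python
# Minimal sketch: pull candidate toponyms out of a text with spaCy.
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

with open("source_text.txt") as f:  # hypothetical input file
    doc = nlp(f.read())

# GPE = geopolitical entities (countries, cities); LOC = other locations.
toponyms = sorted({ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")})
print(toponyms)
```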

This goes to the heart of what it means to create geodata, certainly in the Digital Humanities. Like the Ordnance Survey and Geofabrik offerings, much of the geodata around us on the internet arrives pre-packaged, with all its assumptions hidden from view. Agnieszka Leszczynski, whose excellent work on the distinction between quantitative and qualitative geography I have been re-reading in preparation for various forthcoming writings, calls this a ‘datalogical’ view of the world. Everything is abstracted as computable points, lines and polygons (or rasters). Such data is abstracted from the ‘infological’ view of the world, as understood by the humanities. As Leszczynski puts it: “The conceptual errors and semantic ambiguities of representation in the infological world propagate and assume materiality in the form of bits and bytes”[1]. It is this process of assumption that a good DH module on digital mapping must address.

In the course of this module I have also become aware of important intellectual gaps in this sort of provision. Nowhere, for example, in either the OS or Geofabrik datasets is there information on British public Rights of Way (PROWs). I’m going to need this data later in the summer for my own research on the historical geography of corpse roads (more on this here in the future, I hope). But a bit of Googling turned up the following blog reply from OS, dating from the OS OpenData release in April 2010:

I’ve done some more digging on ROW information. It is the IP of the Local Authorities and currently we have an agreement that allows us to include it in OS Explorer and OS Landranger Maps. Copies of the ‘Definitive Map’ are passed to our Data Collection and Management team where any changes are put into our GIS system in a vector format. These changes get fed through to Cartographic Production who update the ROW information within our raster mapping. Digitising the changes in this way is actually something we’ve not been doing for very long so we don’t have a full coverage in vector format, but it seems the answer to your question is a bit of both! I hope that makes sense![2]

So… teaching GIS in the arcane backstreets of the (digital) spatial humanities still means seeing what is not there due to IP as well as what is.

[1] Leszczynski, Agnieszka. “Quantitative Limits to Qualitative Engagements: GIS, Its Critics, and the Philosophical Divide.” The Professional Geographer 61.3 (2009): 350–365.

[2] https://www.ordnancesurvey.co.uk/blog/2010/04/os-opendata-goes-live/

Reconstruction, visualization and frontier archaeology

Recently, on holiday in the North East, I took in two Roman forts on the frontier of Hadrian’s Wall: Segedunum and Arbeia. Both have stories to tell – narratives about the Roman occupation of Britain – and both are curated in various ways by Tyne and Wear Museums, with ongoing archaeological research being undertaken by the fantastic WallQuest community archaeology project.

The public walkthrough reconstructions at both sites of what the buildings and their contents might have been like pose some interesting questions about the nature of historical/archaeological narratives, and how they can be elaborated. At Segedunum, there is a reconstruction of a bath house. Although the fort itself had such a structure, modern development means that the reconstruction is not in the same place, nor do its foundations relate directly to archaeological evidence. The features of the bath house are drawn from composite analysis of bath houses from throughout the Roman Empire. So what we have here is a narrative, but it is a generic narrative: stitched together, generalized, a mosaic of hundreds of disparate narratives. It can only be very loosely constrained by time (a bath house such as that at Segedunum would have had a lifespan of 250–300 years), and not at all to any one individual: we cannot tell the story of any single Roman officer or auxiliary soldier who used it.

Reconstructed bath house at Segedunum

On the other hand, at Arbeia there are three sets of granaries, the visible foundations all nicely curated and accessible to the public. You can see the stone piers and columns that the granary floors were mounted on, to allow the air movement needed to stop the grain rotting. Why three granaries for a fort of no more than 600 occupants? Because in the third century the Emperor Severus wanted to conquer the nearby Caledonii, and for his push up into Scotland he needed a secure supply base with plenty of grain.

Granaries at Arbeia, reconstructed West Gatehouse in the background

This is an absolute narrative. It is constrained by actual events which are historical and documented. At the same fort is a reconstructed gateway, this time situated on actual foundations. This is an inferential narrative, with some of the gateway’s features again reconstructed from composite evidence from elsewhere (did it have two or three storeys? A shingled roof? We don’t know, but we infer). These narratives are supported by annotated scale models in the gateway structure which we, the paying public (actually Arbeia is free), can view and review at our leisure. This speaks to the nature of empirical, inferential and conjectural reconstruction detailed in a forthcoming book chapter by myself and Kirk Woolford, in a volume of contributions to the EVA conference published by Springer.

Narratives are personal, but they can also be generic. In some ways this speaks back to the concept of the Deep Map (see older posts). The walkthrough reconstruction constitutes, I think, half a Deep Map. It provides a full sensory environment, but it is not ‘scholarly’ in that it does not elucidate what it would have been like for a first- or second-century Roman, or an auxiliary soldier, to experience that environment. Maybe the future of 3D visualization should be to integrate modelling, reconstruction, remediation and interpretation, bringing together available (and reputable) knowledge from whatever source about what that original sensory experience would have been – texts, inscriptions, writing tablets, environmental archaeology, experimental archaeology and so on. In other words, visualization should no longer be seen as a means of making hypothetical visual representations of what the past might have been, but of integrating knowledge about the experience of the environment derived from all five senses, using vision as the medium. It can never be a total representation incorporating all possible experiences under all possible environmental conditions – but then a map can never be a total representation of geography (except, possibly, in the world of Borges’s On Exactitude in Science).

To crowd-source or not to crowd-source

Shortly before Christmas, I was engaged in discussion with a Swedish-based colleague about crowd-sourcing and the humanities. My colleague – an environmental archaeologist – posited that it could be demonstrated that crowd-sourcing was not an effective methodology for his area. Ask randomly selected members of the public to draw a Viking helmet. You would get a series of not dissimilar depictions – a sort of pointed or semi-conical helmet, with horns on either side. But Viking helmets did not have horns.

Having recently published a report for the AHRC on humanities crowd-sourcing – a research review which looked at around 100 publications, and about the same number of projects, activities, blogs and so on – I would say the answer to this apparent fault is: don’t identify Viking helmets by asking the public to draw them. Obvious as this may sound, it is in fact just an obvious example of a complex calculation that needs to be carried out when assessing whether crowd-sourcing is appropriate for any particular problem. Too often, we found in our review, crowd-sourcing was used simply because there was a data resource there, or some infrastructure which would enable it, and not because there was a really important or interesting question that could be posed by engaging the public – although we found honourable exceptions to this. Many such projects contributed to the workshop we held last May, which can be found here. To help identify which sorts of problems would be appropriate, we have developed – or rather, since this will undoubtedly evolve in the future, I should say we are developing – a four-facet typology of humanities crowd-sourcing scenarios. These facets are asset type (the content or data forming the subject of the activity), process type (what is done with that content), task type (how it is done), and output type (the thing, resource or knowledge produced). What we are now working on is identifying – or trying to identify – examples of how these might fit together to form successful crowd-sourcing workflows.
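By way of illustration only, the four facets can be thought of as a simple record structure. Here is a minimal Python sketch; the example facet values are my own invented placeholders, not the report’s controlled vocabularies.

```python
# Minimal sketch of the four-facet typology as a record structure.
# The facet values below are invented examples, not taken from the report.
from dataclasses import dataclass

@dataclass
class CrowdsourcingScenario:
    asset_type: str    # the content or data forming the subject of the activity
    process_type: str  # what is done with that content
    task_type: str     # how it is done
    output_type: str   # the thing, resource or knowledge produced

# One plausible workflow: transcription of digitised manuscripts.
example = CrowdsourcingScenario(
    asset_type="digitised manuscript page images",
    process_type="transcribing",
    task_type="keying text from an image",
    output_type="searchable text corpus",
)
print(example)
```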

To put it in the terms of my colleague’s challenge: an accurate image of a Viking helmet is not an output which can be generated by setting creative tasks to underpin the process of recording and creating content, and the ephemeral and unanchored public conception of what a Viking helmet looks like is not an appropriate asset to draw from. Obvious as this may sound, it hints that a systematic framework for identifying where crowd-sourcing will, and won’t, work is methodologically possible. And this could, potentially, be very valuable as the humanities face increasing interest from well-organized and well-funded citizen science communities such as Zooniverse (which already supports and facilitates several of the early success stories in humanities crowd-sourcing, such as Ancient Lives and Old Weather).

This of course raises a host of other issues. How on earth can peer-review structures cope with this, and should they try to? What motivates the public, and indeed academics, to engage with crowd-sourcing? We hint at some answers. Transparency and documentation are essential for the former; and in the latter area, we found that most projects swiftly develop a core community of very dedicated followers who undertake reams of work – but, possibly like many more conventional collaborations, finding those people, or letting them find you, is not always easy.

The final AHRC report is available: Crowdsourcing-connected-communities.

Last day in Indiana

It’s my last day in Indianapolis. It’s been hard work and I’ve met some great people. I’ve experienced Indianapolis’s hottest day since 1954, and *really* learned to appreciate good air conditioning. Have we, in the last two weeks, defined what a deep map actually is? In a sense we did, but more important than the semantic definition, I reckon, is that we managed to form a set of shared understandings, some fairly intuitive, which articulate (for me at least) how deep mapping differs from other kinds of mapping. It must integrate, and at least some of this integration must involve the linear concepts of what, when and where (but see below). It must reflect experience at the local level as well as data at the macro level, and it must provide a means of scaling between them. It must allow the reader (I hereby renounce the word ‘user’ in relation to deep maps) to navigate the data and derive their own conclusions. Unlike a GIS – ‘so far so Arc’ is a phrase I have co-coined this week – it cannot, and should not attempt to, actualize every possible connection in the data, either implicitly or explicitly. Above all, a deep map must have a topology that enables all these things, and if, in the next six months, the Polis Center can move us towards a schema underlying that topology, then I think our efforts, and theirs, will have been well rewarded.

The bigger questions for me are what this really means for the ‘spatial humanities’, and what the devil the spatial humanities are anyway. They have no Wikipedia entry (so how can they possibly exist?). I have never particularly liked the term ‘spatial turn’, as it implies a setting apart, which I do not think the spatial humanities should be about. The spatial humanities mean nothing if they do not communicate with the rest of the humanities, and beyond. Perhaps – and this is the landscape historian in me talking – it is about the kind of topology that you can extract from objects in the landscape itself. Our group in Week 2 spent a great deal of time thinking about the local and the experiential, and how the latter can be mapped onto the former, in the context of a particular Unitarian ministry in Indianapolis. What are the stories you can get from the landscape, not just tell about it?

Allow me to illustrate the point with war memorials. The city’s primary visitor information site, visitindy.com, states that Indianapolis has more war memorials than any city apart from Washington, D.C. Last Saturday, a crew of us hired a car and visited Columbus, IN, an hour and a half’s drive away. In Columbus there is a memorial to most of America’s wars: Indiana limestone columns arranged in a close eight-by-six grid, with free public access from the outside. Engraved on all sides of the outer columns, except the outward-facing edges, are the names of the fallen, their dates, and the war in which they served. On the inner columns – further in, where you have to explore to find them, giving them the mystique of the inner sanctum – are inscribed the full texts of letters written home by fallen servicemen. In most cases, they seem to have been written just days before the dates of death. The deeply personal nature of these letters provides an emotional connection which, combined with the spatiality of the columns, forms a very specific, and very deliberately told, spatial narrative. It was also a deeply moving experience.

Today, in Indianapolis itself, I was exploring the very lovely canal area, and came across the memorial to the USS Indianapolis. The Indianapolis was a cruiser of the US Navy sunk by Japanese torpedoes in 1945, with heavy loss of life. Particular poignancy is given to the memorial by a narrative of the ship’s history, and the unfolding events leading up to the sinking, inscribed in prose on the monument’s pedestal. I stood there and read it, totally engrossed and as moved by the story as I was by the Columbus memorial.

USS Indianapolis memorial

The point for deep maps: American war memorials tell stories in a very deliberate, designed and methodical way, to deeply powerful effect in the two examples I saw. British war memorials tend not to do this. You get a monument, lists of names of the fallen and the war in question, and perhaps a motto of some sort. An explicit story is not told. This does not make the experience any less moving, but it is based on a shared and implicit communal memory, whose origins are not made explicit in the fabric of the monument. It reflects a subtle difference in how servicemen and women are memorialized, in the formation of the inherently spatial stories that are told in order to remember them.

This is merely one example of the subtle differences which run through any built environment of any period in any place, and they become less subtle as you scale in more and more cultures with progressively weaker ties. Britain and America. Europe, Britain and America. Europe, America and Africa, and so on. We scale out and out, and then we get to the point where the approaches to ‘what’, ‘when’ and ‘where’ – the approaches that we worked on in our group – must be recognised not as universal ways of looking at the world, but as products of our British/American/Australian backgrounds, educations and cultural memories. Thus it will be with any deep map.

How do we explain to the shade of Edward Said that by mapping these narratives we are not automatically claiming ownership of them, however much we might want or try not to? How deep will these deep maps need to go…?