Deep maps in Indy

I am in a very hot and sunny Indianapolis, trying to figure out what is meant by deep mapping, at an NEH Summer Institute at IUPUI hosted by the Polis Center. There follows a very high-level attempt to synthesize some thoughts from the first week.

Deep mapping – we think, although we’ll all probably have changed our minds by next Friday, if not well before – is about representing (or, as I increasingly prefer to think, remediating) the things that the Ordnance Survey would, quite rightly, run a perfectly projected and triangulated mile from mapping at all. Fuzziness. Experience. Emotion. What it means to move through a landscape at a particular time in a particular way. Or, as Ingold might say, to negotiate a taskscape. Communicating these things meaningfully as stories or arguments. There has been lots of fascinating back and forth about this all week, although – and this is the idea, at least – next week we move beyond the purely abstract and grapple with what it means to actually design one.

If we’re to define the meaning we’re hoping to build here, it’s clear that we need to rethink some pretty basic terms. For example, we talk instinctively about ‘reading’ maps, but I have always wondered how well that noun and that verb really go together. We assume that ‘deep mapping’ for the humanities – a concept which will, we assume, be at least partly online – has to stem from GIS, and that a ‘deep map’, whatever we might end up calling it, will be some kind of paradigm shift beyond ‘conventional’ computer mapping. But the ‘depth’ of a map is surely a function of how much knowledge – knowledge rather than information – is added to the base layer, where that information comes from, and how it is structured. The amazing HGIS projects we’ve seen this week give us the framework we need to think in, but the concept of ‘information’ therein should surely be seen as a starting point. The absence of very basic kinds of such information from popular mapping applications has been highlighted this week, and perhaps serves to illustrate the point. In 2008, Mary Spence, President of the British Cartographic Society, argued in a lecture:

Corporate cartographers are demolishing thousands of years of history, not to mention Britain’s remarkable geography, at a stroke by not including them on [GPS] maps which millions of us now use every day. We’re in danger of losing what makes maps so unique, giving us a feel for a place even if we’ve never been there.

To put it another way, are ‘thin maps’ really all that ‘thin’, when they are produced and curated properly according to accepted technical and scholarly standards? Maps are objects of emotion in a way that texts are not (which is not to deny the emotional power of text; it is simply to recognize that it is a different kind of power). Read Mike Parker’s 2009 Map Addict for an affectionate and quirky tour of the emotional power of OS maps (although anyone with archaeological tendencies will have to grit their teeth when he burbles about the mystic power of ley lines and the cosmic significance of the layout of Milton Keynes). According to Spence, a map of somewhere we have never been ties together our own experiences of place, whether absolute (i.e. georeferenced) or abstract, along with our expectations and our needs. If this is true for the lay audiences of, say, the Ordnance Survey, isn’t the vision of a deep map articulated this past week some sort of scholarly equivalent? We can use an OS map to make a guess, an inference or an interpretation (much discussion this week has, directly or indirectly, focused on these three things and their role in scholarly approaches). What we cannot do with an OS map is annotate or embed it with any of these. The defining function of a deep map, for me, is the ability to do exactly that, as well as to structure the outputs in a formal way (RDF is looking really quite promising, I think – if you treat different mapped objects in the subject-predicate-object framework, that overcomes a lot of the problems of linearity and scale that we’ve battled with this week). The different levels of ephemerality that this would mean categorising (or, heaven help us, quantifying) are probably a story for another post, but a deep map should be able to convey the experience of moving through the landscape being described.
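To make that slightly less hand-wavy, here is a minimal sketch of what such a structure might look like, using Python’s rdflib. The deep-map vocabulary (dm:) and every term in it are invented for illustration; only the W3C geo namespace is real:

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import RDFS

# Hypothetical deep-map vocabulary; all dm: terms are invented.
DM = Namespace("http://example.org/deepmap/")
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")

g = Graph()
g.bind("dm", DM)
g.bind("geo", GEO)

# A mapped object: a place on the base layer, with absolute coordinates.
forum = URIRef("http://example.org/deepmap/place/roman-forum")
g.add((forum, RDF.type, DM.Place))
g.add((forum, GEO.lat, Literal(41.8925)))
g.add((forum, GEO.long, Literal(12.4853)))

# An interpretation annotated onto the place as a first-class object,
# carrying its own attribution and a (crudely categorised) ephemerality.
interp = URIRef("http://example.org/deepmap/interpretation/001")
g.add((interp, RDF.type, DM.Interpretation))
g.add((interp, DM.about, forum))
g.add((interp, DM.assertedBy, Literal("A. Scholar")))
g.add((interp, DM.ephemerality, Literal("high")))
g.add((interp, RDFS.comment, Literal(
    "An orator speaking here could be heard across the square.")))

print(g.serialize(format="turtle"))
```

The point of the sketch is simply that the annotation is a subject in its own right, not a property of the base layer, so guesses, inferences and interpretations can be layered, attributed and queried without disturbing the map beneath.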

There are other questions that bringing such a map into the unforgiving world of scholarly publication would undoubtedly entail. Must a map be replicable? Must someone else be able to come along and map the same thing in the same way, or at least according to their own subjective experience(s)? In a live link-up, the UCLA team behind the Roman Forum project demonstrated their work, and argued that the visual is replicable and – relatively easily – publishable, but that other sensory experiences, of course, are not. We saw, for example, a visualisation of how far an orator’s voice could carry. The visualisation looks wonderful, and the quantitative methodology even more so, but to be meaningful as an instrument in the history of Roman oratory, one would have to consider so many subjective variables: the volume of the orator’s voice (of course), the ambient noise, and the local weather conditions (especially wind). There are even less knowable factors, such as how well individuals in the crowd could hear, whether they had any hearing impairments, and so on. This is not to carp – after all, we made (or tried to make) a virtue of addressing and constraining such evidential parameters in the MiPP project, and our outputs certainly looked nothing like as spectacular as UCLA’s virtual Rome – but a deep map must be able to cope with those constraints.
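For concreteness, here is a deliberately naive sketch of the kind of calculation at stake – not UCLA’s methodology, just free-field inverse-square spreading with invented parameter values – which shows how directly those subjective variables drive the result:

```python
def audible_distance(source_db_at_1m: float, ambient_db: float,
                     margin_db: float = 3.0) -> float:
    """Rough free-field estimate of how far a voice carries.

    Assumes pure inverse-square (spherical) spreading: the level drops by
    20*log10(r) dB between 1 m and r metres. It ignores wind, reflection,
    crowd absorption and individual hearing - exactly the unknowables at
    issue above.
    """
    # Audible while the received level exceeds ambient noise by the margin.
    excess = source_db_at_1m - ambient_db - margin_db
    if excess <= 0:
        return 0.0
    return 10 ** (excess / 20)


# Invented values: a trained orator (~80 dB at 1 m) in a quiet (40 dB)
# versus a noisy, crowded (60 dB) forum.
print(audible_distance(80, 40))  # ~70 m
print(audible_distance(80, 60))  # ~7 m
```

Shift the ambient noise by 20 dB and the audible radius collapses by an order of magnitude; every one of those parameters is an evidential judgement, which is precisely why a deep map needs somewhere to record them.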

To stand any chance of mapping them, we need to treat such ephemera as objects, and object-orientation seemed to be where our – or at least my – thinking was going last week. And then roll out the RDF…
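As a minimal sketch of that object-oriented turn (every attribute and predicate name here is invented for illustration): an ephemeral experience modelled as a first-class object, carrying its own ephemerality rating and provenance, ready to be rendered as subject-predicate-object triples:

```python
from dataclasses import dataclass, field


@dataclass
class Ephemeron:
    """A hypothetical 'experience' object for a deep map.

    The point is that fuzzy, experiential material gets the same
    first-class treatment as a georeferenced feature, and so can be
    annotated, queried and eventually expressed as RDF triples.
    """
    label: str
    place_uri: str                     # the mapped object it attaches to
    ephemerality: str = "high"         # 'low' / 'medium' / 'high'
    sources: list[str] = field(default_factory=list)

    def to_triples(self) -> list[tuple[str, str, str]]:
        # Naive triple rendering; a real deep map would use an RDF library.
        subject = "dm:ephemeron/" + self.label.replace(" ", "-").lower()
        triples = [
            (subject, "dm:about", self.place_uri),
            (subject, "dm:ephemerality", self.ephemerality),
        ]
        triples += [(subject, "dm:source", s) for s in self.sources]
        return triples


walk = Ephemeron("dawn walk", "dm:place/roman-forum",
                 sources=["field notes, day 3"])
print(walk.to_triples())
```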

CAA1 – The Digital Humanities and Archaeology Venn Diagram

The question ‘what is the digital humanities?’ is hardly new; nor is discussion of the various epistemologies of which the digital humanities are made. However, discussion of the relationship which archaeology has with the digital humanities – whatever the epistemology of either – has been curiously lacking. Perhaps this is because archaeology has such strong and independent digital traditions, and such a set of well-understood quantitative methods, that close analysis of those traditions – familiar to readers of Humanist, say – seems redundant. However, at the excellent CAA International conference in Southampton last week, there was a dedicated round-table session on the ‘Digital Humanities/Archaeology Venn Diagram’, in which I was a participant. This session highlighted that the situation is far more nuanced and complex than it first seems. As is so often the case with the digital humanities.

A Venn diagram, of course, assumes two or more discrete groups of objects, where some objects carry the attributes of only one group, and others share the attributes of multiple groups. So – assuming that one can draw a Venn loop big enough to contain the digital humanities – what objects do they share with archaeology? As I am not the first to point out, the digital humanities are mainly concerned with methods. This, indeed, was the basis of Short and McCarty’s famous diagram. The full title of CAA – Computer Applications and Quantitative Methods in Archaeology – suggests that a methodological focus is one such object shared by both groups. However, unlike the digital humanities, archaeology is concerned with a well-defined set of questions. Most, if not all, of these questions derive from ‘what happened in the past?’. Invariably the answers lie, in turn, in a certain class of material; indeed, we refer to this class collectively as ‘material culture’. Digital methods are a means to the end of getting at the knowledge that comes from interpreting material culture.

The digital humanities have a much broader shared heritage which, as well as being methodological, is also primarily textual. This is illustrated by the fact that the main print publication in the field is called Literary and Linguistic Computing. It is not, I think, insignificant – as an indication of how things have moved on – that a much more recently founded journal (2007) has the less content-specific title Digital Humanities Quarterly. This, I suspect, is related to the reason why digitisation so often falls between the cracks in the priorities of funding agencies: there is a perception that the world of printed text is so vast that trying to add to the corpus incrementally would be like painting the Forth Bridge with a toothbrush (although this doesn’t affect my general view that the biggest enemy of mass digitisation today is not FEC or public spending cuts, but the Mauer im Kopf – the ‘wall in the head’ – formed by notions of data ownership and IPR). The digital humanities are facing a tension, as they always have, between the variable availability of digital material and the broad access to content that the word ‘humanities’ implies for anything ported over to the ‘digital’. As Stuart Jeffrey’s talk in the session made clear, the questions facing archaeology are more about what data archaeologists throw away: the emergence of Twitter, for example, gives an illusion of ephemerality, but every tweet adds to the increasing cloud of noise on the internet, and those charged with preserving the archaeological record in digital form must decide where the noise ends and the record begins.

There is also the question of what digital methods *do* to our data. Most scholars who call themselves ‘digital humanists’ would reject the notion that textual analysis which begins with semantic and/or stylometric mark-up is a purely quantitative exercise; they would argue that qualitative aspects of reading and analysis arise from, and challenge, the additional knowledge which an expert imparts to a text in the course of encoding it. As a baseline, however, this is exactly the kind of quantitative reading of primary material which archaeology – going back to the early 1990s – characterized as reductionist and positivist. Outside the shared zone of the Venn diagram, then, must sit the notions of positivism and reductionism: they present fundamentally different challenges to archaeological material than they do to other kinds of primary resource – certainly including text, but also, I suspect, other kinds of ‘humanist’ material as well.
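A toy example of that baseline (the markup is invented shorthand, not TEI): once an encoder has tagged a text, the crudest purely quantitative reading reduces the expert’s interpretive work to frequency counts:

```python
import re
from collections import Counter

# Invented inline markup: an editor has tagged places and persons.
text = ("<place>Roma</place> was where <person>Cicero</person> spoke; "
        "<person>Cicero</person> later fled to <place>Formiae</place>.")

# The purely quantitative baseline: reduce the encoding to counts.
tags = re.findall(r"<(place|person)>(.*?)</\1>", text)
print(Counter(tag for tag, _ in tags))      # Counter({'place': 2, 'person': 2})
print(Counter(value for _, value in tags))  # 'Cicero' appears twice

# But each tag is itself an interpretive act by the encoder, which is
# where the qualitative reading re-enters.
```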

A final point which emerged from the session concerns the disciplinary nature(s) of archaeology and the digital humanities themselves. I would like to ask why the former is usually expressed as a singular noun whereas the latter is a plural. The plurality of ‘the humanities’ is taken as implicit. It conjures up notions of a holistic liberal arts education in the human condition, taking in the fruits of all the arts and sciences in which humankind has excelled over the centuries. But some humanities are surely more digital than others. Some branches of learning, such as corpus linguistics, lend themselves to quantitative analysis of their material. Others tend towards the qualitative, and need to be prefixed by correspondingly different kinds of ‘digital’. Others still are more interpretive, with their practitioners actively resisting ‘number crunching’. Therefore, instead of being satisfied with ‘The Digital Humanities’ as an awkward collective noun, maybe we could free ourselves of the restrictions of nomenclature by recognizing that we can’t impose homogeneity – and nor should we try to. Maybe we could even extend this logic and start thinking in terms of ‘digital archaeologies’: branches of archaeology which require (e.g.) archiving, communication, the semantic web, UGC and so on, and some which don’t require any of these. I don’t doubt that the richness and variety of the conference last week is the strongest argument possible for this.