Tales of many places: Data Infrastructure for Named Entities

The use of computational methods for ancient world geography is still very much dominated by the URI-based gazetteer. These powerful and flexible reference lists, trail-blazed by projects such as Pleiades and Pelagios, allow resources to be linked by the spatial referents they share. However, while computers love URIs unconditionally, their relationship with place is more ambivalent: a simmering critical tension which has given rise to what we call the Spatial Humanities. This tension between the ways humanists see place and the way computers deal with it has highlighted important geo-philosophical principles for the study of the ancient world. For me, one of the most important of these is the principle that places as entities which exist in some form of human discourse, such as text, must be separated from places as locations which can be situated within the (modern) framework of latitude and longitude. Gazetteers allow us to do this, which is why they are so important.

My 2017 kicked off with a meeting in a snowy Leipzig (see above), Digital Infrastructure for Named Entities Data, which sought to further problematize the use of these computational methods to support the investigation of past place. As might be expected of an event driven by Pelagios, the use of URI-based gazetteers featured heavily. The Pelagios Commons was presented by the event’s organizer, Chiara Palladino, as both a community and an infrastructure. It centres on the general concept of “place”, and on clusters of material which share the same properties. Pelagios may be seen, Chiara said, as the “connecting structure behind the system”, aiming at a decentralized and federated approach to provide maps which combine geographical, chronological and biographical data. The event’s exploration of this key, overarching concept highlighted three main issues:

  1. Hodological views of past space

Ancient geographies should be seen in the context of hodological space – as pathways through the world, not points on top of it. Hodology, a concept discussed by several speakers, views space from the perspective of experience and mobility. Hodological space concerns the tension between intent, possibility, and real (embodied) experience. It is frequently bidimensional, as evidenced in the example given by Sergio Brilliante of western Crete in the Periplus (mariner’s account) of Pseudo-Skylax, which displayed the best route for travel, not the cartographically optimal one. I was struck by the modern parallel of the WWII Cretan “runner”, George Psychoundakis, who, in his riveting account of his role in the resistance in Crete, measured the distances over which his wartime missions took him on foot by the number of cigarettes he smoked on the journey.

It was noted that in Arabic scripts, geographic areas are generally not measured, except for the purposes of agriculture. A hodological approach was described as a counterpoint to “scientific method” in geography: one can frame geographic accuracy either in terms of “accurate” Cartesian maps, or as the consistent application of geo criteria.

  2. Name neutrality

Like any form of humanistic space, hodological space is never neutral. Place references in humanistic discourse are often the result of multivocal, multi-authorial and partial accounts, and the workshop placed a heavy emphasis on this. Many surviving Classical texts were written by Greek or Athenian authors, so there is a strong Athenocentrism and Graecocentrism to them. Non-Greeks tend to be “hidden”. This seemed to me somewhat reminiscent of the Mercator projection (on which most modern Web cartography relies), which “shrinks” countries at lower latitudes and exaggerates those at higher latitudes, thus visually privileging the developed world at the expense of developing countries (who could forget the scene in The West Wing when the Cartographers for Social Equality regale CJ Cregg on the subject). Similarly, toponyms are not neutral, a problem which the separation of platial concept and platial location can help address. Our own Heritage Gazetteer of Cyprus is attempting to do this through the application of “attestations” to agnostic geographic entities, an approach also being used by Sinai Rusinek in her Hebrew gazetteer. Similarly, Thomas Carlson described the Syriaca.org gazetteer, which links cultural heritage to texts in the Syriac language. Carlson noted that names are a linguistic strategy, not absolute entities. The nature of names means that disambiguation does not work consistently. Even an expert reader might not be able to determine what exactly a toponym refers to. While many ancient world gazetteers rely on URIs, a URI can never simply stand in for a linguistic name. Context-free URIs, which the gazetteer community has long relied on, are no longer sufficient to represent non-neutral humanistic place.
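
To make the separation concrete, here is a minimal sketch in Python of how a gazetteer record can keep the platial concept, its attested names, and any modern coordinates as three distinct things. The URI, names, sources and coordinates are entirely illustrative; this is not the actual Heritage Gazetteer of Cyprus (or Pelagios) schema.

```python
# A sketch only: illustrative values, not a real gazetteer record.
place = {
    "id": "https://example.org/place/0001",   # stable URI for the platial concept itself
    "attestations": [
        # each name is tied to the source in which it is attested,
        # rather than being treated as an absolute, neutral label
        {"name": "Lapethos", "language": "grc", "source": "illustrative ancient source"},
        {"name": "Lapithos", "language": "en",  "source": "illustrative modern survey"},
    ],
    "locations": [
        # coordinates, where they exist at all, are just one more attestation
        {"lat": 35.33, "lon": 33.17, "source": "illustrative modern map", "certainty": "approximate"},
    ],
}
```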

  3. Ontological (mis)alignment

Finally, a point well made by Maurizio Lana was that geographical ontologies must be built bottom-up if they are to be truly representative. In his presentation he described the Geolat project, which deals with the use of spatial ontologies and again frames names as cultural patterns. There is a driving force which pulls readers towards names, towards what is easily identifiable; it is therefore necessary to separate the study of entities from their naming. This means that an ontology developed for one purpose might not be suitable for others. For example, in the Heritage Gazetteer of Cyprus we make use of GeoNames as a means of locating archaeological entities, but the GeoNames Feature Type list is not nearly detailed or granular enough to describe adequately the different kinds of features which exist in the gazetteer. Asking where geo-ontologies have come from, and why they do not align, can therefore lead to very interesting conclusions about the nature of historical spatial structures.
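
A hedged illustration of that misalignment: the GeoNames feature codes below are real (ANS is “archaeological/prehistoric site”, RUIN is “ruin(s)”, MNMT is “monument”), but the finer-grained project-side types are hypothetical stand-ins for the kind of distinctions a heritage gazetteer actually needs, not the HGC’s real type list.

```python
# Sketch of the crosswalk a project-specific ontology ends up needing:
# one coarse GeoNames code has to cover many locally meaningful feature types.
geonames_to_project = {
    "ANS":  ["necropolis", "sanctuary", "rural shrine", "quarry"],   # hypothetical project types
    "RUIN": ["abandoned village", "collapsed church"],
    "MNMT": ["funerary monument", "inscription findspot"],
}

def project_types_for(geonames_code):
    """Return the finer-grained project types a single GeoNames code has to cover."""
    return geonames_to_project.get(geonames_code, [])

print(project_types_for("ANS"))
```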

As so often, there was a great background discussion via Twitter with colleagues who were not physically present, which I have captured as a raw Storify. Among the most engaging of these discussions was an exchange as to whether a place has to have a name, or whether place acts as a conceptual container for events (in which case, what are they?). My previous belief in the former position was severely tested by this exchange, and the papers which touched on hodological views of the past reinforced the challenge. I think I am now a follower of the latter view. Thank you to those Twitter friends for this; you know who you are.

Talking to ourselves: Crowdsourcing, Boaty McBoatface and Brexit

Back in April, I gave a talk at a symposium in Maryland entitled Finding New Knowledge: Archival Records in the Age of Big Data, called “Of what are they a source? The Crowd as Authors, Observers and Meaning-Makers”. In this talk I made the point that 2016 marked ten years since Jeff Howe coined the term “crowdsourcing”, as a pastiche of “outsourcing”, in his now-famous Wired piece. I also talked about the saga of “Boaty McBoatface”, then making headlines in the UK. If you recall, Boaty McBoatface was the winner, with over 12,000 votes, of the Natural Environment Research Council’s open-ended appeal to “the crowd” to suggest names for its new £200m polar research ship, and to vote on the suggestions. I asked whether the episode had anything to tell us about where crowdsourcing had gone in its first ten years. Well, we had a good titter at poor old NERC’s expense (although in fairness I did point out that, in a way, it was wildly successful as a crowdsourcing exercise – surely global awareness of NERC’s essential work in climatology and polar research has never been higher). In my talk I suggested the Boaty McBoatface episode was emblematic of crowdsourcing in the hyper-networked age of social media. The crowdsourcing of 2006 was based, yes, on networks, enabled by the emerging ubiquity of the World Wide Web, but it was a model where “producers” – companies with T-shirts to design (Howe’s example), astrophysicists with galaxy images to classify (the Zooniverse poster child of citizen science), or users of Amazon Mechanical Turk – put content online and entreated “the crowd” to do something with it. This is interactivity at a fairly basic level. But the 2016 level of web interactivity is a completely different ball game, and it is skewing attitudes to expertise and professionalism in unexpected and unsettling ways.

The relationship between citizen science (or academic crowdsourcing) and “The Wisdom of Crowds” has always been a nebulous one. The earlier iterations of Transcribe Bentham, for example, or Old Weather, are not so much exercises in crowd wisdom as in “crowd intelligence” – the execution of intelligent tasks that a computer could not undertake. These activities (and the numerous others I examined with Mark Hedges in our AHRC Crowd-Sourcing Scoping Survey four years ago) all involve intelligent decision making, even if it is simply an intelligent decision as to how a particular word in Bentham’s papers should be transcribed. The decisions are defined and, to differing degrees, constrained by the input and oversight of expert project members, which give context and structure to those intelligent decisions: a recent set of interviews we have conducted with crowdsourcing projects has consistently stressed the centrality of a co-productive relationship between professional project staff and non-professional project participants (“volunpeers”, to use the rather wonderful terminology of the Smithsonian Transcription Center’s initiative).

However, events since April have put the relationship between “the crowd” and “the expert” onto the front pages on a fairly regular basis. Four months ago, the United Kingdom voted by the small but decisive margin of 51.9% to 48.1% to leave the European Union. The “Wisdom of [the] Crowd” in making this decision informed much of the debate in the run-up to the vote, with the merits of “crowd wisdom” versus “expert wisdom” being a key theme. Michael Gove, a politician who turned out to be too treacherous even for a Conservative party leadership election, famously declared that “Britain has had enough of experts”. It is a theme that has persisted since the vote, placing the qualification obtained from the act of representing “ordinary people” through election directly over, say, the economic expertise of the Governor of the Bank of England.

Is this fault line between the expert and the crowd real, a social division negotiated by successful academic crowdsourcing projects, or is it merely a conceit of divisive political rhetoric? Essentially, this is a question of who “produces” wisdom, who “consumes” it, and in which direction the cognitive processes which lead to decision making flow (and which way they should flow). This highlights the nebulous and inexact definition of “the crowd”. It worked pretty well ten years ago when Howe wrote his article, and translated easily enough into the “crowd intelligence” paradigm of the late 2000s and early academic crowdsourcing. In those earlier days of Web 2.0, it was still possible to make at least a scalar distinction between producers and consumers, between the crowd and the crowdsourcer (or the outsourcer and the organization outsourced to, to keep with Howe’s metaphor), even though the role of the user as both creator and consumer of content was changing (2006 was, after all, also the year in which Facebook and Twitter launched). But how about today? This is a question raised by a recent data analysis of Brexit by the Economist. In this survey of voters’ opinions, it emerges that over 80% of Leave voters stated that they had “more faith in the wisdom of ordinary people than the opinions of experts”. I find the wording of this question fascinating, if not a little loaded – after all, is it not reasonable to place one’s faith in any kind of “wisdom” rather than in an “opinion”? But the implicit connection between a generally held belief and (crowd) wisdom is antithetical to independent decision making, and independence is crucial to any argument that “crowd wisdom” leads to better decisions – such as leaving the EU. In his 2004 book, The Wisdom of Crowds: Why the Many Are Smarter Than The Few, James Surowiecki talks of “information cascades” as a threat to good crowd decisions. In an information cascade, people rely on the ungrounded opinions of others who have gone before: the more opinions, the more ongoing, self-replicating reinforcement. Surowiecki says:

Independence is important to intelligent decision making for two reasons. First, it keep (sic) the mistakes that people make from becoming correlated … [o]ne of the quickest ways to make people’s judgements systematically biased is to make them dependent on each other for information. Second, independent individuals are more likely to have new information rather than the same old data everyone is already familiar with.

According to the Economist’s data, the Brexit vote certainly has some of the characteristics of an information cascade as described by Surowiecki: many of those polled who voted that way did so at least in part because of their faith in the “wisdom of ordinary people”. This is the same self-replicating logic of the NERC boat-naming competition which led to Boaty McBoatface, and a product of the kind of closed-loop thinking which social media represents. Five years ago, the New Scientist reported a very similar phenomenon with different kinds of hashtags – depending on the kind of community involved, some (#TeaParty in their example) develop great traction among distinct groups of mutual followers, with individuals tweeting to one another, whereas others (#OccupyWallStreet in this case) attract much greater engagement from those not already engaged. It is a pattern that comes up again and again, and Brexit is surely a harbinger of new ways in which democracy works.

The Brexit vote certainly embodies the information cascade, a key feature which, Surowiecki would have us believe, undermines the Wisdom of Crowds as a means of making “good” decisions. There may be those who say that to argue this is to argue against democracy, that there are no “good” or “bad” decisions, only “democratic” ones. That is completely true of course; and not for a moment here do I question the democratic validity of the Brexit decision itself. I also happen to believe that millions of Leave voters are decent, intelligent, honourable people who genuinely voted for what, in their considered opinion, was best for the country. But since the Goves of the world made a point and a virtue of placing the Leave case in opposition to the “opinions of experts”, it becomes legitimate to ask questions about the cognitive processes which result from doing so. And the contrast between this divisive rhetoric and the constructive, collaborative relationships between experts and non-experts evident in academic crowdsourcing could not be greater.

But that in turn makes one ask how useful the label “expert” really is. What, in the rhetoric of Gove, Davies and the rest, actually consigns any individual person to this reviled category? Is it just anyone who works in a university or other professional organization? Who is and who is not an expert is a matter of circumstance and perspective, and it shifts and changes all the time. Those academic crowdsourcing projects understand that, which is why they are so successful. If only politics could take the lesson.

 

Quantitative, Qualitative, Digital: Research Methods and DH

This summer, there was an extensive discussion on the Humanist mailing list about the form and nature of research methods in digital humanities. This matters, as it speaks in a fundamental way to a question whose very asking defines Digital Humanities as a discipline: when does the development and use of a tool become a method, or a methodology? The thoughts and responses this thread provoked are testament to the importance of this question. While this post does not aim to offer a complete digest of the thread, I wanted to highlight a couple of key points that emerged from it. A key theme surfaced in one exchange, which concerned the point at which human interpretation enters any research activity that employs digital tools. Should this be the creation of tools, the design of those tools, the adding of metadata, the design of metadata, and so on? If one is creating a set of metadata records relating to a painting with reference to “Charles I” (ran an example given by Dominic Oldman), the computer would not “understand” the meaning of any information provided by the user, and any subsequent online aggregation would be similarly knowledge-agnostic.

In other words, where should human knowledge in the Digital Humanities lie? In the tool, or in the data, or both?

Whatever the answer, the key aspect is the point at which a convention in the use of a particular tool becomes a method. In a posting to the thread on 25th July, Willard McCarty stated:

The divergence is over the tendency of ‘method’ to become something fixed. (Consider, for example, “I have a method for doing that.” Contrast “What if I try doing this?”).

“Fixedness” is essential, and it implies some form of critically-grounded consensus among those using the method in question. This is perhaps easier to see in the social sciences than it is in the [Digital] humanities. For example, how would a classicist, or an historian, or a literature scholar approaching manuscripts through the method of close reading present and describe that method in the appropriate section of a paper? How would this differ from, say, the equivalent section in a paper by a social scientist using grounded theory to approach a set of interviews? There may be no differentiation in the rigour or quality of the research, but one suspects the latter would have a far greater consensus – and body of methodological literature – to draw upon to describe grounded theory than the former would to describe close reading.

Many discussions on this subject remain content-focused. What “content” means has itself assumed a broader aspect: whereas content in the DH may once have meant digitized texts, images and manuscripts, surely now it also includes web content such as tweets, transient social media, and blog posts such as this one. It is essential to continue to address the DH research life-cycle as based on content, but we still need to tackle methodology explicitly (emphasis deliberate), in both its definition and its epistemology, and as defined by the presence of fixity, as noted by McCarty. “Methodological pluralism”, the key theme of the thread on Humanist this summer, is great, but for there to be pluralism, there must first be singularity. As noted, the social sciences have this in a very grounded way. I have always argued that the very terms “quantitative” and “qualitative” are understood, shared, written about and, ultimately, used in a much more systematic way in the social sciences than in the (digital) humanities, where they are often taken to express a simple distinction between “something that can be computed versus something that cannot”.

I am not saying this is not a useful distinction, but surely the Humanist thread shows that the DH should at least deepen the distinction to mean “something which can be understood by a computer versus something that cannot”.

I would like to pose three further questions on the topic:

1) how are “technological approaches” defined in DH – e.g. the use of a tool, the use of a suite of tools, the composite use of a generic set of digital applications?

2) what does a “technological approach” employing one or more tools enable us to do?

3) how is what we do with technology a) replicable and b) documentable?

Sourcing GIS data

Where does one get GIS data for teaching purposes? This is the sort of question one might ask on Twitter. However, while, like many, I have learned to overcome, or at least creatively ignore, the constraints of 140 characters, it can’t really be done for a question this broad, or with as many attendant sub-issues. That said, this post was finally edged into existence by a Twitter follow from “Canadian GIS & Geomatics Resources” (@CanadianGIS), so many thanks to them for the unintended prod. The website linked from this account states:

I am sure that almost any geomatics professional would agree that a major part of any GIS are the data sets involved. The data can be in the form of vectors, rasters, aerial photography or statistical tabular data and most often the data component can be very costly or labor intensive.

Too true. And as the university term ends, reviewing the issue from the point of view of teaching seems apposite.

First, of course, students need to know what a shapefile actually is. The shapefile is a basic building block of GIS: the dataset in which an individual map layer lives. Points, lines, polygons: Cartesian geography is what makes the world go round – or at least the digital world, if we accept the oft-quoted statistic that 80% of all online material is in some way georeferenced. I have made various efforts to establish the veracity or otherwise of this statistic, and if anyone has any leads, I would be most grateful if you would share them with me by email or, better still, in the comments section here. Surely it can’t be any less than that now, with the emergence of mobile computing and the saturation of the 4G smartphone market. Anyway…
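
For teaching purposes, here is a minimal sketch (assuming the geopandas and shapely libraries are installed; the names, coordinates and file name are purely illustrative) of the three basic vector geometry types, and of how a point layer gets written out as a shapefile ready for QGIS:

```python
# A minimal teaching sketch; illustrative values only.
import geopandas as gpd
from shapely.geometry import Point, LineString, Polygon

# the three basic vector geometry types
pt = Point(33.17, 35.33)                                   # a single location (lon, lat)
ln = LineString([(33.17, 35.33), (33.20, 35.30)])          # a path between locations
pg = Polygon([(33.1, 35.3), (33.2, 35.3), (33.2, 35.4)])   # an enclosed area

# a shapefile layer holds one geometry type, plus attributes for each feature
points = gpd.GeoDataFrame(
    {"name": ["site A", "site B"]},
    geometry=[Point(33.17, 35.33), Point(33.20, 35.30)],
    crs="EPSG:4326",  # plain latitude/longitude (WGS84)
)
points.to_file("sites.shp")  # ready to load into QGIS
```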

In my postgraduate module on digital mapping, part of a Digital Humanities MA programme, I have used the Ordnance Survey Open Data resources; Geofabrik, an on-demand batch download service for OpenStreetMap data; Web Feature Service data from Westminster City Council; and continental coastline data from the European Environment Agency. The first two in particular are useful, as they provide contrasting perspectives: central national mapping versus open-source/crowdsourced geodata. But given the expediency required in teaching a module, their main virtues are that they are free, (fairly) reliable and malleable, and that they can be delivered straight to the student’s machine or classroom PC (infrastructure problems aside – but that’s a different matter) and loaded into a package such as QGIS. But I also use some shapefiles, specifically point files, which I created myself. Students should also be encouraged to consider how, and from where, the data comes. This seems to me the most important aspect of geospatial work within the Digital Humanities. This data is out there and it can be downloaded, but to understand what it actually *is*, what it actually means, you have to create it. That can mean writing Python scripts to extract toponyms, considering how place is represented in a text, or poring over Google Earth to identify latitude/longitude references for archaeological features.
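
As a rough illustration of that last point, the kind of small Python exercise I have in mind might look like the sketch below. It is deliberately naive: the file names and the hand-checked coordinates are hypothetical, and the capital-letter heuristic is exactly the sort of assumption students should be asked to question.

```python
# A deliberately naive sketch of "make your own geodata": pull candidate
# toponyms out of a text, then pair them with coordinates the student has
# looked up by hand. File names and coordinates are illustrative only.
import csv
import re

with open("travel_account.txt", encoding="utf-8") as f:
    text = f.read()

# crude heuristic: capitalised words are *candidate* place names,
# every one of which needs human checking
candidates = sorted(set(re.findall(r"\b[A-Z][a-z]+\b", text)))

# filled in by the student, e.g. from Google Earth or a printed map
hand_checked = {
    "Kyrenia": (35.34, 33.32),
}

with open("toponyms.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["name", "lat", "lon"])
    for name in candidates:
        if name in hand_checked:
            writer.writerow([name, *hand_checked[name]])
```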

This goes to the heart of what it means to create geodata, certainly in the Digital Humanities. Like the Ordnance Survey and Geofabrik data, much of the geodata around us on the internet arrives pre-packaged and with all its assumptions hidden from view. Agnieszka Leszczynski, whose excellent work on the distinction between quantitative and qualitative geography I have been re-reading in preparation for various forthcoming writings, calls this a ‘datalogical’ view of the world. Everything is abstracted as computable points, lines and polygons (or rasters). Such data is abstracted from the ‘infological’ view of the world, as understood by the humanities. As Leszczynski puts it: “The conceptual errors and semantic ambiguities of representation in the infological world propagate and assume materiality in the form of bits and bytes”[1]. It is this process of assumption that a good DH module on digital mapping must address.

In the course of this module I have also become aware of important intellectual gaps in this sort of provision. Nowhere, for example, in either the OS or Geofabrik datasets is there information on British public Rights of Way (PROWs). I’m going to need this data later in the summer for my own research on the historical geography of corpse roads (more here in the future, I hope). But a bit of Googling turned up the following blog reply from OS at the time of the OS data release in April 2010:

I’ve done some more digging on ROW information. It is the IP of the Local Authorities and currently we have an agreement that allows us to to include it in OS Explorer and OS Landranger Maps. Copies of the ‘Definitive Map’ are passed to our Data Collection and Management team where any changes are put into our GIS system in a vector format. These changes get fed through to Cartographic Production who update the ROW information within our raster mapping. Digitising the changes in this way is actually something we’ve not been doing for very long so we don’t have a full coverage in vector format, but it seems the answer to your question is a bit of both! I hope that makes sense![2]

So… teaching GIS in the arcane backstreets of the (digital) spatial humanities still means seeing what is not there due to IP as well as what is.

[1] Leszczynski, Agnieszka. “Quantitative Limits to Qualitative Engagements: GIS, Its Critics, and the Philosophical Divide.” The Professional Geographer 61.3 (2009): 350-365.

[2] https://www.ordnancesurvey.co.uk/blog/2010/04/os-opendata-goes-live/

Question: (how) do we map disappeared places?

A while ago I asked Twitter if there was a name for a long period of inactivity on blogs or social media. Erik Champion came up with some nice suggestions, which raise questions about whether blogging represents the presence or the absence of ‘loafing’; another reply had a certain elegant simplicity. Anyway, having been either ‘living’ or ‘loafing’ a lot these last few months, this is my first post since February.

I want to ask another question, but 140 characters just won’t cut it for this one. How does one represent, in a gazetteer, or in any other kind of database or GIS for that matter, a place which no longer exists? Take the example of ‘Mikro Kaimeni’, a tiny volcanic island in the Santorini archipelago mapped and published by Thomas Graves in his 1850 military survey of the Aegean:

[Image: Mikro Kaimeni as shown on Graves’s 1850 survey]

Some sixteen years after this map was made, Santorini erupted and Mikro Kaimeni merged with the large central island, Nea Kameni:

[Image: the same area on a modern OpenStreetMap rendering of the Santorini caldera]

Can such places be hermeneutic objects by virtue of the fact that they are represented in the human record (in this case Graves’s map), even though they no longer exist as spatial footprints on the earth’s surface? I suppose they have to be. The same could go for fictional places (Middle Earth, Gotham City etc.). What kind of representational issues does this create for mapping in the humanities more generally?
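
One possible answer (offered only as a sketch under assumptions of my own, not a settled standard) is to treat the spatial footprint, like the name, as an attestation with a temporal extent, so that the entity survives the disappearance of its geometry. The URI and field names below are hypothetical.

```python
# A sketch of a gazetteer record for a place that no longer exists: geometry,
# like a name, is just an attestation with a temporal extent.
mikro_kaimeni = {
    "id": "https://example.org/place/mikro-kaimeni",   # hypothetical URI
    "attestations": [
        {"name": "Mikro Kaimeni", "source": "Graves, survey of the Aegean, 1850"},
    ],
    "footprints": [
        {
            "geometry": None,       # a digitised outline from the 1850 chart would go here
            "valid_from": None,     # date of emergence left open
            "valid_until": "1866",  # the eruption some sixteen years after Graves's map
        }
    ],
}
```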

Digital Destinations: What to do with a digital MA

King’s Careers & Employability gathers statistics on graduate employment destinations for the Higher Education Statistics Agency (HESA).  Such data is available for the Department of Digital Humanities’ cohorts for the three academic years between 2010/11 and 2012/13, that is to say graduates of the MA Digital Humanities, the MA Digital Asset and Media Management and the MA Digital Culture and Society of those years. This information, which includes the sectors and organizations that alumni enter, and their job titles, is gathered from telephone interviews and online surveys six months after their graduation. Of those who graduated in 2012/13, 93.8% were in full time work, with the remainder undertaking further study in some form. 38.4% of those approached did not reply, or refused to provide answers. A certain health warning must therefore be attached to the information currently available; and in the last couple of years the numbers on all three programmes have grown considerably, so the sample size is small compared to the numbers of students currently taking the degrees. But in surveying the data that we do have, it is possible to make some preliminary observations.

Firstly, the good news is that all of our graduates from 2012/13 who responded to the survey were in employment, or undertaking further studies, within those six months. Across the whole three-year period, MA DAMM graduates entered the digital asset management profession via corporations including EMAP, and the university library sector (Goldsmiths College). They also entered managerial roles at large organizations including Coca-Cola and the Wellcome Trust. Digital media organizations feature strongly in MA DCS students’ destinations, with employers including NBC, Saatchi and Saatchi and LexisNexis UK, and roles including design, social media strategy and technical journalism. Librarianship is also represented, with one student becoming an Assistant Librarian at a very high-profile university library. Others appear to have gone straight into quite senior roles, including a Director of Marketing, PR and Investments at an international educational organization, a Senior Strategy Analyst at a major international media group, and a Senior Project Manager at a London e-consultancy firm. One nascent trend that can be detected is that graduates of MA DH seem more likely to stay in the research sector. Several HE institutions feature in MA DH destinations, including Queen Mary, the University of Oslo, Valencia University, the Open University and the University of London, as well as King’s itself; although graduates entering these organizations are doing so in technical and practical roles, such as analysts and e-learning professionals, rather than as higher degree research students. A US Office of the State Archaeologist, Waterstones and Oxford University Press also feature, reflecting (perhaps) MA DH’s strengths in publishing and research communication. Many of the roles which MA DH graduates enter are specialized, for example Data Engineer, Conservator, Search Engine Evaluator, although more junior managerial jobs also figure.

As noted, the figures on which these observations are based must be treated with some caution; and doubtless as data for 2013/14 and beyond become available, clearer trends will emerge across the three MA programmes. Currently, there is a range of destinations to which our graduates go, spanning the private and research sectors, and there is much overlap in the types of organizations for which graduates from all three programmes work and the roles they obtain. However, two broad conclusions can be drawn. Firstly, all three programmes offer a range of skills based on a critical understanding of digital theory and practice which can be transferred to multiple kinds of organization and role. Secondly, our record on full employment shows that there is growing demand for these skills, and that those skills are becoming increasingly essential to both the commercial and research sectors.