Unintended consequences: GPS and digital creativity

The story of the Global Positioning System (GPS), like all phenomenal technological success stories, is one of unintended consequences. GPS is now with us everywhere. It guides our driving and finds us the nearest restaurant. It aids mountain rescuers, and it helps ships stay on course. The digital world would be a very different, and less interactive, place without it. A phenomenal success story it certainly is. But its origins are more complex.

Also like most phenomenal tech successes, GPS was not born of any Eureka moment. Rather it came from a hotchpotch of technology, politics, fear and expedience. The launch of the Soviet Sputnik satellite on 4th October 1957 was a point of enormous disruption for the West in the Cold War. Not only had the Soviet Union pulled ahead in the space race, it was now clear that it had the rocket technology to strike the US homeland. Western experts sprang into action. MIT scientists found they could track Sputnik’s location using Doppler Effect principles, whereby waves vary in frequency according to the relative motion of their source (imagine standing by a long straight road as a car passes you at 80 mph: the pitch of its engine rises as it approaches and drops as it recedes). This in turn gave traction to the methods which would enable Earth-bound receivers to pinpoint their own positions using satellites. For years this locative capacity was tightly guarded by the US military: under the policy of “Selective Availability”, the signal which allowed receivers to pinpoint their positions was deliberately degraded for all but military users, so that its accuracy was all but useless except for the coarsest navigational purposes.
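The underlying principle is simple enough to sketch in a few lines of code. Below is a toy illustration (with made-up values, and not any actual tracking code) of how a satellite’s radio beacon shifts in frequency as it approaches and then recedes from a ground station – the sign change at closest approach being what made Sputnik trackable:

    # A toy sketch (illustrative values only) of the Doppler shift on a
    # satellite radio beacon. Sputnik broadcast at roughly 20 MHz and
    # orbited at roughly 7.8 km/s.
    C = 299_792_458.0  # speed of light, m/s

    def received_frequency(f_transmit_hz, radial_velocity_ms):
        """Classical Doppler approximation for a moving radio source.
        Positive radial velocity = receding; negative = approaching.
        The shift changes sign as the satellite passes overhead, which
        is what lets a ground station infer the pass geometry."""
        return f_transmit_hz * C / (C + radial_velocity_ms)

    f0 = 20e6  # ~20 MHz beacon
    for v in (-7800.0, 0.0, 7800.0):  # approaching, overhead, receding
        print(f"{v:+8.0f} m/s -> {received_frequency(f0, v):,.1f} Hz")

Track enough of those shifts against an orbital model and you can fix the satellite’s path; invert the geometry, as the US Navy’s Transit system later did, and a receiver can fix its own position.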

It took another Cold War flashpoint to shift this thinking. On September 1st 1983, Soviet air defence mistakenly shot down Korean Air Lines flight 007 to Seoul with the loss of 269 lives, believing it to be a hostile aircraft after it strayed into Soviet airspace. This brought the need for real-time geolocation into sharp focus. In the grief and outrage that followed, then-President Ronald Reagan – amid the blood-curdling threats of retribution flying between East and West – accelerated the process of making GPS available for civilian use. And then on May 1st 2000, long after the dust from the falling Berlin Wall had settled, his successor (bar one) Bill Clinton issued the order ending Selective Availability. Combined with the World Wide Web (invented 11 years earlier), this action helped set the Internet on its path from our desktops to our pockets.

[graph]
The moment Selective Availability ended on the night of 1st-2nd May 2000. Image from https://www.gps.gov/systems/gps/modernization/sa/data/

We are marking this event’s 20th anniversary in DDH with a series of events. Last week, we hosted a quadruple-headed set of talks by myself, Cristina Kiminami and Claire Reddleman of DDH, and the GPS artist Jeremy Wood, who presented an overview of how he uses GPS receivers to craft linear sculptures of human motion through the world. In his words, “[W]e are the data. We are the map.” This raises a whole set of practice-led research questions for Digital Humanities: how does GPS help us explore “annotative” approaches to the world, where movement can be captured and imbued with further meaning by associating and linking further information with it, versus “phenomenological” approaches, which stress the subjective lived experience of creating a trace (Cristina’s work); and how can an entire place, such as a penal colony, be reproduced (Claire’s work)?

Jeremy Wood presenting at “Walking with GPS”

The work we all presented last week showcased how a technology born of the unintended consequences of the struggles, crises and flashpoints of the last century’s Cold War is now open to new ways of exploring relationships between the human/physical and the digital/ephemeral. We very much look forward to exploring these questions further throughout the 20th anniversary year of the end of Selective Availability. Next will be a conference on May 1st – the actual anniversary of Clinton’s order – organized by Claire and Mike Duggan of DDH, entitled “20 years of seeing with GPS: perspectives and future directions”. This conference will surely have a rich seam of theory and practice to build on.

Digital Humanities: a Department, a Field and an Idea

Introduction: what is DH, and does anyone care?

There is a whole genre of writing out there on the subject of “What is Digital Humanities?”. For some, this is an existential question, fundamental to the basis of research, teaching and the environment of those parts of the academy which exist between computing and the humanities. For others, it is a semantic curiosity, part of an evolution of terminology from “computing in the humanities” to “humanities computing”, finally arriving at “digital humanities” when the instrumentalist implications of the first two no longer encompassed the field of activities described. For others still, it is a relic of 1990s angst over terminology as computing began to permeate the academic environment. Whichever camp one is in, it behoves people, like me, with Digital Humanities in their job title to revisit the question from time to time. This post is an attempt to do so, with a particular emphasis on the Department of Digital Humanities (DDH) at King’s College London.

The strap-line of the present-day DDH is “critical inquiry with and about the digital”. In what follows, I hope to unpack what I think this means for the field, and for DDH, which has been my institutional home since 2006. Those fourteen years have seen immense changes, both in the Department and in the field of Digital Humanities (hereafter DH) more broadly. Furthermore, tomorrow (1st February) marks six months since I took over as Head of Department of DDH. This therefore seems as good a time as any for some autobiographically driven reflection. I state, of course, the usual disclaimers. Like any healthy academic environment, (D)DH is marked by a diversity of views, a diversity we pride ourselves on embracing and celebrating; and despite being Head of Department, I speak only for myself, in a very personal capacity. Any errors of fact or interpretation in what follows are mine and mine alone.

Before I arrived at King’s, I worked for the AHRC’s ICT in Arts and Humanities Initiative at the University of Reading (to the great credit of Reading’s web support services, the AHRC ICT programme’s web pages, complete with the quintessentially 1990s banner I designed, are still available). At the time, I was no doubt suffering a colossal intellectual hangover from my efforts to apply GIS to Bronze Age Aegean volcanic tephrochronology and its archaeological/cultural contexts, and this may have coloured my view of things; but the purpose of this programme was to scope how computing might change the landscape of the humanities, and to funnel public money accordingly. This is the kind of thing that the National Endowment for the Humanities in the US has done, to great acclaim, with its Office of Digital Humanities.

What I was not, at this point, was any kind of Digital Humanist. Working outside Digital Humanities/Humanities Computing (both appellations have been used over time, but that is another story still), I recall some push-back against the application of digital methods and “e-infrastructure” from those less engaged with technology in their work, who were concerned about reductivism, and the suborning of discursive curiosity to the tyranny of calculation. I particularly recall the debates about GIS in archaeology: GIS, we were told, encouraged over-quantification and processualism, thus stifling discursive, human-centred interpretation of the past. That, at least, is how I remember the landscape when I came to work for the AHRC programme.

Keeping this in mind, let us return to the “what is DH” question. This has become more nuanced and more complex as the “Information Age” has spread and developed over the last thirty years or so. It is well worth remembering that over the last 20 years (at least), “the Digital” has impacted on “the Humanities” in myriad ways, far beyond the circle of those who self-identify as Digital Humanists (even recognizing that this is a highly permeable circle in the first place). For many, “the Digital” was once a convenient method of sending messages supported by university communication networks, which eventually gave way to suites of tools, and associated methods, which provoked new questions about the approach, methodology and even purpose of what we were doing. Historically, many of these questions were (and are) reflected in the preoccupations of wider society as “the Digital” seeped into the praxes of everyday life. Much of the debate in the bits of academia I inhabited in the early 2000s was couched in terms of if, or how, digital technology would enable research to be done faster, more efficiently and over ever larger distances. There were even questions of computers taking over human roles and functions: perish that thought now. Taking an historical view provides a bigger and contingent picture for this: my own generation was raised in the 1980s and early 1990s on movies such as The Terminator, Tron, The Lawnmower Man and WarGames – scenarios, sometimes dystopian ones, in which semi-sentient machines take over the world. I have long argued to my students that it is no coincidence that the rise of “Internetworking”, and the communication protocols that enabled it, including Tim Berners-Lee’s invention of the WWW in 1989, coincided with a genre of Hollywood movies about computers becoming better at intelligence than humans.

Enter DDH

The series of intellectual processes which led to the field of DH as we know it today thus unfolded against a backdrop of great change in society and culture, driven by computing technology. I come to the shape this gave more specifically to precursors of DH, such as “humanities computing”, a little later. But these were, and are, certainly factors which have shaped the development of DH at King’s. Across its several and various incarnations, “Humanities Computing” at King’s goes back to the early 1970s. There are traces of these times in the fabric of the environment today. If one walks from the present-day Department’s main premises on the 3rd floor of KCL’s Strand Building, down the main second floor corridor of the King’s Building towards the refectory, in the bookshelves on the left hand side – amid Sir Lawrence Freedman’s library on the history of war and a small display of Sir Charles Wheatstone’s scientific instruments – is a collection of volumes and conference proceedings that originate from CCH/DDH’s early activities (picture below).

DDH came into being, by that name, in 2010. Prior to that it was known as the Centre for Computing in the Humanities (CCH), which had been established as an academic department in its own right in 2002. Harold Short, formerly Director of CCH and the first Head of Department of DDH, wrote that:

The Centre for Computing in the Humanities (CCH) at King’s College London is a teaching department and a research centre in the College’s School of Humanities. Its current status was reached during the course of the 2001-2002 academic year, and may be seen as the natural outcome of a process that began in the 1970s.  ‘Humanities computing’ began at King’s in the early 1970s, with Computing Centre staff assisting humanities academics to generate concordances and create thesaurus listings. The arrival of Roy Wisbey as Professor of German gave the activity a particular boost. Wisbey had started the Centre for Literary and Linguistic Computing while at Cambridge (a Centre which is still in existence, with John Dawson as its Director). In 1973 the inaugural meeting of the Association for Literary and Linguistic Computing (ALLC) was held at King’s, and Wisbey was elected as its first Secretary. Although Wisbey did not feel it necessary to create a specific humanities computing center, he was Vice-Principal of the College in the late 1980s when a series of institutional mergers gave him the chance to propose the formation of a ‘Humanities and Information Management’ group in the restructured Computing Centre.

[Harold Short, July 2002 – reproduced with his permission]

My central argument is that DDH’s story is one of evolution, development, and even – perhaps contrary to some appearances – continuity. This is an impression driven by the kind of question that we have always asked at King’s. When I joined the Department fourteen years ago, two questions seemed far more interesting than “what is Digital Humanities”: what is “the digital” for in the humanities; and how has “the digital” changed through time? The last fifteen years have been a key period for “humanities done with the Digital”. Digital tools allow humanists to interrogate data more deeply, more thoroughly, with greater attention to the nuance between qualitative and quantitative data (a distinction far less grounded in the humanities than in the social sciences). Those questions are as interesting now as they were then; and they have only become more acute as the wider landscapes of technology, the humanities, and connective research have changed.

To text or not to text

When I joined King’s, perhaps even before, one of the first things I learned about the heritage (with a small “h”) of Humanities Computing/Computing in the Humanities/Digital Humanities is that, particularly in the US, it had a long history of engagement with the world of text, and its natural home there, where it had one, lay in university English departments. Text was certainly a low-hanging fruit for the kind of qualitative and quantitative research that computing enabled. Many DH scholars trace the origins of the field to the life and work of Roberto Busa (1913–2011), the Jesuit priest whose scholarship in the 1950s on lemmatizing the writings of Thomas Aquinas resulted in the Index Thomisticus, built with punch-card programming and widely regarded as the first major application of “Computing in the Humanities”. In the context of 1950s computing, the project was enabled by the epistemological inclination of text to lend itself to calculation: text, as a formal system of recording “data”, which is convertible into information by the process of forming sentences and paragraphs, and thence into knowledge by the process of reading, can be (fairly) unproblematically transferred to punch cards, then the principal form of storing data. (This was a vast human undertaking, relying on many operatives, skilled and unskilled, many of them women, whose stories are now being re-told through the work of Melissa Terras and Julianne Nyhan.) Matthew Kirschenbaum takes up this argument:

First, after numeric input, text has been by far the most tractable data type for computers to manipulate. Unlike images, audio, video, and so on, there is a long tradition of text based data processing that was within the capabilities of even some of the earliest computer systems and that has for decades fed research in fields like stylistics, linguistics, and author attribution studies, all heavily associated with English departments.

[Kirschenbaum, Matthew G. “What is digital humanities and what’s it doing in English departments?”, in M. Terras, J. Nyhan and E. Vanhoutte (eds.) 2016: Defining Digital Humanities. Routledge: 213.]

CCH did a lot of work on text, but it did many other things besides. Even before I joined, I remember seeing CCH as a dynamic crucible of new thinking, a forum in which classicists, archaeologists, textual scholars, literary researchers, visualization experts, theatre academics, and more, could come together and speak some kind of common language about what they did, and especially how they developed, critiqued and used digital tools. This was a view shared by much of the rest of the world. Most visibly to me, it was recognized by the AHRC’s award of the grant that enabled me to come to King’s, the AHRC ICT Methods Network, at that time the biggest single award the AHRC (and its predecessor, the Arts and Humanities Research Board) had ever made. It was a revelation to me that such a place could even exist. It was certainly nothing like the environment I had known as a lonely GIS/Archaeology PhD researcher, working in a department full of experts on Plato. It was therefore with great excitement that I joined CCH in January 2006, as a research associate in the Arts and Humanities E-Science Support Centre, which was attached to the Methods Network (what AHeSSC was and what it did is another long story, partly told in a much earlier post on this blog). It was just as exciting an environment to work in as it had looked from the outside.

Connective research

A delve into the Centre’s public communications at this time shows how this collaborative spirit brought it to re-think what the humanities might mean in the Information Age. The Internet Archive’s Wayback Machine [a venerable resource, launched back in 2001, and now an indispensable tool for much research on the history of the Web] took an imprint of CCH’s website on Saturday 14th January 2006, two days before my first day at work there. It has this to say about the Centre’s role, and seems to confirm the recollection above, that CCH was primarily an agent of collaborative research:

The Centre for Computing in the Humanities (CCH) is in the School of Humanities at King’s College London. The primary objective of the CCH is to foster awareness, understanding and skill in the scholarly applications of computing. It operates in three main areas: as a department with responsibility for its own academic programme; as a research centre promoting the appropriate application of computing in humanities research; and as a unit providing collegial support to its sister departments in the School of Humanities. As a research centre, CCH is a member of the Humanities Research Centres, the School’s umbrella grouping of its research activities with a specifically inter-disciplinary focus.

Despite the emphasis on collaborative research, one can see from this that CCH was also a place that did, very much, its own original thinking, grounded in the methods and ideas that were driving humanities computing at the time (see above). We can get a flavour of this by looking at the titles of the seminars it ran, which are still there for all to see in the Wayback Machine: “Choice in the digital domain – will copyright extend or stifle choice?”; “Adventures in Space and Time: Spatial and Temporal Information in the Digital Humanities”; “From hypermedia Georgian Cities to VR Jazz Age Montmartre: hyperlinks or seamlessness?”; and “The historian as aesthete: One scenario for the future of history and computing”. Some of these titles would not feel at all out of place in the seminar series of the DDH of today. Therefore, while the field and the Department have both (of course) changed significantly over the years, this suggests that there are some threads of continuity, as well as evolution, running through it all: innovation, responses to the challenges and opportunities of the digital, thinking through new approaches to the human record; and indeed to what the “human record” is in the digital age. This, certainly, accounts for the first of the areas indicated above, in which CCH emerged as a department responsible for its own academic programme. What, I think, has changed most is how the department collaborates.

The late 1990s/early 2000s were a time of great change and innovation in DH, both technologically and institutionally. A 2011 Ithaka research report noted that

In 2009 the Department of Digital Humanities (DDH), formerly known as the Centre for Computing in the Humanities (CCH), presented the model of a successful cross-disciplinary collective of digital practitioners engaged in teaching and research, with knowledge transfer activities and a significant number of research grants contributing to its ongoing revenue plan.

This highlights the fact that much of the Centre’s activity depended on income from externally funded research projects. Ground-breaking collaborations in which CCH was involved, and which in many cases it led, still resonate: the Henry III Fine Rolls, projects in musicology, and the Prosopography of Anglo-Saxon England all produced world-class original research, which contributed to a variety of areas, some directly in the associated humanities domains, some in CCH itself. Interdisciplinary collaboration was the lifeblood of the Centre at this time.

It’s all about the method

One thread of continuity is what I would call “methodological emplacement”. That is to say, CCH/DDH has always had an emphasis on what it means to do Digital Humanities, the practice of method as well as the implementation of theory (whose theory is a key question). This, in itself, challenges “conventional” views of the humanities, inherently rooted in the theory and epistemology of particular ways of looking at the world. Among other things, it results in a willingness to deconstruct the significance of the research output – the monograph, the journal article, the chapter in the august collection. Providing a space to think beyond this, to consider the process, the method, and other kinds of output, in itself lends itself to interdisciplinary digital work. Arguably, the three strands of the mid-2000s CCH outlined above constituted just such a space; and that space is surely still a concern at the core of the present-day DDH.

In a forthcoming handbook volume on research methods for the Digital Humanities, which my DDH colleague Dr Kristen Schuster and I are co-editing, we develop the idea of methodology drawing from multiple theories:

The fact that each concept illustrates a matter of process rather than output from different perspectives is, we argue, telling of how badly we need to discuss research methods at large instead of research outputs. Considering research as a process, rather than an amorphous mass of activity behind a scholarly output, makes room for identifying crosscurrents in theories, platforms, infrastructures and media used by academics and practitioners – both in and beyond the humanities.

One can hardly expect a field which is concerned with an academic research programme in digital methods and their “appropriate application” in the humanities not to change, and to expand the theoretical basis that underpins it, as the Information Age gallops on. My personal “year zero”, 2006, was only two years after Facebook was founded (February 2004), two months before the launch of Twitter (March 2006), a year after Google branched into mapping, and two years after the first widely-used adoption of the term “Web 2.0”. In our 2017 book, Academic Crowdsourcing in the Humanities: Crowds, Communities and Co-production, my colleague Mark Hedges and I argue that the mid-2000s were a transformative period in networked interactivity online, the time when movements like internet-enabled crowdsourcing had their origins (the word “crowdsourcing” itself was coined in Wired magazine in 2006): DH was transformed, just like everything else.

Change in DH, here and everywhere else, continued apace. A key moment in the Department’s more recent history came in 2015, when King’s Digital Lab (KDL) was established, providing an environment for much of the developer and analyst expertise that had previously resided in DDH, and before that in CCH. Today the two units have a closely symbiotic relationship, with KDL establishing a ground-breaking new agenda in the emerging field of Research Software Engineering for the humanities. This is far more than simply a new way of doing software engineering: rather, KDL’s work is providing new critical insights into the social and collaborative processes that underpin excellent DH research and teaching, establishing new ways of building both technology and method. The creation of KDL also underlines the fact that “DH at King’s” is no longer the preserve of a single department or centre; rather, DH is now a field which is receiving investment of time, energy, ideas and, yes, money across the institution.

With and About the Digital: from Busa to Facebook

Just as Busa’s work in the 1950s opened up text to new forms of interrogation by transferring it to the medium of punch cards, where it could be automatically concordanced, so these developments open up the human cultural record, including its more recent manifestations, to new kinds of interrogation and analysis. I do not think this should be a particularly controversial view. After all, my former employer, the AHRC, fully embraced this post-millennial epistemological shift in the humanities very explicitly with its “Beyond Text” strategic initiative, which ran from 2007 to 2012. This was described as “a strategic programme to generate new understandings of, and research into, the impact and significance of the way we communicate”, a response to the “increased movement and cross-fertilization between countries and cultures, and the acceleration of global communications”. The reality that the humanities themselves were changing in the face of a newly technological society is writ large.

This truth is not changing. In the eight years since the Beyond Text initiative finished, global communications, and the kinds of digital culture and society they enable, have grown more complex, more pervasive and less subject to the control of any individual human authority or agency (with the possible exception of the Silicon Valley multinational giants) and, with the emergence of phenomena such as Fake News, ever more problematic. The digitalization (as opposed to digitization) of culture, and heritage, and politics, and communications – all the things it means to be human – has opened up new arrays of research questions and subjects, just as the digitization of text did in the twentieth century. To put it another way, the expansion of “the Digital” has given DH the space to evolve. I believe it must embrace this change, while at the same time retaining and enriching the humanities-driven critical groundwork upon which it has always rested.

Alan Turing himself said that “being digital should be of more interest than being electronic”. And so it has always been at DDH: Digital Humanists have always known this. The present strapline of the Department of Digital Humanities is “critical inquiry with and about the digital”. The prepositions “with” and “about” provide space for a multivocal approach, which includes both the work DDH (and CCH before it) has excelled at in the past, and that which it does now; crucially, in my view, this enables them to learn from one another. “Critical research with the digital” is, I would argue, exactly what Busa did, and exactly what the English Laws, Prosopography and Fine Rolls projects are. At the same time, “critical research about the digital” recognises the reality that “the digital” itself has become a subject of research – the elements of society and culture (increasingly all of them, at least in the West) which are mediated by digital technology and environments. As I finish my first six months as Head and look to the next, I want to see the Department continue to be a space which enables the co-equality of “with” and “about”.

Red Posts

For a number of years, I’ve driven from Berkshire (and before that, was driven as a child from London) to Dorset, south west England, to visit various family members. An intriguing feature of the drive is the “Red Post”, at Bloxworth, near Wareham. It is an otherwise ordinary rural finger-post but painted red, with the placenames picked out in white lettering.

There are other such posts dotted around Dorset and the south west of England (possibly elsewhere as well), including one which, when I went that way last year, had been vandalized, on the road between Beaminster and Evershot (below).

One theory I have heard is that they mark the sites of gibbets/gallows, although I haven’t been able to find any documentary verification of this. It also seems a bit unlikely and impractical to bring felons to the middle of nowhere to execute them.

Some are marked with the legend “Red Post” on the nineteenth-century OS six-inch series maps, but some (including the Bloxworth post above) are not. I’ve come across a general public Act of Parliament of 1753, whose text begins:

A BILL for Repairing, Amending, and Widening the ſeveral roads leading from the Red-Poſt, in the Pariſh of Fivehead, where the Taunton Turnpike ends, through the Pariſh of Curry Rivell, the Towns of Langport and Somerton, to Butwell … etc

This suggests some considerable antiquity to the phenomenon; and that in the eighteenth century, Red Posts were well known and visible as waymarkers. So I am digging a bit more. I have a few ideas, and I hope that 2020 will bring a slightly more formal publication on the matter, on one platform or another.

This is a plea, in the meantime, that if anyone familiar with the English countryside knows anything about its Red Posts/Poſts, then I’d love to hear from you…

A History of Place 3: Dead Trees and Digital Content

The stated aim of this series of posts is to reflect on what it means to write a book in the Digital Humanities. This is not a subject one can address without discussing how digital content and paper publication can work together. I need to say at the outset that A History of Place does not have any digital content per se. Therefore, what follows is a more general reflection on what seems to be going on at the moment, perhaps framing what I’d like to do for my next book.

It is hardly a secret that the world of academic publication is not particularly well set up for the publication of digital research data. Of course the “prevailing wind” in these waters is the need for high-quality publications to secure scholarly reputation, and with it the keys to the kingdom of job security, tenure and promotion. As long as DH happens in universities, the need to publish in order to be tenured and promoted is not going to go away. There is also the symbiotically related need to satisfy the metrics imposed by governments and funding agencies. In the UK, for example, the upcoming Research Excellence Framework exercise explicitly sets out to encourage (ethically grounded) Open Access publication, but this does nothing to problematize the distinction, which is particularly acute in DH, between peer-reviewed research outputs (which can be digital or analogue) and research data, which is perforce digital only. Yet research data publication is a fundamental intellectual requirement for many DH projects and practitioners. There is therefore a paradox of sorts, a set of shifting and, at times, conflicting motivations and considerations, which those contemplating such publication are faced with.

It seems to me that journals and publishers are responding to this paradox in two ways. The first is to facilitate the publication of traditional articles online, albeit short ones, which draw on research datasets deposited elsewhere, and to require certain minimum standards of preservation, access and longevity. Ubiquity Press’s Journal of Open Archaeological Data (JOAD), as the name suggests, follows this model. It describes its practice thus:

JOAD publishes data papers, which do not contain research results but rather a concise description of a dataset, and where to find it. Papers will only be accepted for datasets that authors agree to make freely available in a public repository. This means that they have been deposited in a data repository under an open licence (such as a Creative Commons Zero licence), and are therefore freely available to anyone with an internet connection, anywhere in the world.

In order to be accepted, the “data paper” must reference a dataset which has been accessioned by one of 11 “recommended repositories”, including, for example, the Archaeology Data Service and Open Context. It recommends that more conventional research papers then reference the data paper.

The second response is more monolithic: the publisher takes on the data produced by or for the publication, and hosts it online itself. One early adopter of this model is Stanford University Press’s digital scholarship project, which seeks to

[A]dvance a publishing process that helps authors develop their concept (in both content and form) and reach their market effectively to confer the same level of academic credibility on digital projects as print books receive.

In 2014, when I spent a period at Stanford’s Center for Spatial and Textual Analysis, I was privileged to meet Nicolas Bauch, who was working on SUP’s first project of this type, Enchanting the Desert. This wonderful publication presents and discusses the photographic archive of Henry Peabody, who visited the Grand Canyon in 1879 and produced a series of landscape photographs. Bauch’s work enriches the presentation and context of these photographs by showing them alongside viewsheds of the Grand Canyon from the points where they were taken, thus providing a landscape-level picture of what Peabody himself would have perceived.

However, to meet the mission SUP sets out in the passage quoted above requires significant resources, effort and institutional commitment over the longer term. It also depends on the preservation not only of the data (which JOAD does by linking to trusted repositories), but also the software which keeps the data accessible and usable. This in turn presents the problem encapsulated rather nicely in the observation that data ages like a fine wine, whereas software applications age like fish (much as I wish I could claim to be the source of this comparison, I’m afraid I can’t). This is also the case where a book (or thesis) produces data which in turn depends on a specialized third-party application. A good example of this would be 3D visualization files that need Unity or Blender, or GIS shapefiles which need ESRI plugins. These data will only be useful as long as those applications are supported.
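One pragmatic mitigation, which anticipates the advice below, is to convert such application-dependent formats into open, well-documented ones at the point of deposit. A minimal sketch of what I mean, assuming Python with the geopandas library (“survey.shp” is a hypothetical input file):

    # A minimal sketch (not project-specific): converting a shapefile, which
    # leans on GIS application support, to GeoJSON, an openly documented
    # plain-text format that can outlive the software that produced it.
    # Assumes the geopandas library; "survey.shp" is a hypothetical file.
    import geopandas as gpd

    gdf = gpd.read_file("survey.shp")                 # read the legacy format
    print(gdf.crs, len(gdf), "features")              # sanity-check the contents
    gdf.to_file("survey.geojson", driver="GeoJSON")   # write the open format

The wine-and-fish truism still applies, of course, but a plain-text, openly specified format at least decants the data into a bottle that future software can open.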

My advice therefore to anyone contemplating such a publication, which potentially includes advice to my future self, is to go for pragmatism. Bearing in mind the truism about wine and fish, and software dependency, it probably makes sense to pare down the functional aspect of any digital output, and focus on the representational, i.e. the data itself. Ideally, I think, one would go down the JOAD route, and deposit one’s data in a trusted repository which has the professional skills and resources to keep the data available. Or, if you are lucky enough to work for an enlightened and forward-thinking Higher Education Institution, a better option still would be to have its IT infrastructure services accession, publish and maintain your data, so that it can be cross-referred with your paper book which, in a wonderfully “circle of life” sort of way, will contribute to the HEI’s own academic standing and reputation.

One absolutely key piece of advice – probably one of the few aspects of this, in fact, that anyone involved in such a process would agree on – is that any Uniform Resource Identifiers (URIs) you use must be reliably persistent. This was the approach we adopted in the Heritage Gazetteer of Cyprus (HGC) project, one of whose main aims was to provide a structure for URI references to toponyms that was both consistent and persistent, and thus citable – as my colleague Tassos Papacostas demonstrated in his online Inventory of Byzantine Churches on Cyprus, published alongside the HGC precisely to demonstrate the utility of persistent URIs for referencing. As I argue in Chapter 7 of A History of Place, in fact, developing resources which promote the “citability” of place, and link the flexibility of spatial web annotations with the academic authority of formal gazetteer and library structures, is one of the key challenges for the spatial humanities itself.

I do feel that one further piece of advice needs a mention, especially when citing web pages rather than data: ensure the page is archived using the Internet Archive’s Wayback Machine, then cite the Wayback link, as advocated earlier this year here:

This is very sound advice, as it will ensure persistence even if the website itself decays or disappears.
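Checking whether a page already has a snapshot – and retrieving the archived link to cite – can even be scripted against the Internet Archive’s public availability API. A minimal sketch, assuming Python with the requests library (the example URL is arbitrary):

    # A minimal sketch: query the Wayback Machine's availability API for the
    # most recent snapshot of a page, so the archived URL can be cited.
    import requests

    def latest_snapshot(url):
        """Return the URL of the closest Wayback snapshot, or None."""
        resp = requests.get("https://archive.org/wayback/available",
                            params={"url": url}, timeout=30)
        resp.raise_for_status()
        closest = resp.json().get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest else None

    print(latest_snapshot("https://www.gps.gov/"))  # arbitrary example page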

Returning to the publication of data alongside a print publication, however: the minimum one can do is simply purchase a domain name and publish the data oneself, alongside the book. This greatly reduces the risk of obsolescence, keeps you in control, and recognizes the fact that books, by their very nature, start to date the moment they are published.

All these approaches require a certain amount of critical reduction of the idea that publishing a book is a railway buffer which marks the conclusion of a major part of one’s career. Remember – especially if you are early career – that this will not be the last thing you ever publish, digitally or otherwise. Until that bells-and-whistles hybrid digital/paper publishing model arrives, it is necessary to remember that there are all sorts of ways data can be preserved, sustained and made to form a valuable part of a “traditional” monograph. The main thing for your own monograph is to find the one that fits; and it may be that you have to face down the norms and expectations of the traditional academic monograph, and settle for something that works, as opposed to something that is perfect.

New posts at DDH

 

King’s College London is recruiting Lecturers and a Senior Lecturer in the Department of Digital Humanities. Lecturers are the UK equivalent of Assistant Professors and Senior Lecturers correspond to Associate Professors in the US system.

King’s College London is in the fourth year of making a significant investment in the Department of Digital Humanities as part of an ambitious programme of growth and expansion in existing and emergent research areas, and in student numbers across its five Master’s programmes and the BA Digital Culture.

King’s College London has a long tradition of research in the Digital Humanities, going back to the early 1990s. King’s is one of the few places in the world where students at all levels can pursue a wide range of inter-disciplinary study of the digital (https://www.kcl.ac.uk/ddh/about/about).

We are seeking to recruit exceptional candidates to join the Department no later than 1st September, who can enthuse and inspire our students, conduct world-leading research, and contribute to the life and reputation of the Department through academic leadership and public engagement.

We are hiring for

Closing date: 28 April 2019

Lecturer candidates will be on their way to becoming scholars of international standing with a research and publication trajectory that illustrates this ambition. They will contribute to the further development of the Department’s research strengths, provide high-quality teaching and supervision, and work collaboratively within the Department and beyond.

Senior Lecturer candidates will be scholars of international standing with a strong research and publication record and evidence of or potential for research income generation. The successful applicants will play a key role in leading work across the Department to enhance our research strengths, to develop new and emergent research areas, to innovate in teaching practice and pedagogy, and to contribute to our underpinning values of co-research and collaboration.

A History of Place 2: Indexing

I opted to compile the index of A History of Place myself. I made this choice for various reasons, but the main one was that the index seemed to me to be an important part of the volume’s framing and presentation. Reflecting on this, it seems a little ironic, as in some ways a book’s index exemplifies the age of the pre-digital publication. Using someone’s pre-decided terms to navigate a text is antithetical to the expectations and practices of our Googleized society. Let’s face it, no one reading the e-version of A History of Place is ever going to use the index, and in some ways compiling the index manually, reviewing the manuscript and linking key words with numbers which would, in due course, correspond with dead-tree pages felt almost like a subversive act.

But like an expertly curated library catalogue, an expertly compiled index is an articulation of a work’s structure, and requires a set of decisions that are more complex than they may at first seem. These must consider the expectations and needs of your readers, and at the same time reflect, as accurately as possible, the current terminologies of your field. The process of indexing made me realize that it gives one a chance (forces one, in fact) to reflect – albeit in a bit of a hurry – on the key categories, terminology and labels that one and one’s peers use to describe what they do. It thus makes one think about what terms mean, and which are important – both to one’s own work, and to the community more broadly (some of whom might even read the book).

There is also the importance of having a reliable structure. As I outline in the book itself, and have written elsewhere in relation to crowdsourcing, some have argued that using collaborative (or crowdsourced) methods to tag library catalogues for the purposes of searching and information retrieval disconnects scholarly communities from the ‘gatekeepers of the cultural record’, which undermines the very idea of the academic source itself (Cole & Hackett, 2010: 112–23) [1]. Cole and Hackett go on to highlight the distinction between “search” and “research”, whereby the former offers a flat and acritical way into a resource (or collection of resources) based on user-defined keywords, whereas the latter offers a curated and grounded “map” of the resource. While, in this context, Cole and Hackett were talking about library catalogues, exactly the same principle applies to book indices.

I don’t wish to overthink what remains, after all, a rather unglamorous part of the writing process; but even in the digital age, the index continues to matter. Even so, there is no shame at all in busy academics (or any other writers) delegating the task of compiling an index to a student or contract worker, provided of course that person is fully and properly paid for their efforts, and not exploited. But I think it is necessary to have a conversation with that person about strategy and decision making. What follows are some examples from A History of Place which exemplify issues that authors might wish to consider when approaching their index, and/or discussing with their indexer. Through these examples I try to unpack the decisions I made about which terms and sub-terms to include, and why.

To begin with the practicalities, the wise advice provided by Routledge was:

You don’t have to wait for the numbered page proofs of your book to arrive – start to think about entries when you have completed the final draft of your typescript. The index is always the last part of the book to be put together and submission of your final copy will be subject to a tight deadline. Preparing it now may save you time later on. [emphasis added]

I would suggest that it is a good idea to think about these even before the numbered proofs turn up.

And then

On receipt of [numbered proofs], you should return to your already-prepared list of words. Use the numbered proofs to go through your book chapter by chapter and insert the page numbers against each entry on your list. (You can use the ‘Find’ function to locate words within the proof PDF.)

The gap between compiling your original list and adding page numbers will help you to evaluate your designated entries once more. Have you missed anything obvious? Are your cross references accurate and relevant? Revisit the questions under the heading ‘Choice of Entry’.

When you are satisfied that your index is complete, put it into alphabetical order.
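As an aside, the mechanical part of this workflow – matching a prepared list of entries against the numbered proofs and alphabetizing the result – is simple enough to sketch in code, though it is of course no substitute for the intellectual decisions discussed below. A toy illustration in Python, with entirely made-up page texts and entries:

    # A toy sketch of drafting an index: given the text of each numbered
    # proof page, record the pages on which each candidate entry occurs,
    # then print the entries in alphabetical order. All data is made up.
    from collections import defaultdict

    pages = {  # page number -> extracted page text (hypothetical)
        101: "Chorography has a long history...",
        102: "Neogeography and collective bias...",
        103: "Chorography again; neogeography too...",
    }
    entries = ["chorography", "neogeography", "collective bias"]

    index = defaultdict(list)
    for page_no, text in sorted(pages.items()):
        for entry in entries:
            if entry in text.lower():
                index[entry].append(page_no)

    for entry in sorted(index):  # alphabetical order, as Routledge advises
        print(entry, ", ".join(str(p) for p in index[entry]))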

You will come to love proper names in the early stages of this process. There is, for example, only one way you can represent Abraham Ortelius or Tim Berners-Lee in your index, and no decisions are involved in defining the page ranges for the references to them.

However, the process of selecting abstract terms for inclusion is more challenging. There were arguments both for and against including the word “Bias”, for example. All maps are biased of course, and in theory this could have applied to most of the examples I discuss. However, it forms an important topic of much recent literature on neogeography (for example), which addresses the ways in which neogeographic platforms perpetuate social bias due to their demographics (mostly white, male, Western etc.). Therefore, inclusion made sense, as it referenced explicit discussion of bias in the secondary literature (mostly in the chapter on neogeography). It was possible to connect this to “collective bias” via the cross-referencing option of “see also”, of which Routledge advises:

  • See

If the entry is purely a cross reference, the entry is followed by a single space, the word ‘see’ in italics and the cross reference. For example:

sensitivity see tolerances

Note that under the entry for ‘tolerances’ there is no cross reference back to ‘sensitivity’. Page numbers should not be stated where ‘see’ is used.

  • See also

This should be used to direct the reader to additional related information.

This is a useful distinction, because it forces one to consider whether terms are synonymous or merely related. “Bias” and “collective bias” are a good example: the original term is somewhat fluid and required some consideration up front, but it is clearly different from “collective bias”.

Highly specific and specialized terms presented less of a problem. Chorography, for example, features prominently in my index, but it could potentially have had any number of “see also…” references. However, given that it is such a specialized term, I made a pragmatic decision (based partly on what I thought a reader using the index would need/want) to have it stand alone, with no cross-references at all.

The most challenging terms were the big, important ones with multiple potential meanings. “GIS” is probably the most obvious example for A History of Place. Most of my arguments touch in some way on how spatial thinking in the humanities has emerged from, and been shaped by, GIS and related technologies, so the challenge was to divide the term up into subsections which are a) useful for a potential reader, and b) reflective of disciplinary practices. My strategy was to treat branches of GIS which have been explicitly recognized and differentiated in the literature – such as Critical GIS, Qualitative GIS, Participatory GIS, Historical GIS and Literary GIS – as separate index terms, linked as “see also” references. These are tied only to specific occurrences of each term. For discussions of GIS not explicitly relating to those terms, I used “and…” references tied to my chapter themes. This enabled me to divide the myriad references to GIS into sections which accord logically with the book’s structure: “– and archaeology”, “– and spatial analysis”, “– and text”, “– and crowdsourcing” and so on.

“Neogeography” created similar problems, and the difficulty with this type of term is compounded when the field moves so quickly. A recent paper by Linda See and others illustrates just how difficult this term is to pin down. I think all I can draw from this is that such index terms will need considerable revisiting in the event of any future editions(!).

So, the agenda for that initial conversation with your indexer should, I would suggest, include:

  • Strategies for dealing with abstract terms, and deciding which are relevant and which are not
  • Important, wide-ranging terms, and what sub-categories you think they should have
  • How to identify specific terms which may or may not need “see also” references
  • Which sorts of circumstances require you to signpost between related terms using the “see” option
  • Terms to flag – for your own reference if nothing else – that may not be easily “future-proofed”

 

[1] Cole, R., & Hackett, C. (2010). Search vs. research: Full-text repositories, granularity and the concept of “source” in the digital environment. In C. Avery & M. Holmlund (Eds.), Better off forgetting? Essays on archives, public policy and collective memory (pp. 112–123). Toronto: University of Toronto Press.

A History of Place 1: Authorship and Teaching

This is the first of a series of posts to follow the recent publication of my book, A History of Place in the Digital Age. My aim with these is to look at various topics on the theme of what it means to write a book in the Digital Humanities (DH), as a means of reflecting on what I got out of the process as (I guess) a “Digital Humanist”; partly to capture such reflections for posterity (whether or not posterity has any interest in them), and also in the hope that they might be of some sort of value to others considering such a course. Here, I look at the links between writing a book and teaching, in both the undergraduate and postgraduate taught classrooms.

A prosaic problem is, of course, accessibility. Even the most committed student would balk at the cover price of A History of Place, and it inevitably takes time for an institutional library to place orders. It is therefore worth exploring in detail what one can and cannot do to make one’s work available under the terms of one’s contract; and, where possible, expediting library acquisitions by (for example) encouraging purchase of the e-edition. I don’t think I have any searing insights to offer on this subject; rather, I see it as part of a much broader and more complex set of issues around Open Access in Higher Education which, I am sure, others are far more qualified to comment on than me.

It’s a tenet of major research universities such as my own that our teaching should strive to be “research-led” (see e.g. Schapper, J. and Mayson, S.E., 2010. Research-led teaching: Moving from a fractured engagement to a marriage of convenience. Higher Education Research & Development, 29(6), pp. 641-651). Most, I guess, would interpret this as meaning that the content delivered in the classroom is sourced from one’s own original research, in the context of one’s Department’s research profile and strands, conveyed via various pedagogical tools and techniques. Most of the latter are based on conventional scholarly publications, notably journals and, of course, books. In the future, I would like to consider particularly the implications of writing a book – and thus of complicity, for better or for worse, in that environment – for student assessment, and the kind of culture the act of authorship encourages. In this, one needs to bear in mind that the traditional essay is perhaps not best suited to the delivery of all kinds of DH content. To transcend this truth (which I think most DHers would accept) I plan to use some of the cases discussed in my book – for example mapping references to Cypriot places in texts (the subject of much of Chapter 7) – to encourage students to test and challenge the observations I make on the construction of imperial identities, for example by making practical use of the Heritage Gazetteer of Cyprus. Ideally, I would like to accompany this with screencast videos alongside the institutional lecture capture (the desirability and ethics of this are a story for another day); something I have already started to do to assist students with the Quantum GIS exercise they undertake.

The text of my book is structured around a postgraduate module I have taught for three years now, Maps, Apps and the GeoWeb: An Introduction to the Spatial Humanities. I think it is fair to say that there has been a great deal of cross-fertilization between the two activities, certainly at the design stage. The chapter structure of the book partly, but not entirely, reflects the weekly lecture programme for Maps and Apps, a conscious decision I made when drawing up the book proposal, figuring that there could be economies of scale in the effort involved in both tasks.

Writing chapters which correspond to classes was a great opportunity – and impetus – to update my own knowledge and scope of understanding of those topics, and numerous new insights have found their way into the lectures this year as a result. This has also given me a richer context from which to include, where appropriate, examples from contemporary life, something I feel greatly helps students engage with complex concepts by relating them to their own experience. For example, my class on neogeography, delivered towards the end of the semester, was updated this year to include a discussion of the commercial appropriation of passive neogeographic material by transport service companies, something which I explore at a theoretical level in Chapter 5 of A History of Place; but which (I think) also points students towards much broader contemporary issues which they see in the news, such as the appropriation of user data by platforms such as Facebook. Teaching this class, I have found that students react creatively and imaginatively to a task where they are asked to find actual instances, based on their own local knowledge and spatial experience, of omissions of features in OpenStreetMap, of the kind described by Monica Stephens in her 2013 paper Gender and the GeoWeb: divisions in the production of user-generated cartographic data, an important text both in this class’s reading list and in Chapter 5.

Another thing I found interesting to reflect on is that other chapters, somewhat counter-intuitively, emerged from reading, research and conversations I conducted while setting up other modules I teach, in which I had relatively little previous grounding, and which were seemingly of limited relevance to the topic in hand. Chapter 2, for example, seeks to place the spatial humanities in the context of its long history. This includes the emergence of the Internet and WWW in the twentieth century, and a discussion of how these impacted human perceptions and experience of space and place. As one might expect, I draw heavily on Doreen Massey’s A Global Sense of Place in this section (a core reading for Week 1 of Maps and Apps, as it happens); but much of the literature I use to do so is drawn from my undergraduate History of Network Technology module. This was a course I began teaching in 2015/6, on quite a steep learning curve (if that sounds like a euphemism, that’s because it is). But a couple of years’ reflection on, for example, the work of J. C. R. Licklider, Paul Baran and Vannevar Bush – readings I expanded and consolidated for the book – brought me to realize that one can’t really understand the methodologies which underlie the spatial humanities without reference to the mid-twentieth century’s struggles with the post-war information deluge; for example Licklider’s vision, expressed in “Man-Computer Symbiosis” (1960), of

[A] network of “thinking centres”, connected to one another by wide-band communication lines and to individual users by leased-wire services. In such a system, the speed of the computers would be balanced, and the cost of the gigantic memories and the sophisticated programs would be divided by the number of users.

In my view, this expresses a “de-spatialization” of human knowledge that directly foresees the kind of interactions and data transactions now familiar to users of OpenStreetMap, Google Maps, and indeed any other kind of geo platform. This seems to me to be fundamental to the epistemology of the spatial humanities, helping to explain the emergence of the GeoWeb in the broader context of the information age. I therefore have to admit – with a little trepidation – that as well as my teaching being research-led, my research is, to an extent, teaching-led. This is before I even factor in discussions with the students themselves, in tutorials, seminars and questions after lectures (which do happen occasionally); more than once my mind has been changed on a particular issue by an excellent student paper.

Of course, it depends on what kind of book you’re writing. A History of Place is quite a broad-brush consideration of the history and impact of GIS and related technologies on the humanities, which is analogous to the module design and learning outcomes of Maps and Apps. It is therefore logical to expect a rough, although not necessarily comprehensive (see above), correlation between the class topics and chapters. It would of course be different in the case of a highly specialized book dealing with a particular topic in depth. But I feel sure there would be similar crossovers even in such cases.

So, to summarize: obviously we must aim to pass on our original insights as scholars to students. However, it is also well worth considering, in as detached a manner as possible, what new insights your teaching might contribute to your book. This, in turn, will help you to strengthen and improve your teaching, and the curricula of your courses – especially when (as many Digital Humanists do) you teach courses across disparate subject areas. A careful mapping of course reading lists to your own bibliography can be very helpful for the same reason.