What versus How: teaching Digital Humanities after COVID-19

It will take a very long time for us to fully understand the long-term impact of the current COVID-19 crisis, and all the horrors it has brought to the world. By “us” I mean Higher Education, but of course this applies globally. Last month, in the space of a week, many universities (including, of course, my own) underwent the kinds of changes that would normally take five years or more to effect; and it is unclear when any kind of “normality”, as visible in the familiar processes of face-to-face Higher Education, will return. Given the great dependence of the global HE sector on academic and student mobility, and (some argue) the generally disorganized nature of many Western governments’ initial responses to suppressing the outbreak, some predictions estimate that it may be March 2021, or even later in Western Europe, before such normality can resume.

As the next academic year approaches – and its potential timing is discussed – we need to consider online teaching as a matter of resilience. After all, the proto-Internet itself emerged in the 1960s and 1970s partly as a response to the shadow of the Cold War, providing a means of channelling executive command decisions through “distributed networks” which could survive nuclear attack. Given that COVID-19 and/or other pandemics may well recur, we have a responsibility to our students, and to each other, to consider how we might weather such storms in the future.

More importantly though, it is a matter of pedagogy. One thing to say at the start – extremely obvious within the DH community, but still perhaps in need of re-stating – is that moving teaching normally done face to face into an online setting at a time of emergency is not the same thing as online pedagogy, never mind good online pedagogy. No one – academics, students, management – should expect it to be. Once this fundamental truth is acknowledged, a range of important and self-reflective questions opens up about what DH, as a field, understands good online pedagogy to be. This post attempts to pose – if not answer – some of these questions.

Most importantly, the COVID-19 crisis throws into relief the distinction between what we teach online (in DH, and everywhere else) and how we teach it. Flurries of discussion about the how of online education – the relative merits of Skype, MS Teams, Zoom, institutional VLE platforms – have proliferated. Against the background of crisis, our “how” has changed (literally) overnight, driven by the need to deliver what we had already promised to our students. Despite this, the creativity and innovation of DH has been much in evidence. It comes through in the always-excellent roundup of COVID-19 think-pieces and other contributions from Digital Humanities Now, here. I have seen many colleagues in DH rise magnificently to the challenges involved (examples abound in my own Department), and I have been truly inspired by the stories they have told me of compromise, improvisation, imagination, and the challenges of the digitalization of content and delivery. These are exactly the stories I have seen echoing all around academic trade and social media: we are most certainly, to employ another over-used phrase, all in this together.

However, in the longer term the question of what we teach online, and how this differs from in-person degree programmes will need to be addressed. What kinds of learning can best be imparted remotely? Thinking of this in terms of what, as well as how, allows us to think of online teaching in terms of its opportunities, and not just as a palliative for the pain that recovering from COVID-19 will cause us all (which we will have to address in other ways – that is another story entirely). This, I think, is really important. It will also take time, resources, effort and imagination beyond the teaching we already do, and the efforts that we have all made to salvage our existing teaching tasks.

We can begin by asking if it is even possible to deliver the same learning outcomes from our homes as we do from the lectern. Should we even try? If not, what should we be doing instead? These are fundamental questions that have been bubbling under the surface of DH pedagogy for years. Many current debates in the newer forms of DH embrace “the digital” as its own theoretical construct. They argue that “the digital” has its own modes of production and interpretation that are separate from (for example) printed materials or physical image media (this idea permeates much of our teaching and research in DDH at King’s, and one of our core aims remains to build and contribute to the global body of that theory, as driven by the humanities). It follows that “digital methods” should be seen as a body of methodology distinct from other types of method, particularly the discursive means used by humanities researchers to reach and understand the human record. If this is true, then we will have to accept that delivering “the digital” and “digital methods” online to students means that the fundamental building block of HE programmes, the learning outcome, will have to be re-thought for online delivery. What are learning outcomes even for in the digital age, when students are, as part of their everyday lives, connected with networks of knowledge, information, ways of doing things, cultures and economies that have only ever been “digital”?

Learning outcomes, defined as the skills and knowledge that a student has on completing a course that they did not have before, are inevitably tied to the types of material we teach. In the kind of humanities-driven learning of and about “the digital” that we pursue in DDH, the origins of such material may lie in the physical world (such as manuscripts, artworks, photographs etc) or the digital world (content created purely online). For reasons set out in more detail below, I believe that online teaching, in particular, gives us incredible opportunities to question this distinction in new ways, for all kinds of material in the Digital Humanities.

We can tease out these opportunities by taking a historical perspective: by looking back at assumptions which were common in the pre-digital world. In World Brain (1938), H. G. Wells predicted that in the future

[a]ny student, in any part of the world, will be able to sit with his projector in his own study at his or her convenience to examine any book, any document, in an exact replica.

This view of the world runs gloriously roughshod over any idea that “the digital” actually changes our interpretive relationship with our material in any way; rather, it asserts that an “exact replica” can be easily delivered to any student, anywhere. The medium will never be the message in such a world: rather it is value-free, lacking in any phenomenological significance, and contributing nothing to the interpretive process. If our studies of Digital Culture and its related fields have taught us anything, it is that this is manifestly not true. Of course digital transmission changes our perception and reception of cultural material. Try writing a tweet with a fountain pen and posting it through the mail, or opening a file saved in MS Word 95 today. The digital is a prism through which we see and experience the human record past and present, not a window. Online teaching needs to embrace this, and this is very much a matter of “what”, as well as “how”.

Therefore, the challenge for DH pedagogical theory and practice, as it approaches both the how and the what of online learning, is to construct new forms of learning outcome which enable students to embrace that prism: the teaching of digital methods, digital citizenship and digital ways of being, rather than just digital content conceived, as per Wells, as simply an exact replica of what we get in the library or the archive. There is much one could draw on from other DH discourses: for example, much is made in library and archive studies of the truth that preservation (e.g. the creation of exact replicas of content) is not sustainability (the ability to go on using those replicas in some way). I make no claim that this is a new idea in DH pedagogy (it is certainly very present in DDH), but it shows that DH has many rich and deep seams to draw on in understanding the key “how” versus “what” difference for online teaching (and research).

What follows is a set of areas which I think we need to consider when building learning outcomes for online courses. I do not purport to offer any answers here; these are merely very initial and high-level ideas to act as way-markers, to help kick off conversations that many of us will be having over the coming months and, probably, years. No doubt they will be changed, deleted, re-organized, re-ordered and added to; but for teaching which approaches “the digital” in a humanities-driven way – which, for me, is the essence of DH pedagogy – these represent the starting points as I see them.

Participation and placemaking

Teaching is not something that happens only in the classroom or the instructor’s office. As I point out in one of the early lectures of my Maps, Apps and the GeoWeb module, the classroom or lecture hall is a “place” that we all contribute to by the medium of our presence. It is more than walls, a floor and a ceiling; its function channels Heidegger’s Being of Dasein: of physical presence. Place is a human construct which we create collectively and socially through processes of actually being there; and, as in the world outside the academy, this has been disrupted by the digital.

In DH we have – slowly – learned to teach and develop bodies of theory with our students in the framework of “traditional” face-to-face teaching in the classroom and the lecture hall. Consequently, the act of speaking in, and to, a group in the same physical location is a staple of the traditional seminar. However, for many of our students, physical place has already been collapsed. The channels of Instagram, Twitter, Snapchat and the like may connect to the physical world through geolocation, but they “exist” aspatially. We must find ways of enabling synchronous contribution to online discourse which encourage critical reflection on the nature of that “place”, and which meaningfully separate formal educational channels from social ones.

Embracing asynchronous conversation

Closely related is the need to embrace asynchronicity, especially given that the seminar – small-group teaching of students co-located in time and place – is one of the key planks of humanities pedagogy. Like many of my colleagues, I make use of group work in seminars to maintain focus, but also to ensure that students who may be more introverted, and thus less inclined to contribute to a larger group discussion (despite the value of any contributions they have to make), feel able to contribute. We will need online mechanisms which facilitate such inclusivity, and which support not only one-to-many conversations but also many-to-many discussions. These in turn need to respect, and work alongside, and not impinge on, students’ existing many-to-many digital lives.

New kinds of assessment

The essay is as much an artefact of conventional teaching in the humanities as the seminar; however, the limitations of the essay format for assessing what students have learned, and how well they have learned it, have long been apparent in DH. Whilst essays will continue to have a role in assessing discursive understanding of core modules, there is a general assumption across the arts and humanities that assessment will always be by essay, unless there is a reason for it to be otherwise. In DH, I would suggest the opposite should be true, especially for online teaching: essay-based assessment should have to be justified by the impracticability of shorter, practice-focused evaluation. For example, one of the learning outcomes of my own optional module is:

[The student should] be able to demonstrate knowledge of fundamental web standards for geospatial data, with a primary focus on KML, but with a broader appreciation of how these standards relate to generic frameworks, including, most importantly, the World Geodetic System. They will also be able to discuss the limitations these impose on the expression of information in the digital humanities, and discourses built around it.

Currently my assumption is that this outcome will be assessed discursively, by a 4,000-word essay structured across 4-6 examples, or 4-6 arguments focused on a single case study. There is no reason at all why this assessment could not instead be broken down into 4-6 web-mounted exercises grounded in real-world problems and humanities materials (in the best of all worlds, students could be given a list of 10+ mini-cases to select from, and then explain the methodological links between them). I think this would, in any case, get them much closer to the technical core of the problem described.
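To make this concrete, here is a minimal sketch of what one such web-mounted exercise might involve, written in Python purely for illustration: the place name, description and coordinates below are invented examples, not material from the module itself. It generates a single valid KML placemark whose coordinates are, as the KML standard requires, expressed against WGS84 – exactly the constraint the learning outcome above asks students to interrogate.

```python
# A minimal sketch of a practice-focused exercise: generate one valid KML
# Placemark, then critique what the format can and cannot express.
# The location data below is an invented example.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def placemark_kml(name: str, description: str, lon: float, lat: float) -> str:
    """Return a minimal KML document containing one Placemark.

    KML coordinates are written longitude,latitude and are always
    interpreted against the WGS84 datum -- a constraint students can be
    asked to critique for historical or imprecisely-located material.
    """
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    ET.SubElement(pm, f"{{{KML_NS}}}description").text = description
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

# Hypothetical case: a single gazetteer-style entry.
print(placemark_kml("King's College London", "Strand campus", -0.1160, 51.5115))
```

An exercise built on something like this could then ask students to explain what such a single lon/lat point fails to capture about, say, a mediaeval itinerary or a contested boundary – tying the technical standard directly to the discursive limitations the learning outcome describes.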

The importance of Open Access and Open Data

The COVID-19 crisis has prompted many publishers and content providers to make coronavirus-related research materials that would otherwise have been paid for freely available: for example Cambridge University Press, Wiley, and Taylor and Francis. This is excellent news for sure, but we need to seize this opportunity to think in more detail about the place of Open Access and Open Data in our research and teaching.

In theory, of course, online teaching can continue to be done behind institutional VPNs, Shibboleth and Athens authentication, and publisher subscriptions; although some such resources are not available to students accessing content from certain regulatory regimes, which is another key factor. A move to online teaching must include promoting critical assessment of, and reflection on, open data and open resources: extending the principle of encouraging students to explore further reading in trusted environments (i.e. libraries) to the “wild west” of the WWW. Teaching that happens in the online “place” (see the first point above) must include methodological skill-building in how the features of that “place” – datasets, articles inside and outside peer review, formal and informal research outputs, content produced by other students – function, and how they can best be evaluated and navigated.

To conclude: the what and the how of online teaching are the axes on which all these considerations need to be plotted. Reconciling them will require resources, imaginative thinking, the range of theories, ideas and approaches that DH has been experimenting with for years already and, above all, skillful and creative people to put them into practice. In all these things, I think DH has a good start.

Unintended consequences: GPS and digital creativity

The story of the Global Positioning System (GPS), like all phenomenal technological success stories, is one of unintended consequences. GPS is now with us everywhere. It guides our driving and finds us the nearest restaurant. It aids mountain rescuers, and it helps ships stay on course. The digital world would be a very different, and less interactive, place without it. A phenomenal success story it certainly is. But its origins are more complex.

Also like most phenomenal tech successes, GPS was not born of any Eureka moment. Rather it came from a hotchpotch of technology, politics, fear and expedience. The launch of the Soviet Sputnik satellite on 4th October 1957 was a point of enormous disruption for the West in the Cold War. Not only had the Soviet Union pulled ahead in the space race, it was now clear that it had the rocket technology to strike the US homeland. Western experts sprang into action. Scientists at the Johns Hopkins Applied Physics Laboratory found they could track Sputnik’s location using Doppler Effect principles, whereby waves vary in frequency according to the direction and velocity of the object emitting them (imagine standing by a long straight road as a car passes at 80 mph: the pitch of its engine rises as it approaches and falls as it recedes). This in turn gave traction to the inverse method: if a satellite’s position is known, an Earth-bound receiver can use its signals to fix its own position precisely. For years this locative capacity was tightly guarded by the US military: under the policy of “Selective Availability”, the signal which allowed receivers to pinpoint their positions was deliberately degraded for all but military users, so that its accuracy was all but useless except for the coarsest navigational purposes.
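For readers unfamiliar with the principle, the sketch below is a toy illustration only: it works in two dimensions with invented figures, whereas real GPS receivers solve in three dimensions and must also estimate their own clock bias using a fourth satellite. The geometry, though, is the same: known anchor positions plus measured ranges yield a position fix.

```python
# A toy two-dimensional illustration of satellite positioning: known anchor
# positions and measured distances are combined to locate an unknown point.
# All figures are invented; real GPS adds a third dimension and clock bias.
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Locate a point from known anchor positions and measured distances.

    Subtracting the circle equation of the first anchor from each of the
    others yields a linear system in (x, y), solved here by least squares.
    """
    (x1, y1), r1 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return solution  # estimated (x, y)

# Three "satellites" at known positions; ranges measured to an unknown point.
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_point = np.array([30.0, 40.0])
ranges = [np.hypot(*(true_point - np.array(a))) for a in anchors]
print(trilaterate_2d(anchors, ranges))  # ~ [30. 40.]
```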

It took another Cold War flash-point to shift this thinking. On September 1st 1983, Soviet air defence mistakenly shot down Korean Airlines flight 007 to Seoul, with the loss of 269 lives, after the aircraft strayed into Soviet airspace and was taken for a hostile intruder. This brought the need for real-time geolocation into sharp focus. In the grief and outrage that followed, then-President Ronald Reagan – amid the blood-curdling threats of retribution flying between East and West – accelerated the process of making GPS available for civilian use. And then on May 1st 2000, long after the dust from the falling Berlin Wall had settled, his successor (bar one) Bill Clinton directed that Selective Availability be switched off. Combined with the World Wide Web (invented 11 years earlier), this action helped set the Internet on its path from our desktops to our pockets.

[Graph: the moment Selective Availability ended on the night of 1st–2nd May 2000. Image from https://www.gps.gov/systems/gps/modernization/sa/data/]

We are marking this event’s 20th anniversary in DDH with a series of events. Last week, we hosted a quadruple-headed set of talks by myself, Cristina Kiminami and Claire Reddleman of DDH, and the GPS artist Jeremy Wood, who presented an overview of how he uses GPS receivers to craft linear sculptures of human motion through the world. In his words: “[W]e are the data. We are the map.” This raises a whole set of practice-led research questions for Digital Humanities: how GPS helps us explore “annotative” approaches to the world, where movement can be captured and imbued with further meaning by associating and linking further information with it, versus “phenomenological” approaches, which stress the subjective lived experience of creating a trace (Cristina’s work); and how an entire place, such as a penal colony, can be reproduced (Claire’s work).

Jeremy Wood presenting at “Walking with GPS”

The work we all presented last week showcased how a technology born of the unintended consequences of the struggles, crises and flashpoints of the last century’s Cold War now opens up new ways of exploring relationships between the human/physical and the digital/ephemeral. We look forward to exploring these questions further throughout the 20th anniversary of the end of Selective Availability. Next will be a conference on May 1st – the actual anniversary of Clinton’s order – organized by Claire and Mike Duggan of DDH, entitled “20 years of seeing with GPS: perspectives and future directions”. This conference will surely have a rich seam of theory and practice to build on.

Digital Humanities: a Department, a Field and an Idea

Introduction: what is DH, and does anyone care?

There is a whole genre of writing out there on the subject of “What is Digital Humanities?”. For some, this is an existential question, fundamental to the basis of research, teaching and the environment of those parts of the academy which exist between computing and the humanities. For others, it is a semantic curiosity, part of an evolution of terminology from “computing in the humanities” to “humanities computing”, finally arriving at “digital humanities” when the instrumentalist implications of the first two no longer encompassed the field of activities described. For others still, it is a relic of 1990s angst over terminology as computing began to permeate the academic environment. Whichever camp one is in, it behoves people, like me, with Digital Humanities in their job title to revisit the question from time to time. This post is an attempt to do so, with a particular emphasis on the Department of Digital Humanities (DDH) at King’s College London. The strap-line of the present-day DDH is “critical inquiry with and about the digital”. In what follows, I hope to unpack what I think this means for the field, and for DDH, which has been my institutional home since 2006. Those fourteen years have seen immense changes, both in the Department and in the field of Digital Humanities (hereafter DH) more broadly. Furthermore, tomorrow (1st February) marks six months since I took over as Head of Department of DDH. This, then, seems as good a time as any for some autobiographically driven reflection. I state, of course, the usual disclaimers. Like any healthy academic environment, (D)DH is marked by a diversity of views, a diversity we pride ourselves on embracing and celebrating; and despite being Head of Department, I speak only for myself, in a very personal capacity. Also, any errors of fact or interpretation in what follows are mine and mine alone.

Before I arrived at King’s, I worked for the AHRC’s ICT in Arts and Humanities Initiative at the University of Reading (to the great credit of Reading’s web support services, the AHRC ICT programme’s web pages, complete with the quintessentially 1990s banner I designed, are still available). At the time, I was no doubt suffering a colossal intellectual hangover from my efforts to apply GIS to Bronze Age Aegean volcanic tephrochronology and its archaeological/cultural contexts, and this may have coloured my view of things; but the purpose of this programme was to scope how computing might change the landscape of the humanities, and to funnel public money accordingly. This is the kind of thing that the National Endowment for the Humanities in the US has done, to great acclaim, with its Office of Digital Humanities.

What I was not, at this point, was any kind of Digital Humanist. Working outside Digital Humanities/Humanities Computing (both appellations have been used over time, but that is another story still), I recall some push-back against the application of digital methods and “e-infrastructure” from some of those less engaged with technology in their work, who were concerned about reductivism, and the suborning of discursive curiosity to the tyranny of calculation. I particularly recall the debates about GIS in archaeology: GIS, we were told, encouraged over-quantification and processualism, thus stifling discursive, human-centred interpretation of the past. That, at least, is how I remember the landscape when I came to work for the AHRC programme.

Keeping this in mind, let us return to the “what is DH” question. This has become more nuanced and more complex as the “Information Age” has spread and developed over the last thirty years or so. It is well worth remembering that in the last 20 years (at least), “the Digital” has impacted on “the Humanities” in myriad ways, far beyond the circle of those who self-identify as Digital Humanists (even recognizing that this is a highly permeable circle in the first place). For many, “the Digital” was once a convenient method of sending messages supported by university communication networks; it eventually gave way to suites of tools, and associated methods, which provoked new questions about the approach, methodology and even purpose of what we were doing. Historically, many of these questions were (and are) reflected in the preoccupations of wider society as “the Digital” seeped into the praxes of everyday life. Much of the debate in the bits of academia I inhabited in the early 2000s was couched in terms of if, or how, digital technology would enable research to be done faster, more efficiently and over ever larger distances. There were even questions of computers taking over human roles and functions: perish that thought now. Taking an historical view provides a bigger and contingent picture of this: my own generation was raised in the 1980s on movies such as Terminator, Tron, Lawnmower Man and War Games – scenarios, sometimes dystopian, in which semi-sentient machines take over the world. I have long argued to my students that it is no coincidence that the rise of “Internetworking”, and the communication protocols that enabled it, including Tim Berners-Lee’s invention of the WWW in 1989, coincided with a genre of Hollywood movies about computers becoming better at intelligence than humans.

Enter DDH

The series of intellectual processes which led to the field of DH as we know it today thus unfolded against a backdrop of great change in society and culture, driven by computing technology. I come to the shape this gave, more specifically, to precursors of DH such as “humanities computing” a little later. But these were, and are, certainly factors which have shaped the development of DH at King’s. Across its several and various incarnations, “Humanities Computing” at King’s goes back to the early 1970s. There are traces of these times in the fabric of the environment today. If one walks from the present-day Department’s main premises on the 3rd floor of KCL’s Strand Building, down the main second floor corridor of the King’s Building towards the refectory, then in the bookshelves on the left hand side – amid Sir Lawrence Freedman’s library on the history of war and a small display of Sir Charles Wheatstone’s scientific instruments – is a collection of volumes and conference proceedings that originate from CCH/DDH’s early activities (picture below).

DDH came into being, by that name, in 2010. Prior to that, it was known as the “Centre for Computing in the Humanities” (CCH), which was established as an academic department in its own right in 2002. Harold Short, formerly Director of CCH and the first Head of Department of DDH, wrote that:

The Centre for Computing in the Humanities (CCH) at King’s College London is a teaching department and a research centre in the College’s School of Humanities. Its current status was reached during the course of the 2001-2002 academic year, and may be seen as the natural outcome of a process that began in the 1970s.  ‘Humanities computing’ began at King’s in the early 1970s, with Computing Centre staff assisting humanities academics to generate concordances and create thesaurus listings. The arrival of Roy Wisbey as Professor of German gave the activity a particular boost. Wisbey had started the Centre for Literary and Linguistic Computing while at Cambridge (a Centre which is still in existence, with John Dawson as its Director). In 1973 the inaugural meeting of the Association for Literary and Linguistic Computing (ALLC) was held at King’s, and Wisbey was elected as its first Secretary. Although Wisbey did not feel it necessary to create a specific humanities computing center, he was Vice-Principal of the College in the late 1980s when a series of institutional mergers gave him the chance to propose the formation of a ‘Humanities and Information Management’ group in the restructured Computing Centre.

[Harold Short, July 2002 – reproduced with his permission]

My central argument is that DDH’s story is one of evolution, development, and even – perhaps contrary to some appearances – continuity. This impression is driven by the kinds of question that we have always asked at King’s. When I joined the Department fourteen years ago, two questions seemed far more interesting than “what is Digital Humanities”: what is “the digital” for in the humanities, and how has “the digital” changed through time? The last fifteen years have been a key period for “humanities done with the Digital”. Digital tools allow humanists to interrogate data more deeply, more thoroughly, and with greater attention to the nuance between qualitative and quantitative data (a distinction far less grounded in the humanities than in the social sciences). Those questions are as interesting now as they were then; and they have only become more acute as the wider landscapes of technology, the humanities, and connective research have changed.

To text or not to text

When I joined King’s, perhaps even before, one of the first things I learned about the heritage (with a small “h”) of Humanities Computing/Computing in the Humanities/Digital Humanities is that, particularly in the US, it had a long history of engagement with the world of text, and that its natural home there, where it had one, lay in university English departments. Text was certainly a low-hanging fruit for the kind of qualitative and quantitative research that computing enabled. Many DH scholars trace the origins of the field to the life and work of Roberto Busa (1913–2011), the Jesuit priest whose scholarship in the 1950s on lemmatizing the writings of Thomas Aquinas resulted in the Index Thomisticus, produced using punch-card programming, and widely regarded as the first major application of “Computing in the Humanities”. In the context of 1950s computing, the project was enabled by the epistemological inclination of text to lend itself to calculation: text, as a formal system of recording “data”, which is convertible into information by its organization into sentences and paragraphs, and thence into knowledge by the process of reading, can be (fairly) unproblematically transferred to punch cards, then the principal form of storing data. (This was a vast human undertaking, drawing on the labour of many skilled and unskilled operatives, many of them women, whose stories are now being re-told by the work of Melissa Terras and Julianne Nyhan.) Matthew Kirschenbaum takes up this argument:

First, after numeric input, text has been by far the most tractable data type for computers to manipulate. Unlike images, audio, video, and so on, there is a long tradition of text based data processing that was within the capabilities of even some of the earliest computer systems and that has for decades fed research in fields like stylistics, linguistics, and author attribution studies, all heavily associated with English departments.

[Kirschenbaum, Matthew G. “What is digital humanities and what’s it doing in English departments?”, in M. Terras, J. Nyhan and E. Vanhoutte (eds.) 2016: Defining Digital Humanities. Routledge: 213.]

CCH did a lot of work on text, but it did many other things besides. Even before I joined, I remember seeing CCH as a dynamic crucible of new thinking, a forum in which classicists, archaeologists, textual scholars, literary researchers, visualization experts, theatre academics, and more, could come together and speak some kind of common language about what they did, and especially about how they developed, critiqued and used digital tools. This was a view shared by much of the rest of the world. Most visibly to me, it was recognized by the AHRC’s award of the grant that enabled me to come to King’s, the AHRC ICT Methods Network, at that time the biggest single award the AHRC (and its predecessor, the Arts and Humanities Research Board) had ever made. It was a revelation to me that such a place could even exist. It was certainly nothing like the environment I had known as a lonely GIS/Archaeology PhD researcher, working in a Department full of experts on Plato. It was therefore with great excitement that I joined CCH in January 2006, as a research associate in the Arts and Humanities E-Science Support Centre, which was attached to the Methods Network (what AHeSSC was and what it did is another long story, which is partly told in a much earlier post on this blog). It was just as exciting an environment to work in as it had looked from the outside.

Connective research

A delve into the Centre’s public communications at this time shows how this collaborative spirit brought it to re-think what the humanities might mean in the Information Age. The Internet Archive’s Wayback Machine (a venerable resource now, having been launched back in 2001, and an indispensable tool for much research on the history of the Web) took a snapshot of CCH’s website on Saturday 14th January 2006, two days before my first day at work there. What it says about the Centre’s role seems to confirm the recollection above, that CCH was primarily an agent of collaborative research:

The Centre for Computing in the Humanities (CCH) is in the School of Humanities at King’s College London. The primary objective of the CCH is to foster awareness, understanding and skill in the scholarly applications of computing. It operates in three main areas: as a department with responsibility for its own academic programme; as a research centre promoting the appropriate application of computing in humanities research; and as a unit providing collegial support to its sister departments in the School of Humanities. As a research centre, CCH is a member of the Humanities Research Centres, the School’s umbrella grouping of its research activities with a specifically inter-disciplinary focus.

Despite the emphasis on collaborative research, one can see from this that CCH was also a place that did, very much, its own original thinking, grounded in the methods and thinking that were driving humanities computing at the time (see above). We can get a flavour of this by looking at the titles of the seminars it ran, which are still there for all to see in Wayback: “Choice in the digital domain – will copyright extend or stifle choice?”; “Adventures in Space and Time: Spatial and Temporal Information in the Digital Humanities”, “From hypermedia Georgian Cities to VR Jazz Age Montmartre: hyperlinks or seamlessness?”, and “The historian as aesthete: One scenario for the future of history and computing”. Some of these titles would not feel at all out of place in the seminar series of the DDH of today. Therefore, while the field, and the Department, have both (of course) changed significantly over the years, this suggests that there are some threads of continuity, as well as evolution, running through: innovation, responses to the challenges and opportunities of the digital, thinking through new approaches to the human record; and indeed what the “human record” is in the digital age. This, certainly, accounts for the first of the areas indicated above, in which CCH emerged as a department responsible for its own academic programme. What, I think, has changed most, is how the department collaborates.

The late 1990s/early 2000s were a time of great change and innovation in DH, both technologically and institutionally. A 2011 Ithaka research report noted that

In 2009 the Department of Digital Humanities (DDH), formerly known as the Centre for Computing in the Humanities (CCH), presented the model of a successful cross-disciplinary collective of digital practitioners engaged in teaching and research, with knowledge transfer activities and a significant number of research grants contributing to its ongoing revenue plan.

This highlights the fact that much of the Centre’s activity depended on income from externally funded research projects. Ground-breaking collaborations with which CCH was involved, and which in many cases it led, still resonate: the Henry III Fine Rolls, projects in musicology, and the Prosopography of Anglo-Saxon England all produced world-class original research, which contributed to a variety of areas, some directly in the associated humanities domains, some in CCH itself. Interdisciplinary collaboration was the lifeblood of the Centre at this time.

It’s all about the method

One thread of continuity is what I would call “methodological emplacement”. That is to say, CCH/DDH has always had an emphasis on what it means to do Digital Humanities, the practice of method as well as the implementation of theory (whose theory is a key question). This, in itself, challenges “conventional” views of the humanities, inherently rooted in the theory and epistemology of particular ways of looking at the world. Among other things, it results in a willingness to deconstruct the significance of the research output – the monograph, the journal article, the chapter in the august collection. Providing a space to think beyond this, to consider the process, the method, and other kinds of output, in itself lends itself to interdisciplinary digital work. The three strands of the mid-2000s CCH outlined above surely constituted such a space; and providing that space is surely still a concern at the core of the present-day DDH.

In a forthcoming handbook volume on research methods for the Digital Humanities that we are co-editing, my DDH colleague Dr Kristen Schuster and I develop the idea of methodology drawing on multiple theories:

The fact that each concept illustrates a matter of process rather than output from different perspectives is, we argue, telling of how badly we need to discuss research methods at large instead of research outputs. Considering research as a process, rather than an amorphous mass of activity behind a scholarly output, makes room for identifying crosscurrents in theories, platforms, infrastructures and media used by academics and practitioners – both in and beyond the humanities.

One can hardly expect a field which is concerned with an academic research programme in digital methods and their “appropriate application” in the humanities not to change, and to expand the theoretical basis that underpins it, as the Information Age gallops on. My personal “year zero”, 2006, was only two years after Facebook was founded (February 2004), two months before the launch of Twitter (March 2006), a year after Google branched into mapping, and two years after the first widely-used adoption of the term “Web 2.0”. In our 2017 book, Academic Crowdsourcing in the Humanities: Crowds, Communities and Co-production, my colleague Mark Hedges and I argue that the mid-2000s were a transformative period in networked interactivity online, the time when movements like internet-enabled crowdsourcing have their origins (the word “crowdsourcing” itself was coined in 2006, in Wired magazine): DH was transformed, just like everything else.

Change in DH here, and everywhere else, continued apace. A key moment of the Department’s more recent history was 2015, when the King’s Digital Lab was established, providing an environment for much of the developer and analyst expertise that had previously resided in DDH, and before that CCH. Today the two centres have a closely symbiotic relationship, with KDL establishing a ground-breaking new agenda in the emerging field of Research Software Engineering for the humanities. This is far more than simply a new way of doing software engineering: rather, KDL’s work is providing new critical insights into the social and collaborative processes that underpin excellent DH research and teaching, establishing new ways of building both technology and method. The creation of KDL also underlines the fact that “DH at King’s” is no longer the preserve of a single Department or centre; rather DH is now a field which is receiving investment of time, energy, ideas and, yes, money across the institution.

With and About the Digital: from Busa to Facebook

Just as Busa’s work in the 1950s opened up text to new forms of interrogation by transferring it to the medium of punch cards, where it could be automatically concordanced, so these developments open up the human cultural record, including its more recent manifestations, to new kinds of interrogation and analysis. I do not think this should be a particularly controversial view. After all, my former employer, the AHRC, fully embraced this post-millennial epistemological shift in the humanities very explicitly with its “Beyond Text” strategic initiative, which ran from 2007 to 2012. This was described as “a strategic programme to generate new understandings of, and research into, the impact and significance of the way we communicate”, a response to the “increased movement and cross-fertilization between countries and cultures, and the acceleration of global communications”. The reality that the humanities themselves were changing in the face of a newly technological society is writ large.

This truth is not changing. In the eight years since the Beyond Text initiative finished, global communications, and the kinds of digital culture and society they enable have grown more complex, more pervasive and less subject to the control of any individual human authority or agency (with the possible exception of the Silicon Valley multinational giants) and, with the emergence of phenomena such as Fake News, ever more problematic. The digitalization (as opposed to digitization) of culture, and heritage, and politics, and communications – all the things it means to be human – has opened up new arrays of research questions and subjects, just as the digitalization of text did in the twentieth century. To put it another way, the expansion of “the Digital” has given DH the space to evolve. I believe it must embrace this change, while at the same time retaining and enriching the humanities-driven critical groundwork upon which it has always rested. 

Alan Turing himself said that “being digital should be of more interest than being electronic”. And so it has always been at DDH; Digital Humanists have always known this. The present strapline of the Department of Digital Humanities is “critical inquiry with and about the digital”. The prepositions “with” and “about” provide space for a multivocal approach, which includes both the work DDH (and CCH before it) has excelled at in the past, and that which it does now. Crucially, in my view, this enables them to learn from one another. “Critical research with the digital” is, I would argue, exactly what Busa did; it is exactly what the English Laws, Prosopography and Fine Rolls projects are. At the same time, “critical research about the digital” recognises the reality that “the digital” itself has become a subject of research: the elements of society and culture (increasingly all of them, at least in the West) which are mediated by digital technology and environments. As I finish my first six months as Head and look to the next, I want to see the Department continue to be a space which enables the co-equality of “with” and “about”.

Red Posts

For a number of years, I’ve driven from Berkshire (and before that, was driven as a child from London) to Dorset, south west England, to visit various family members. An intriguing feature of the drive is the “Red Post”, at Bloxworth, near Wareham. It is an otherwise ordinary rural finger-post but painted red, with the placenames picked out in white lettering.

There are other such posts dotted around Dorset and the south west of England (possibly elsewhere as well), including one on the road between Beaminster and Evershot which, when I went that way last year, had been vandalized (below).

One theory I have heard is that they mark the sites of gibbets/gallows, although I haven’t been able to find any documentary verification of this. It also seems a bit unlikely and impractical to bring felons to the middle of nowhere to execute them.

Some are marked with the legend “Red Post” on the nineteenth-century OS six-inch series maps, but some (including the Bloxworth post above) are not. I’ve also come across a general public Act of Parliament of 1753, which begins:

A BILL for Repairing, Amending, and Widening the ſeveral roads leading from the Red-Poſt, in the Pariſh of Fivehead, where the Taunton Turnpike ends, through the Pariſh of Curry Rivell, the Towns of Langport and Somerton, to Butwell … etc

This suggests some considerable antiquity to the phenomenon; and that in the eighteenth century, Red Posts were well known and visible as waymarkers. So I am digging a bit more. I have a few ideas, and I hope that 2020 will bring a slightly more formal publication on the matter, on one platform or another.

This is a plea, in the meantime: if anyone familiar with the English countryside knows anything about its Red Posts/Poſts, then I’d love to hear from you…

A History of Place 3: Dead Trees and Digital Content

The stated aim of this series of posts is to reflect on what it means to write a book in the Digital Humanities. This is not a subject one can address without discussing how digital content and paper publication can work together. I need to say at the outset that A History of Place does not have any digital content per se. Therefore, what follows is a more general reflection on what seems to be going on at the moment, perhaps framing what I’d like to do for my next book.

It is hardly a secret that the world of academic publication is not particularly well set up for the publication of digital research data. Of course the “prevailing wind” in these waters is the need for high-quality publications to secure scholarly reputation, and with it the keys to the kingdom of job security, tenure and promotion. As long as DH happens in universities, the need to publish in order to be tenured and promoted is not going to go away. There is also the symbiotically related need to satisfy the metrics imposed by governments and funding agencies. In the UK, for example, the upcoming Research Excellence Framework exercise explicitly sets out to encourage (ethically grounded) Open Access publication, but this does nothing to problematize the distinction, which is particularly acute in DH, between peer-reviewed research outputs (which can be digital or analogue) and research data, which is perforce digital only. Yet research data publication is a fundamental intellectual requirement for many DH projects and practitioners. There is therefore a paradox of sorts, a set of shifting and, at times, conflicting motivations and considerations, with which those contemplating such publication are faced.

It seems that journals and publishers are responding to this paradox in two ways. The first is to facilitate the publication of traditional articles online, albeit short ones, which draw on research datasets deposited elsewhere, and to require certain minimum standards of preservation, access and longevity. Ubiquity Press’s Journal of Open Archaeological Data, as the name suggests, follows this model. It describes its practice thus:

JOAD publishes data papers, which do not contain research results but rather a concise description of a dataset, and where to find it. Papers will only be accepted for datasets that authors agree to make freely available in a public repository. This means that they have been deposited in a data repository under an open licence (such as a Creative Commons Zero licence), and are therefore freely available to anyone with an internet connection, anywhere in the world.

In order to be accepted, the “data paper” must reference a dataset which has been accepted for accession in one of 11 “recommended repositories”, including, for example, the Archaeology Data Service and Open Context. It recommends that more conventional research papers then reference the data paper.

The second response is more monolithic: the publisher itself takes on the data produced by or for the publication, and hosts/mounts it online. One early adopter of this model is Stanford University Press’s digital scholarship project, which seeks to

[A]dvance a publishing process that helps authors develop their concept (in both content and form) and reach their market effectively to confer the same level of academic credibility on digital projects as print books receive.

In 2014, when I spent a period at Stanford’s Center for Spatial and Textual Analysis, I was privileged to meet Nicolas Bauch, who was working on SUP’s first project of this type, Enchanting the Desert. This wonderful publication presents and discusses the photographic archive of Henry Peabody, who visited the Grand Canyon in 1879 and produced a series of landscape photographs. Bauch’s work enriches the presentation and context of these photographs by showing them alongside viewsheds of the Grand Canyon from the points where they were taken, thus providing a landscape-level picture of what Peabody himself would have perceived.

However, meeting the mission SUP sets out in the passage quoted above requires significant resources, effort and institutional commitment over the longer term. It also depends on the preservation not only of the data (which JOAD achieves by linking to trusted repositories), but also of the software which keeps the data accessible and usable. This in turn presents the problem encapsulated rather nicely in the observation that data ages like a fine wine, whereas software applications age like fish (much as I wish I could claim to be the source of this comparison, I’m afraid I can’t). This is also the case where a book (or thesis) produces data which in turn depends on a specialized third-party application. Good examples would be 3D visualization files that need Unity or Blender, or GIS shapefiles which need ESRI tools. These data will only be useful as long as those applications are supported.

My advice, therefore, to anyone contemplating such a publication – which potentially includes advice to my future self – is to go for pragmatism. Bearing in mind the truism about wine and fish, and software dependency, it probably makes sense to pare down the functional aspect of any digital output, and focus on the representational, i.e. the data itself. Ideally, I think, one would go down the JOAD route, and deposit one’s data in a trusted repository which has the professional skills and resources to keep it available. Or, if you are lucky enough to work for an enlightened and forward-thinking Higher Education Institution, a better option still would be to have its IT infrastructure services accession, publish and maintain your data, so that it can be cross-referred with your paper book which, in a wonderfully “circle of life” sort of way, will contribute to the HEI’s own academic standing and reputation.

One absolutely key piece of advice – probably one of the few aspects of this, in fact, that anyone involved in such a process would agree on – is that any Uniform Resource Identifiers (URIs) you use must be reliably persistent. This was the approach we adopted in the Heritage Gazetteer of Cyprus project, one of whose main aims was to provide a structure for URI references to toponyms that was both consistent and persistent, and thus citable – as my colleague Tassos Pappacostas demonstrated in his online Inventory of Byzantine Churches on Cyprus, published alongside the HGC precisely to demonstrate the utility of persistent URIs for referencing. As I argue in Chapter 7 of A History of Place, in fact, developing resources which promote the “citability” of place, and which link the flexibility of spatial web annotations with the academic authority of formal gazetteer and library structures, is one of the key challenges for the spatial humanities itself.
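To make the point about structure concrete, here is a deliberately hypothetical sketch in Python. This is not the HGC’s actual URI scheme; it simply illustrates the properties a citable identifier needs: a stable base that an institution guarantees, an opaque record number, and nothing technology-specific that will break when the underlying platform changes.

```python
# A hypothetical sketch of a citable toponym URI scheme. This is NOT the
# Heritage Gazetteer of Cyprus's actual scheme; the domain and id format
# are invented to illustrate what persistence requires: a stable base,
# an opaque numeric id, and no .php extensions or session parameters.
BASE = "https://gazetteer.example.org"  # assumed stable and institutionally guaranteed

def toponym_uri(numeric_id: int) -> str:
    """Mint a persistent, citable URI for a place record."""
    return f"{BASE}/place/{numeric_id:06d}"

print(toponym_uri(142))  # https://gazetteer.example.org/place/000142
```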

I do feel that one further piece of advice needs a mention, especially when citing web pages rather than data: ensure the page is archived using the Internet Archive’s Wayback Machine, then cite the Wayback link, as advocated earlier this year here. This is very sound advice, as it will ensure persistence even if the original website itself disappears.
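Conveniently, this is easy to operationalize, because the Wayback Machine’s replay URLs follow a predictable pattern of the form https://web.archive.org/web/<timestamp>/<url>. The sketch below assembles such a citation link; the page and capture time are invented examples.

```python
# Build a Wayback Machine replay URL for citation purposes.
# The target page and capture timestamp below are invented examples.
from datetime import datetime, timezone

def wayback_citation(url: str, captured: datetime) -> str:
    """Return the Wayback Machine replay URL for a snapshot of `url`."""
    stamp = captured.strftime("%Y%m%d%H%M%S")
    return f"https://web.archive.org/web/{stamp}/{url}"

snapshot = datetime(2019, 6, 1, 12, 0, 0, tzinfo=timezone.utc)  # hypothetical capture
print(wayback_citation("https://example.org/article", snapshot))
# -> https://web.archive.org/web/20190601120000/https://example.org/article
```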

Returning to the publication of data alongside a print publication, however: the minimum one can do is simply purchase a domain name and publish the data oneself, alongside the book. This greatly reduces the risk of obsolescence, keeps you in control, and recognizes the fact that, by their very nature, books start to date the moment they are published.

All these approaches require a certain amount of critical reduction of the idea that publishing a book is a railway buffer marking the conclusion of a major part of one’s career. Remember – especially if you are early career – that this will not be the last thing you ever publish, digitally or otherwise. Until bells-and-whistles hybrid digital/paper publishing models arrive, it is necessary to remember that there are all sorts of ways data can be preserved, sustained and made to form a valuable part of a “traditional” monograph. The main thing for your own monograph is to find the one that fits; and it may be that you have to face down the norms and expectations of the traditional academic monograph, and settle for something that works, as opposed to something that is perfect.

New posts at DDH

 

King’s College London is recruiting Lecturers and a Senior Lecturer in the Department of Digital Humanities. Lecturers are the UK equivalent of Assistant Professors and Senior Lecturers correspond to Associate Professors in the US system.

King’s College London is in the fourth year of making a significant investment in the Department of Digital Humanities as part of an ambitious programme of growth and expansion in existing and emergent research areas and student numbers across its five Master’s programmes and the BA Digital Culture.

King’s College London has a long tradition of research in the Digital Humanities, going back to the early 1990s. King’s is one of the few places in the world where students at all levels can pursue a wide range of inter-disciplinary study of the digital (https://www.kcl.ac.uk/ddh/about/about).

We are seeking to recruit exceptional candidates to join the Department no later than 1st September, who can enthuse and inspire our students, conduct world-leading research, and contribute to the life and reputation of the Department through academic leadership and public engagement.

We are hiring for

Closing date: 28 April 2019

Lecturer candidates will be on their way to becoming scholars of international standing with a research and publication trajectory that illustrates this ambition. They will contribute to the further development of the Department’s research strengths, provide high-quality teaching and supervision, and work collaboratively within the Department and beyond.

Senior Lecturer candidates will be scholars of international standing with a strong research and publication record and evidence of or potential for research income generation. The successful applicants will play a key role in leading work across the Department to enhance our research strengths, to develop new and emergent research areas, to innovate in teaching practice and pedagogy, and to contribute to our underpinning values of co-research and collaboration.

A History of Place 2: Indexing

I opted to compile the index of A History of Place myself. I made this choice for various reasons, but the main one was that the index seemed to me to be an important part of the volume’s framing and presentation. Reflecting on this, it seems a little ironic, as in some ways a book’s index exemplifies the age of the pre-digital publication. Using someone’s pre-decided terms to navigate a text is antithetical to the expectations and practices of our Googleized society. Let’s face it, no one reading the e-version of A History of Place is ever going to use the index, and in some ways compiling the index manually, reviewing the manuscript and linking key words with numbers which would, in due course, correspond with dead-tree pages felt almost like a subversive act.

But like an expertly curated library catalogue, an expertly compiled index is an articulation of a work’s structure, and requires a set of decisions that are more complex than they may at first seem. These must consider the expectations and needs of your readers, and at the same time reflect, as accurately as possible, the current terminologies of your field. The process of indexing gives one a chance (forces one, in fact) to reflect – albeit in a bit of a hurry – on the key categories, terminology and labels that one and one’s peers use to describe what they do. It thus makes one think about what terms mean, and which are important – both to one’s own work, and to the community more broadly (some of whom might even read the book).

There is also the importance of having a reliable structure. As I outline in the book itself, and have written elsewhere in relation to crowdsourcing, some have argued that using collaborative (or crowdsourced) methods to tag library catalogues for the purposes of searching and information retrieval disconnects scholarly communities from the ‘gatekeepers of the cultural record’, which undermines the very idea of the academic source itself (Cole & Hackett, 2010: 112–23) [1]. Cole and Hackett go on to highlight the distinction between “search” and “research”, whereby the former offers a flat and acritical way into a resource (or collection of resources) based on user-defined keywords, whereas the latter offers a curated and grounded “map” of the resource. While, in this context, Cole and Hackett were talking about library catalogues, exactly the same principle applies to book indices.

I don’t wish to overthink what remains, after all, a rather unglamorous part of the writing process; but even in the digital age, the index continues to matter. That said, there is no shame at all in busy academics (or any other writers) delegating the task of compiling an index to a student or contract worker, provided of course that person is fully and properly paid for their efforts, and not exploited. But I think it is necessary to have a conversation with that person about strategy and decision making. What follows are some examples from A History of Place which exemplify issues authors might wish to consider when approaching their index, and/or discussing with their indexer. In discussing these examples, I try to explain which terms and sub-terms I decided to include, and why.

To begin with the practicalities, the wise advice provided by Routledge was:

You don’t have to wait for the numbered page proofs of your book to arrive – start to think about entries when you have completed the final draft of your typescript. The index is always the last part of the book to be put together and submission of your final copy will be subject to a tight deadline. Preparing it now may save you time later on. [emphasis added]

I would suggest that it is a good idea to think about these even before the numbered proofs turn up.

And then

On receipt of [numbered proofs], you should return to your already-prepared list of words. Use the numbered proofs to go through your book chapter by chapter and insert the page numbers against each entry on your list. (You can use the ‘Find’ function to locate words within the proof PDF.)

The gap between compiling your original list and adding page numbers will help you to evaluate your designated entries once more. Have you missed anything obvious? Are your cross references accurate and relevant? Revisit the questions under the heading ‘Choice of Entry’.

When you are satisfied that your index is complete, put it into alphabetical order.

You will come to love proper names in the early stages of this process. There is, for example, only one way to represent Abraham Ortelius or Tim Berners-Lee in your index, and no decisions are involved in defining the page ranges for the references to them.

However, the process of selecting abstract terms for inclusion is more challenging. There were arguments both for and against including the word “Bias”, for example. All maps are biased, of course, and in theory this could have applied to most of the examples I discuss. However, bias forms an important topic of much recent literature on neogeography (for example), which addresses the ways in which neogeographic platforms perpetuate social bias due to their demographics (mostly white, male, Western etc.). Therefore, inclusion made sense, as it referenced explicit discussion of bias in the secondary literature (mostly in the chapter on neogeography). It was possible to connect this to “collective bias” via the cross-referencing option of “see also”, of which Routledge advises:

  • See

If the entry is purely a cross reference, the entry is followed by a single space, the word ‘see’ in italics and the cross reference. For example:

sensitivity see tolerances

Note that under the entry for ‘tolerances’ there is no cross reference back to ‘sensitivity’. Page numbers should not be stated where ‘see’ is used.

  • See also

This should be used to direct the reader to additional related information.

This is a useful distinction, because it forces one to consider whether terms are synonymous or merely related. “Bias” and “collective bias” are a good example: “bias” is somewhat fluid as a term and required some consideration in advance, but it is clearly different from “collective bias”.

Highly specific and specialized terms presented less of a problem. Chorography, for example, features prominently in my index, but it could potentially have had any number of “see also…” references. However, given that it is such a specialized term, I made a pragmatic decision (based partly on what I thought a reader using the index would need and want) to let it simply stand alone, with no cross-references at all.

The most challenging terms were the big, important ones with multiple potential meanings. “GIS” is probably the most obvious example for A History of Place. Most of my arguments touch in some way on how spatial thinking in the humanities has emerged from, and been shaped by, GIS and related technologies, so the challenge was to divide the term up into subsections which are a) useful for a potential reader, and b) reflective of disciplinary practices. My strategy was to treat branches of GIS which have been explicitly recognized and differentiated in the literature – such as Critical GIS, Qualitative GIS, Participatory GIS, Historical GIS and Literary GIS – as separate index terms, linked as “see also” references. These are then tied only to specific occurrences of that term in each case. For discussions of GIS not explicitly relating to those terms, I used “and…” references tied to my chapter themes. This enabled me to divide the myriad references to GIS into sections which accord logically with the book’s structure – “– and archaeology”, “– and spatial analysis”, “– and text”, “– and crowdsourcing” and so on.

“Neogeography” created similar problems, and with this type of term the difficulty is compounded when the field moves so quickly. A recent paper by Linda See and others illustrates just how difficult the term is to pin down. I think all I can draw from this is that such index terms will need some considerable revisiting in the event of there being any future editions(!).

So, the agenda for that initial conversation with your indexer should, I would suggest, include:

  • Strategies for dealing with abstract terms, and deciding which are relevant and which are not
  • Important, wide-ranging terms, and what sub-categories they should have
  • How to identify specific terms which may or may not need “see also” references
  • Which circumstances demand signposting between related terms using the “see” option
  • Terms to flag – for your own reference if nothing else – that may not be easily “future-proofed”.

 

[1] Cole, R., & Hackett, C. (2010). Search vs. research: Full-text repositories, granularity and the concept of “source” in the digital environment. In C. Avery & M. Holmlund (Eds.), Better off forgetting? Essays on archives, public policy and collective memory (pp. 112–123). Toronto: University of Toronto Press.