Call for members: Major new Institute opens at King’s College London with Getty Foundation support

The Project

The 18-month Institute in Digital Art History is led by King’s College London’s Department of Digital Humanities (DDH) and Department of Classics, in collaboration with HumLab at the University of Umeå, with grant support provided by the Getty Foundation as part of its Digital Art History initiative.

It will convene two international meetings where Members of the Institute will survey, analyse and debate the current state of digital art history, and map out its future research agenda. It will also design and develop a Proof of Concept (PoC) to help deliver this agenda. The source code for this PoC will be made available online, and will form the basis for further discussions, development of research questions and project proposals after the end of the programme.

To achieve these aims we will bring together leading experts in the field to offer a multi-vocal and interdisciplinary perspective on three areas of pressing concern to digital art history:

● Provenance, the meta-information about ancient art objects,

● Geographies, the paths those objects take through time and space, and

● Visualization, the methods used to render art objects and collections in visual media.

Current Digital Humanities (DH) research in this area has a strong focus on Linked Open Data (LOD), and so we will begin our exploration there. The programme’s geographical emphasis on the art of the ancient Mediterranean world will continue in the second meeting, to be held in Athens. The Mediterranean has received much attention from both the Digital Classics and DH communities, and is thus rich in resources and content. The programme will therefore bring together two existing scholarly fields and seek to improve and facilitate dialogue between them.

We will assign Members to groups according to the three areas of focus above. These groups will be tasked with producing a detailed research specification setting out the most important next steps for that part of the field, how current methods can best be employed to take them, and what new research questions the participants see emerging.

The meetings will follow a similar format, with initial participant presentations and introductions followed by collaborative programme development and design activities within the research groups, including scoping of relevant aspects of the PoC. This will be followed by further discussion and collaborative writing which will form the basis of the event’s report. Each day will conclude with a plenary feedback session, where participants will share and discuss short reports on their activities. All of the sessions will be filmed for archival and note-taking purposes, and professional facilitators will assist in the process at various points.

The scholarly outputs, along with the research specifications for the PoC, will provide tangible foci for a robust, vibrant and sustainable research network, comprising the Institute participants as a core, but extending across the emerging international and interdisciplinary landscape of digital art history. At the same time, the programme will provide participants with support and space for developing their own personal academic agendas and profiles. In particular, Members will be encouraged, and offered collegial support, to develop publications, both single- and co-authored, following their own research interests and those related to the Institute.


The Project Team

The core team comprises Dr Stuart Dunn (DDH), Professor Graeme Earl (DDH) and Dr Will Wootton (Classics) at King’s College London, and Dr Anna Foka of HumLab, Umeå University.

They are supported by an Advisory Board consisting of international independent experts in the fields of art history, Digital Humanities and LOD. These are: Professor Tula Giannini (Chair; Pratt Institute, New York), Dr Gabriel Bodard (Institute of Classical Studies), Professor Barbara Borg (University of Exeter), Dr Arianna Ciula (King’s Digital Laboratory), Professor Donna Kurtz (University of Oxford), and Dr Michael Squire (King’s College London).


Call for participation
We are now pleased to invite applications to participate as Members in the programme. Applications are invited from art historians and professional curators who (or whose institutions) have a proven and established record in using digital methods, have already committed resources, or have a firm interest in developing their research agendas in art history, archaeology, museum studies, and LOD. You should also be prepared to contribute to the design of the PoC (e.g. providing data or tools, defining requirements), which will be developed in the timeframe of the project by experts at King’s Digital Lab.

Membership is open to advanced doctoral students (provided they can demonstrate close alignment of their thesis with the aims of the programme), Faculty members at any level in all relevant fields, and GLAM curation professionals.

Participation will primarily take the form of attending the Institute’s two meetings:

King’s College London: 3rd–14th September 2018

Swedish Institute at Athens: 1st–12th April 2019

We anticipate offering up to eighteen places on the programme. All travel and accommodation expenses to London and Athens will be covered. Membership is dependent upon commitment to attend both events for the full duration.

Potential applicants are welcome to contact the programme director with any questions: stuart.dunn@kcl.ac.uk.

To apply, please submit a single A4 PDF document set out as follows. Please ensure your application includes your name, email address, institutional affiliation, and street address.


Applicant Statement (ONE page)
This should state what you would bring to the programme, the nature of your current work and involvement in digital art history, and what you believe you could gain as a Member of the Institute. There is no need to indicate which of the three areas you are most interested in (although you may if you wish); we will use your submission to create the groups, considering both complementary expertise and the ability of some members to act as translators between the three areas.

Applicant CV (TWO pages)
Please provide a two-page CV, including your five most relevant publications (including digital resources, if applicable).

Institutional support (ONE page)
We are keen for the ideas generated in the programme to be taken up and developed by the community after the period of funding has finished. Therefore, please use this section to provide answers to the following questions relating to your institution and its capacity:

1. Does your institution provide specialist Research Software Development or other IT support for DH/LOD projects?

2. Is there a specialist DH unit or centre?

3. Do you, or your institution, hold or host any relevant data collections, physical collections, or archives?

4. Does your institution have hardware capacity for developing digital projects (e.g. specialist scanning equipment), or digital infrastructure facilities?

5. How will you transfer knowledge, expertise, contacts and tools gained through your participation to your institution?

6. Will your institution a) be able to contribute to the programme in any way, or b) offer you any practical support in developing any research of your own which arises from the programme? If so, give details.

7. What metrics will you apply to evaluate the impact of the Ancient Itineraries programme a) on your own professional activities and b) on your institution?

Selection and timeline
All proposals will be reviewed by the Advisory Board, and members will be selected on the basis of their recommendations.

Please email the documents specified above as a single PDF document to stuart.dunn@kcl.ac.uk by Friday 1st June 2018, 16:00 (British Summer Time). We will be unable to consider any applications received after this. Please use the subject line “Ancient Itineraries” in your email. 

Applicants will be notified of the outcomes on or before 19th June 2018.

Privacy statement

All data you submit with your application will be stored securely on King’s College London’s electronic systems. It will not be shared, except in strict confidence with Advisory Board members for the purposes of evaluation. Furthermore, your name, contact details and country of residence will be shared, in similar confidence, with the Getty Foundation to ensure compliance with US law and any applicable US sanctions. Further information on KCL’s data protection and compliance policies may be found here: https://www.kcl.ac.uk/terms/privacy.aspx; and information on the Getty Foundation’s privacy policies may be found here: http://www.getty.edu/legal/privacy.html.

Your information will not be used for any other purpose, or shared any further, and will be destroyed when the member selection process is completed.

If you have any queries in relation to how your rights are upheld, please contact us at digitalhumanities@kcl.ac.uk, or KCL’s Information Compliance team at info-compliance@kcl.ac.uk.

Littleworth to Faringdon Corpse Path

More on corpse roads, or rather a corpse road. As noted in my previous post, there is plenty of circumstantial evidence for the psychological power of corpse roads, or corpse paths. They connected communities, and they provide tantalizing insight into the importance humans attach to movement through the landscape at critical points of their lives. But hard facts are hard to come by. In 1928, the journal Folklore published a Correspondence from “Wm. Self Weeks” entitled Public Right of Way Believed to be Created by the Passage of a Corpse [Shibboleth login needed]. Having debunked the notion itself (with reference to an earlier Q&A in the journal Justice of the Peace, a “legal journal mainly devoted to matters affecting Magistrates and Local Authorities generally”), Weeks goes on to propound the theory that the idea of the corpse path comes from an agricultural practice of deliberately leaving a strip unploughed along a field’s edge to allow the carriage of bodies: “bier balks … wider strips of turf left between the ploughed strips of land in certain places expressly for funeral ways” (p. 935). In support of this view Weeks quotes correspondence in the Times Literary Supplement, responding to a letter on the subject he had published there ten years previously. This came from L. R. Phelps of Oriel College, Oxford. Phelps writes:

In many parishes the church path is a familiar feature. Where I knew it best, at Littleworth, in Berkshire [now Oxfordshire], it connected an outlying hamlet with its parish church at Farringdon [sic], some two miles off. The characteristic of a ‘church path’ is that it is never ploughed over, but stands out from the field, hard and dry, and of a width sufficient to allow the bearers of a coffin to walk abreast along it.

Faringdon is not too far away from me, so recently, along with the lady in my life (whose capacity for accommodating her husband’s more oddball obsessions is quite remarkable; she even provided the salmon sandwiches), I drove up to take a look at this path, which Phelps apparently knew from personal recollection. A public right of way still exists between Littleworth and Faringdon, as can be seen from the data provided by Oxfordshire County Council, and made available as KML by Barry Cornelius on his thoroughly excellent Rights of Way maps website:

Credit: Google Earth; http://www.rowmaps.com

Faringdon, and its C11th church, is at the western end of the path. A 2014 archaeological survey of the environs, in preparation for new facilities being built at the church, found evidence for 341 burials of a range of ages. Of course one cannot easily tell from such evidence the places of death, and thus where the bodies were borne from, but in their conclusion the investigators note:

Faringdon Church

“The community of believers excavated at All Saints comprised a broader church. Although no clearly high-status individuals were recovered, the investigation revealed a broad demographic section through the population of men and women, children and adults. The excavation showed the degree of care attached to the ill and dying, as well as concern for the well-being of the dead. The prosaic realities of country life and death from the late medieval to 19th century were revealed by the work carried out [at] All Saints, Faringdon.”

As for the path from Littleworth: two clues immediately support Phelps’s recollection. First, at Littleworth itself, a footpath sign pointing west at the edge of the village indicates “Church Walk” to Faringdon (below left). Secondly, just outside Faringdon itself is “Church Path Farm” – also labelled thus on Google Earth – the site of which includes a curious chapel-like outbuilding with Gothic-style arching (below right).


The walk itself – flat, easy going, suitable for the transportation of a load – crosses a total of four fields. The easternmost is large and flat, recently harvested of (I think) kale. The pathway adopts a slightly different tangent to the plough lines (see pic below) – which might be of some significance, as these at least should preserve the orientation of the field. The slight difference between the path and the plough lines suggests that this field might once have been subdivided into strips of a more south-westerly orientation than it has today, and that the path thus pre-dates it. The most obvious reason for such a reorientation would be to accommodate the construction of London Street to the south, which connects Faringdon to the A420.

The westernmost section of the route, from Grove Lodge to Church Path Farm, is characterized by deep, old-looking hedgerows – see picture below, left. (From Church Path Farm, the route down into Faringdon itself is a modern metalled road, obviously optimized for vehicles, with the hedgerows accordingly removed.) It is these that make me think that Weeks and Phelps might have been on to something. These hedgerows do not simply follow the line of the field boundary, as the path does to the east (below); rather, they delineate a gap between the fields themselves. This would be consistent with a route deliberately left to enable the passage of a bier party – a “bier balk”.

Path heading north from Grove Lodge

The Oxfordshire/Cornelius map above indicates another Right of Way heading north from Church Walk at Grove Lodge. This path is still clearly visible in the landscape, skirting east to avoid Grove Wood (see image above). Could this have served as another corpse path, perhaps linking the settlements of Thrupp and/or Radcot with the church at Faringdon? Maybe, but the point is that the “bier balk” hedgerows appear on Church Walk only at this point, and head west, past Church Path Farm and down to the town. Could it be, therefore, that only the section of the path *near* the settlement, which leads down to the churchyard itself, had the characteristics of a corpse path, and that the Shakespearean notion (see previous blog post) of dedicated pathways stretching across the pre-Enclosure countryside is a literary device?

I think this is probably so. There are exceptions – for example the Mardale to Shap route in Cumbria has a number of attestations, and is over six miles long – but I wonder if this is something to do with the extreme remoteness of the area and its moorland topography: the “vital” need to convey bodies for burial, and the challenges of doing so, stand out more in such conditions. But elsewhere, corpse paths were short sections of routes, proximal to the church, and would have been the aggregated paths of more than one route for bier parties – which would be consistent with local people attaching significance accordingly.

 

Research questions, abstract problems – a round table on Citizen Science

I recently participated in a round-table discussion entitled “Impossible Partnerships”, organized by The Cultural Capital Exchange at the Royal Institution, on the theme of Citizen Science; the Impossible Partnerships of the title being those between the academy and the wider public. It is always interesting to attend citizen science events – I get so caught up in the humanities crowdsourcing world (such as it is) that it’s good to revisit the intellectual field that it came from in the first place. This is one of those blog posts whose main aim is to organize my own notes and straighten my own thinking after the event, so don’t read on if you are expecting deep or profound insights.


Crucible of knowledge: the Royal Institution’s famous lecture theatre

Galaxy Zoo of course featured heavily. This remains one of the poster-child citizen science projects, because it gets the basics right. It looks good, it works, it reaches out to build relationships with new communities (including the humanities), and it is particularly good at taking what works and configuring it to function in those new communities. We figured that one of the common factors that keeps it working across different areas is its success in tapping into the intrinsic motivations of people who are interested in the content – citizen scientists are interested in science. There is also an element of altruism involved, giving one’s time and effort for the greater good – but one point I think we agreed on is that it is far, far easier to classify the kinds of task involved than the people undertaking them. This was our rationale in our 2012 scoping study of humanities crowdsourcing.

A key distinction was made between projects which aggregate or process data, and those which generate new data. Galaxy Zoo is mainly about taking empirical content and aggregating it, in contrast, say, to a project that seeks to gather public observations of butterfly or bird populations. This could be a really interesting distinction for humanities crowdsourcing too, but one which becomes problematic where one type of question leads to the other. What if content is processed/digitized through transcription (for example), and this seeds ideas which lead to amateur scholars generating blog posts, articles, discussions, ideas, books and so on? Does this sort of thing happen in citizen science? (Genuine question – maybe it does.) So this is one of those key distinctions between citizen science and citizen humanities. The raw material of the former is often natural phenomena – bird populations, raw imagery of galaxies, protein sequences – but in the latter it can be digital material that “citizen humanists” have created from whatever source.

Another key question which came up several times during the afternoon was the nature of science itself, and how citizen science relates to it. A professional scientist will begin an experiment with several possible hypotheses, then test them against the data. Citizen scientists do not necessarily organize their thinking in this way. This raises the question: can the frameworks and research questions of a project be co-produced with public audiences? Or do they have to be determined by a central team of professionals, and farmed out to wider audiences? This is certainly the implication of Jeff Howe’s original framing of crowdsourcing:

“All these companies grew up in the Internet age and were designed to take advantage of the networked world. … [I]t doesn’t matter where the laborers are – they might be down the block, they might be in Indonesia – as long as they are connected to the network.

Technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. … The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing.”

So is it the case that citizen science is about abstract research problems – “are golden finches as common in area X now as they were five years ago?” rather than concrete research questions – “why has the population of golden finches declined over the last five years?”

For me, the main takeaway was our recognition that citizen science and “conventional” science are not, and should not try to be, the same thing, and should not have the same goals. The important thing in citizen science is not to focus on the “conventional” scientific outcomes of good, methodologically sound and peer-reviewable research – that is, at most, an incidental benefit – but on the relationships between professional academic scientists and non-scientists it creates, and how these can help build a more scientifically literate population. The same should go for the citizen humanities. We can all count bird populations, we can all classify galaxies, we can all transcribe handwritten text, but the most profitable goal for citizen science/humanities is a more collaborative social understanding of why doing so matters.

Tales of many places: Data Infrastructure for Named Entities

The use of computational methods for ancient world geography is still very much dominated by the URI-based gazetteer. These powerful and flexible reference lists, trail-blazed by projects such as Pleiades and Pelagios, allow resources to be linked by the common spatial referents they share. However, while computers love URIs unconditionally, the relationship they have with place is more ambivalent: a simmering critical tension which has given rise to what we call the Spatial Humanities. This critical tension between the ways humanists see place and the way computers deal with it has highlighted important geo-philosophical principles for the study of the ancient world. For me, one of the most important of these is the principle that places as entities which exist in some form of human discourse, such as text, and places as locations which can be situated within the (modern) framework of latitude and longitude, must be separated. Gazetteers allow us to do this, which is why they are so important.
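To make that separation concrete, here is a minimal sketch of how a URI-based gazetteer can keep the two apart (my own illustration, with invented URIs and field names, not the schema of any of the projects mentioned here):

```python
from dataclasses import dataclass, field

@dataclass
class Location:
    """A spatial fix in the modern latitude/longitude framework."""
    latitude: float
    longitude: float
    certainty: str = "approximate"   # e.g. "precise", "approximate", "disputed"

@dataclass
class Place:
    """A place as an entity of discourse, identified by a URI, not by coordinates."""
    uri: str                         # the stable identifier other resources link to
    preferred_name: str
    locations: list = field(default_factory=list)   # zero or more spatial fixes

# A place attested only in text is still a first-class gazetteer entry:
atlantis = Place("https://example.org/gazetteer/place/0042", "Atlantis")

# Another place may carry one (or several competing) modern locations:
knossos = Place("https://example.org/gazetteer/place/0043", "Knossos",
                [Location(35.298, 25.163)])
```

Because other resources link to the URI rather than to a coordinate pair, a place attested only in discourse remains a first-class entry, and competing or uncertain locations can be attached to it without redefining it.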

My 2017 kicked off with a meeting in a snowy Leipzig, Digital Infrastructure for Named Entities Data, which sought to further problematize the use of these computational methods to support the investigation of past place. As might be expected of an event driven by Pelagios, the use of URI-based gazetteers featured heavily. The Pelagios Commons was presented by the event’s organizer, Chiara Palladino, as both a community and an infrastructure. It centres on the general concept of “place”, and clusters of material which share the same properties. Pelagios may be seen, Chiara said, as the “connecting structure behind the system”, aiming at a decentralized and federated approach to providing maps which combine geographical, chronological and biographical data. The event’s exploration of this key, overarching concept highlighted three main issues:

1. Hodological views of past space

Ancient geographies should be seen in the context of hodological space – as pathways through the world, not points on top of it. Hodology, a concept discussed by several speakers, views space from the perspective of experience and mobility. Hodological space concerns the tension between intent, possibility, and real (embodied) experience. It is frequently bidimensional, as evidenced in the example given by Sergio Brilliante of western Crete in the Periplus (mariner’s account) of Pseudo-Skylax, which displayed the best route for travel, not the cartographically optimal one. I was struck by the modern parallel of the WWII Cretan “runner”, George Psychoundakis, who, in his riveting account of his role in the resistance in Crete, measured the distances over which his wartime missions took him on foot by the number of cigarettes he smoked on the journey.

It was noted that in Arabic geographical sources, areas are generally not measured, except for the purposes of agriculture. A hodological approach was described as a counterpoint to “scientific method” in geography: one can frame geographic accuracy either in terms of “accurate” Cartesian maps, or as the consistent application of geographical criteria.

2. Name neutrality

Like any form of humanistic space, hodological space is never neutral. Place references in humanistic discourse are often the result of multivocal, multi-authorial and partial accounts, and the workshop placed a heavy emphasis on this. Many surviving Classical texts were written by Greek or Athenian authors, so there is a strong Athenocentrism and Graecocentrism to them. Non-Greeks tend to be “hidden”. This seemed to me somewhat reminiscent of the Mercator projection (which most modern Web cartography relies upon), which inflates countries at high latitudes and correspondingly shrinks those at low latitudes, thus visually privileging the developed world at the expense of developing countries (who could forget the scene in The West Wing when the Cartographers for Social Equality regale C. J. Cregg on the subject?). Similarly, toponyms are not neutral, a problem which the separation of place-as-concept from place-as-location can help address. Our own Heritage Gazetteer of Cyprus is attempting to do this through the application of “attestations” to name-agnostic geographic entities, an approach also being used by Sinai Rusinek in her Hebrew gazetteer. Similarly, Thomas Carlson described the Syriaca.org gazetteer, which links cultural heritage to texts in the Syriac language. Carlson noted that names are a linguistic strategy, not absolute entities. The nature of names means that disambiguation does not work consistently: even an expert reader might not be able to determine what exactly a toponym refers to. While many ancient world gazetteers rely on URIs, a URI can never replace an ambiguous linguistic name. Context-free URIs, which the gazetteer community has long relied on, are no longer sufficient to represent non-neutral humanistic place.
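A rough sketch of how such attestations might be modelled (again my own illustration; the field names and example values are invented, and this is not the actual Heritage Gazetteer of Cyprus or Syriaca.org schema): each name becomes a dated, sourced claim attached to an otherwise name-agnostic entity.

```python
from dataclasses import dataclass, field

@dataclass
class Attestation:
    """One source's use of one name for an entity: a claim, not a fact."""
    name: str            # the toponym as written in the source
    language: str        # e.g. "grc" for ancient Greek, "syc" for Syriac
    source: str          # the attesting text or document
    date_range: tuple    # rough period of use, e.g. (-500, 300)

@dataclass
class GeographicEntity:
    """A name-agnostic entity; all naming lives in its attestations."""
    uri: str
    attestations: list = field(default_factory=list)

# Invented example: one entity, two names, two voices, two periods.
entity = GeographicEntity("https://example.org/entity/0099")
entity.attestations.append(Attestation("Salamis", "grc",
                                       "an ancient Greek source", (-500, 300)))
entity.attestations.append(Attestation("Constantia", "la",
                                       "a late Roman source", (350, 650)))
```

Disambiguation then becomes a matter of weighing attestations in context rather than resolving one authoritative name – which sits well with Carlson’s observation that names are a linguistic strategy.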

3. Ontological (mis)alignment

Finally, a point well made by Maurizio Lana was that geographical ontologies must be built bottom-up to be truly representative. In his presentation he described the Geolat project, which deals with the use of spatial ontologies, and again frames names as cultural patterns. There is a driving force which pulls readers towards names, towards what is easily identifiable. It is necessary to separate the study of entities from naming. This means that an ontology developed for one purpose might not be suitable for others. For example, in the Heritage Gazetteer of Cyprus we make use of Geonames as a means of locating archaeological entities, but the Feature Type list of Geonames is not nearly detailed or granular enough to adequately describe the different kinds of features which exist in the gazetteer. Examining where geo-ontologies have come from, and why they do not align, can therefore lead to very interesting conclusions about the nature of historical spatial structures.
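One pragmatic pattern, sketched below, is to keep a granular local typology and record an explicitly lossy mapping down to Geonames feature codes. This is my own illustration, not the HGC implementation; the local types are invented, and the Geonames codes are simply the nearest matches I am aware of.

```python
# Granular local feature types mapped down to coarser Geonames codes.
LOCAL_TO_GEONAMES = {
    "rock-cut tomb":       "S.ANS",    # ancient/archaeological site
    "chamber tomb":        "S.ANS",
    "basilica foundation": "S.RUIN",   # ruin(s)
    "rural sanctuary":     "S.TMPL",   # temple(s)
}

def geonames_code(local_type: str) -> str:
    """Return the nearest Geonames feature code for a local feature type."""
    try:
        return LOCAL_TO_GEONAMES[local_type]
    except KeyError:
        raise KeyError(f"no Geonames mapping for local type {local_type!r}")

# The mapping is lossy by design: two distinct local tomb types collapse
# into one Geonames code, so the distinction must be preserved locally.
assert geonames_code("rock-cut tomb") == geonames_code("chamber tomb")
```

The points at which such a mapping collapses distinctions are themselves evidence of where the two ontologies – and the purposes behind them – diverge.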

As so often, there was a great background discussion via Twitter with colleagues who were not physically present, which I have captured as a raw Storify. Among the most engaging of these discussions was an exchange as to whether a place has to have a name, or rather whether place acts as a conceptual container for events (in which case, what are they?). My previous belief in the former position found itself severely tested by this exchange, and the papers which touched on hodological views of the past provided reinforcements. I think I am now a follower of the latter view. Thank you to those Twitter friends for this; you know who you are.

Talking to ourselves: Crowdsourcing, Boaty McBoatface and Brexit

Back in April, I gave a talk called “Of what are they a source? The Crowd as Authors, Observers and Meaning-Makers” at a symposium in Maryland entitled Finding New Knowledge: Archival Records in the Age of Big Data. In this talk I made the point that 2016 marked ten years since Jeff Howe coined the term “crowdsourcing”, as a pastiche of “outsourcing”, in his now-famous Wired piece. I also talked about the saga of “Boaty McBoatface”, then making headlines in the UK. If you recall, Boaty McBoatface was the winner, with over 12,000 votes, of the Natural Environment Research Council’s open-ended appeal to “the crowd” to suggest names for its new £200m polar research ship, and to vote on the suggestions. I asked if the episode had anything to tell us about where crowdsourcing had gone in its first ten years. Well, we had a good titter at poor old NERC’s expense (although in fairness I did point out that, in a way, it was wildly successful as a crowdsourcing exercise – surely global awareness of NERC’s essential work in climatology and polar research has never been higher). In my talk I suggested the Boaty McBoatface episode was emblematic of crowdsourcing in the hyper-networked age of social media. The crowdsourcing of 2006 was based, yes, on networks, enabled by the emerging ubiquity of the World Wide Web, but it was a model where “producers” – companies with T-shirts to design (Howe’s example), astrophysicists with galaxy images to classify (the Zooniverse poster child of citizen science), or users of Amazon Mechanical Turk – put content online and entreated “the crowd” to do something with it. This is interactivity at a fairly basic level. But the 2016 level of web interactivity is a completely different ball game, and it is skewing attitudes to expertise and professionalism in unexpected and unsettling ways.

The relationship between citizen science (or academic crowdsourcing) and “The Wisdom of Crowds” has always been a nebulous one. The earlier iterations of Transcribe Bentham, for example, or Old Weather, are not so much exercises in crowd wisdom as in “crowd intelligence” – the execution of intelligent tasks that a computer could not undertake. These activities (and the numerous others I examined with Mark Hedges in our AHRC Crowd-Sourcing Scoping Survey four years ago) all involve intelligent decision making, even if it is simply an intelligent decision as to how a particular word in Bentham’s papers should be transcribed. The decisions are defined and, to differing degrees, constrained by the input and oversight of expert project members, which give context and structure to those intelligent decisions: a recent set of interviews we have conducted with crowdsourcing projects has stressed the centrality of a co-productive relationship between professional project staff and non-professional project participants (“volunpeers”, to use the rather wonderful terminology of the Smithsonian Transcription Center’s initiative).

However, events since April have put the relationship between “the crowd” and “the expert” on to the front pages on a fairly regular basis. Four months ago, the United Kingdom voted by the small but decisive margin of 51.9% to 48.1% to exit the European Union. The “Wisdom of [the] Crowd” in making this decision informed much of the debate in the run-up to the vote, with the merits of “crowd wisdom” versus “expert wisdom” being a key theme. Michael Gove, a politician who turned out to be too treacherous even for a Conservative party leadership election, famously declared that “Britain has had enough of experts”. It is a theme that has persisted since the vote, placing the qualification obtained from the act of representing “ordinary people” through election directly over, say, the economic expertise of the Governor of the Bank of England.

Is this fault line between the expert and the crowd real, a social division negotiated by successful academic crowdsourcing projects, or is it merely a conceit of divisive political rhetoric? Essentially, this is a question of who “produces” wisdom, who “consumes” it, and in which direction the cognitive processes which lead to decision making flow (and which way they should flow). This highlights the nebulous and inexact definition of “the crowd”. It worked pretty well ten years ago when Howe wrote his article, and translated easily enough into the “crowd intelligence” paradigm of the late 2000s and early academic crowdsourcing. In those earlier days of Web 2.0, it was still possible to make at least a scalar distinction between producers and consumers, between the crowd and the crowdsourcer (or the outsourcer and the organization outsourced to, to keep with Howe’s metaphor), even though the role of the user as a creator and a consumer of content was changing (2006 was, after all, also the year in which Facebook and Twitter launched). But how about today? This is a question raised by a recent data analysis of Brexit by the Economist. In this survey of voters’ opinions, it emerges that over 80% of Leave voters stated that they had “more faith in the wisdom of ordinary people than the opinions of experts”. I find the wording of this question fascinating, if not a little loaded – after all, is it not reasonable to place one’s faith in any kind of “wisdom” rather than in an “opinion”? But the implicit connection between a generally held belief and (crowd) wisdom is antithetical to independent decision making, and independence is crucial to any argument that “crowd wisdom” leads to better decisions – such as leaving the EU. In his 2004 book, The Wisdom of Crowds: Why the Many Are Smarter Than the Few, James Surowiecki talks of “information cascades” being a threat to good crowd decisions. In information cascades, people rely on the ungrounded opinions of others that have gone before: the more opinions, the more ongoing, self-replicating reinforcement. Surowiecki says:

Independence is important to intelligent decision making for two reasons. First, it keep (sic) the mistakes that people make from becoming correlated … [o]ne of the quickest ways to make people’s judgements systematically biased is to make them dependent on each other for information. Second, independent individuals are more likely to have new information rather than the same old data everyone is already familiar with.
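Surowiecki’s point about correlated error is easy to see in a toy simulation – entirely my own sketch, with arbitrary invented parameters, not anything from Surowiecki or the Economist. Independent guessers each see only a noisy private signal of some true value; cascading guessers blend their signal with the running public average.

```python
import random

random.seed(1)
TRUTH, NOISE, N = 100.0, 20.0, 1000  # arbitrary illustrative values

def independent_guesses():
    """Each guesser sees only a noisy private signal of the truth."""
    return [random.gauss(TRUTH, NOISE) for _ in range(N)]

def cascading_guesses(conformity=0.9):
    """Each guesser blends a private signal with the opinions already voiced."""
    guesses = []
    for _ in range(N):
        signal = random.gauss(TRUTH, NOISE)
        if guesses:
            public = sum(guesses) / len(guesses)
            signal = conformity * public + (1 - conformity) * signal
        guesses.append(signal)
    return guesses

ind, cas = independent_guesses(), cascading_guesses()
print(f"independent crowd mean: {sum(ind) / N:.1f}")  # reliably close to 100
print(f"cascading crowd mean:   {sum(cas) / N:.1f}")  # typically dragged off by early voices
```

The independent crowd’s errors cancel out; the cascading crowd’s errors correlate, because every guess partly repeats the voices that went before – the information cascade in miniature.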

According to the Economist’s data, the Brexit vote certainly has some of the characteristics of an information cascade as described by Surowiecki: many of those polled who voted that way did so at least in part because of their faith in the “wisdom of ordinary people”. This is the same self-replicating logic of the NERC boat-naming competition which led to Boaty McBoatface, and a product of the kind of closed-loop thinking which social media represents. Five years ago, the New Scientist reported a very similar phenomenon with different kinds of hashtags – depending on the kind of community involved, some (#TeaParty in their example) develop great traction among distinct groups of mutual followers, with individuals tweeting to one another, whereas others (#OccupyWallStreet in this case) attract much greater engagement from those not already engaged. It’s a pattern that comes up again and again, and surely Brexit is a harbinger of new ways in which democracy works.

It certainly embodies and represents the information cascade – precisely what Surowiecki would have us believe is not the Wisdom of Crowds as a means of making “good” decisions. There may be those who say that to argue this is to argue against democracy, that there are no “good” or “bad” decisions, only “democratic” ones. That is completely true of course; and not for a moment here do I question the democratic validity of the Brexit decision itself. I also happen to believe that millions of Leave voters are decent, intelligent, honourable people who genuinely voted for what, in their considered opinion, was the best for the country. But since the Goves of the world made a point and a virtue of placing the Leave case in opposition to the “opinions of experts”, it becomes legitimate to ask questions about the cognitive processes which result from so doing. And the contrast between this divisive rhetoric and the constructive, collaborative relationships between experts and non-experts evident in academic crowdsourcing could not be greater.

But that in turn makes one ask how useful the label “expert” really is. What, in the rhetoric of Gove, Davies etc, actually consigns any individual person to this reviled category? Is it just anyone who works in a university or other professional organization? Who is and who is not an expert is a matter of circumstance and perspective, and it shifts and changes all the time. Those academic crowdsourcing projects understand that, which is why they were so successful. If only politics could take the lesson.


Quantitative, Qualitative, Digital: Research Methods and DH

This summer, there was an extensive discussion on the Humanist mailing list about the form and nature of research methods in digital humanities. This matters, as it speaks in a fundamental way to a question whose very asking defines Digital Humanities as a discipline: when does the development and use of a tool become a method, or a methodology? The thoughts and responses this thread provoked are testament to the importance of this question. While this post does not aim to offer a complete digest of the thread, I wanted to highlight a couple of key points that emerged from it. A key theme arose in one exchange, which concerned the point in any research activity employing digital tools at which human interpretation enters. Should this be the creation of tools, the design of those tools, the adding of metadata, the design of metadata, and so on? If one is creating a set of metadata records relating to a painting with reference to “Charles I” (so ran an example given by Dominic Oldman), the computer would not “understand” the meaning of any information provided by the user, and any subsequent online aggregation would be similarly knowledge-agnostic.

In other words, where should human knowledge in the Digital Humanities lie? In the tool, or in the data, or both?
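Oldman’s “Charles I” example can be made concrete with a small sketch (my own illustration of the point, not his example; the URI is invented):

```python
# Flat record: "Charles I" is just an uninterpreted string. An aggregator
# cannot tell this Charles I from any other person of that name.
flat_record = {
    "title": "Portrait of Charles I",
    "subject": "Charles I",
}

# Linked-data-style record: some of the cataloguer's knowledge is moved
# into the data, as a resolvable identifier. (URI invented for illustration.)
linked_record = {
    "title": "Portrait of Charles I",
    "subject": {
        "label": "Charles I",
        "uri": "https://example.org/person/charles-i-of-england",
    },
}

# String matching fails as soon as the label varies...
print("Charles I" == "Charles I of England")                # False
# ...whereas entity matching survives any spelling of the label.
print(linked_record["subject"]["uri"] ==
      "https://example.org/person/charles-i-of-england")    # True
```

Neither record makes the computer “understand” Charles I, of course; the linked record simply fixes in the data a human interpretive act that the flat record leaves implicit.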

Whatever the answer, the key aspect is the point at which a convention in the use of a particular tool becomes a method. In a posting to the thread on 25th July, Willard McCarty stated:

The divergence is over the tendency of ‘method’ to become something fixed. (Consider, for example, “I have a method for doing that.” Contrast “What if I try doing this?”).

“Fixedness” is essential, and it implies some form of critically-grounded consensus among those using the method in question. This is perhaps easier to see in the social sciences than it is in the [digital] humanities. For example, how would a classicist, or an historian, or a literature scholar approaching manuscripts through the method of close reading present and describe that method in the appropriate section of a paper? How would this differ from, say, the equivalent section in a paper by a social scientist using grounded theory to approach a set of interviews? There may be no differentiation in the rigour or quality of the research, but one suspects the latter would have a far greater consensus – and body of methodological literature – to draw upon in describing grounded theory than the former would in describing close reading.

Many discussions on this subject remain content-focused. What content means has itself assumed a broader aspect: whereas “content” in the DH may once have meant digitized texts, images and manuscripts, surely now it also includes web content such as tweets, transient social media, and blog posts such as this one. It is essential to continue to address the DH research life-cycle as based on content, but we still need to tackle methodology explicitly (emphasis deliberate), in both its definition and its epistemology, and as defined by the presence of fixity, as noted by McCarty. “Methodological pluralism”, the key theme of the thread on Humanist this summer, is great, but for there to be pluralism, there must first be singularity. As noted, the social sciences have this in a very grounded way. I have always argued that the very terms “quantitative” and “qualitative” are understood, shared, written about and, ultimately, used in a much more systematic way in the social sciences than in the (digital) humanities, where they are often taken to express a simple distinction between “something that can be computed versus something that cannot”.

I am not saying this is not a useful distinction, but surely the Humanist thread shows that the DH should at least deepen the distinction to mean “something which can be understood by a computer versus something that cannot”.

I would like to pose three further questions on the topic:

1) how are “technological approaches” defined in DH – e.g. the use of a tool, the use of a suite of tools, the composite use of a generic set of digital applications?

2) what does a “technological approach” employing one or more tools enable us to do?

3) how is what we do with technology a) replicable and b) documentable?