Description

Keynote speakers: Antoinette Rouvroy (University of Namur), David Murakami Wood (Queen's University), Louise Amoore (Durham University), Luciana Parisi (Goldsmiths), Ramon Amaro (Goldsmiths), Rebecca Schneider (Brown University)
The promise of big data archives to reveal an objective truth spoken by data carries a hope of rectifying the errors inherent in previously trusted statistical methods, such as sampling. Yet what the big data craze omits is the realization that archives are essentially structured by error, as sites of both risk and promise. This assertion fundamentally challenges the dominant understanding of the archival purpose, and of its raison d'être as linked to reason and order. On the one hand, the archive is trusted because of its accumulation of historical truth against fictional accounts, ignorance, and oppressive manipulations. It is invoked as belonging to the realm of evidence, progress, law and order. On the other hand, the archive's reliance on reason and order can be criticised for failing to recognize its own essential limits, instead reproducing the very myths it wishes to eradicate. This leads to questions that have been central since the early modern period: can reason, in the form of the archive, truly eradicate error and inform a more lawful, harmonious and orderly world? Or is archival reason itself a kind of dangerous error, responsible for many of the suppressive and catastrophic effects of a technologically and socially advanced civilization?
By looking at how big data archives and their key actors conceive of error and its relationship to knowledge, it becomes obvious how complex and ambivalent the idea of archival reason is today. Far from hermetically sealing out errors, big data archives remain, like their analogue counterparts, riddled with uncertainties. In fact, most archival epistemologies are formed out of fear of these uncertainties: fear of the several kinds of errors that can corrupt, undermine and impede archival knowledge. And most archival agents are aware that truth lives in straitened circumstances in archives, where errors, faults and mistakes abound. While these uncertainties represent a risk for those relying on the accuracy of the archive, they also make it possible to engage critically and productively with big data archives as political sites of information distribution rather than objective statements of truth. Indeed, archival errors carry a subversive potential that can be turned against the archival order: dragging the archival apparatus onto the stage and showing that alternatives are embedded in the dominant, with the result, ultimately, that power is never total, consistent or omnipotent. Such a subversive approach thus recasts the potential for errors in the archive as a world of possibilities.
With this workshop we wish to understand error and its place in the epistemological system of big data archives. We want to ask: what constitutes an error? Which errors are tolerated in big data archives? And which are most feared? What would a classification of big data archival errors look like? What models of archival error are at work in big data archives? What are their temporalities? What fabulations do archival errors give rise to (including quasi-living forms such as glitches and viruses)? And to what ends are they employed as they are inscribed in various strategic programmes such as neoliberalism (which dictates the success of the few and the failure of the many) and its subversive counter-replies (which employ failure to produce critique)?
|Period||14 Nov 2016|
|Held at||University of Copenhagen, Denmark|
|Degree of recognition||International|