Dialogues abound in ebr’s July 2018 publications.
Jan Baetens’ essay “Photo Narratives and Digital Archives; or: The Film Photo Novel Lost and Found” explores the “rogue” archival practices online that can recover and aid research of “lost” media such as the film photo novel. In his riPOSTe to Baetens, David Roh suggests that rogue environments of archival research, which foster community and cultural dialogue, expand notions of the archive and might better be called repositories.
Thinking about data as something collective, cultural, and community based, Brian Schram and Jennifer R. Whitson offer “Why a Humanist Ethics of Datafication Can’t Survive a Posthuman World,” a riPOSTe to an essay by digital humanities heavyweights Geoffrey Rockwell and Bettina Berendt that ebr published in December 2017, entitled “Information Wants to Be Free, or Does It?: The Ethics of Datafication.”
María Mencía, Søren Bro Pold, and Manuel Portela offer the essay “Electronic Literature Translation: Translation as Process, Experience and Mediation,” a broader approach to how we can understand translation when tasked with the work of translating electronic literature, proposing a four-dimensional approach to translation that includes the translinguistic, transcoding, transmedial, and transcreational. Finally, Jan Baetens appears again in a riPOSTe to Mencía, Pold, and Portela, calling the essay “as ground-breaking as the writings on translation by Roman Jakobson in the 1960s.”
ebr is delighted that these kinds of sustained responses are becoming a feature of our monthly publications. Look for more in the future!
Jan Baetens’ essay “Photo Narratives and Digital Archives; or: The Film Photo Novel Lost and Found” describes the late-1950s “film photo novel” genre as a “lost” medium, “lost” here having a tripartite meaning: a “lost” medium refers to difficulty of availability, difficulty of access in original intermedial contexts, and difficulty of recovering the original cultural and production practices associated with the medium. While the conditions of being “lost” paint a picture of difficulty for a researcher, Baetens notes that digital archives have made the practice of researching “lost” media much easier. He differentiates the recovery work here as being—not archaeological, as many may presume, but rather—“rogue,” borrowing from Abigail De Kosnik’s description of rogue archives as the fruit of “a multitude of self-designated archivists-fans, pirates, hackers” that “have democratized cultural memory, building freely accessible online archives of whatever content they consider suitable for digital preservation.”
The environment of a rogue archive, Baetens argues, is “dramatically performative and productive,” allowing for a renewed examination of the old, including old methods. Applied to a film photo novel, its rogue archive results in works being transformed in online curation as well as reception, fostering a comparative context that is beyond adaptation and that can form new communities and practices. In sum, Baetens finds that attempted recovery of the “lost” film photo novel results in an attention towards blurred adaptation and metatext: in a digital archive, curations and reactions become their own kinds of mediations that add to and build a metatext.
In David Roh’s riPOSTe to Baetens’ “Photo Narratives and Digital Archives,” Roh homes in on the mediative, community, and practice features of online archives, noting that a rogue archive, by De Kosnik and Baetens’s description, is in fact not an archive in the traditional sense, but rather, “a repository designed to spur dissemination of material to spark cultural dialogue.” With the consequent qualities of “immediacy, engagement, and refractive distortion,” Roh asks if there is a better term than “archive” to describe the kind of research and thinking environment that can result from rogue archiving.
To re-consider what we mean by “archive” when it participates in networks of agents, he asks the pertinent question of what drives them: artifact or infrastructure? This question opens onto a more complex relationship between objects and the Internet, an Internet of things constructed through invisible processes that we may call infrastructure. Infrastructures of social media, computing, technology, and the global economy mandate the software systems, material hardware, and information storage, management, and access that make an archive run. In this way, infrastructures run the artifact, and the “archive” as we know it must be questioned. Which archive? Whose archive? Foucault’s? Kittler’s? Derrida’s? And what of the data: from whom and what communities were they generated, and to whom and where do they belong? When a discursive community online gets involved, these questions inquire not only into infrastructure, but also into ethics. Whose data is this?
To expand upon this conversation and think about data and infrastructure in another sense, Brian Schram and Jennifer R. Whitson offer a riPOSTe to Geoffrey Rockwell and Bettina Berendt’s “Information Wants to Be Free, or Does It?: The Ethics of Datafication,” in which they ask just how much infrastructure matters when it comes to datafication.
An ethics of datafication seems especially pressing in the wake of privacy scandals and re-assessments (think Facebook’s privacy scandal in Spring 2018), which draw attention to the digital structures that govern politics and economics. Where Rockwell and Berendt observe that too much data isn’t a good thing (I recall Lev Manovich’s (2001) description of 1990s practices of “storage mania”), Schram and Whitson add that “datafication does not benefit all equally, but favors those who already hold power.” For instance, the issue of datafication is sometimes the issue of selective archiving: insofar as choices are made about what to keep and what to omit, and as much as digitization aims to preserve and redirect attention to forgotten cultures and histories, archiving is always already limited. For this reason, Schram and Whitson offer an interesting argument for an ethics of datafication oriented around care, including the possibility of digital humanists offering select access to certain digital archives and possibly avoiding datafication entirely.
One of their key arguments is that data sets conjure a narrative that is not inherent but rather determined through databases and content management that convey information in specific ways. Data has the potential to tell stories, which then influence researchers’ findings rather than the other way around. Data, Schram and Whitson thus hold, is a variable in how much agency a research subject actually has. In fact, databases cannot make information on their own, which Hayles notes in How We Think (2012) when she advocates for humanistic interpretation and which I expand in my article “On the Value of Narratives in a Reflexive Digital Humanities” (2018). It therefore becomes more necessary to understand systematic datafication for what it can and cannot do—and in parallel, what we as critical humanists can and cannot do with data.
Schram and Whitson make two points to this effect. First, they direct attention to corporate infrastructures that continue to dominate digitization and information extraction, warning of an increasing imbalance in the control of information management between these infrastructures and the general public. Second, Schram and Whitson explore how the notion of privacy as a human right slips against the decentralization of the human subject in many areas of recent critical theory.
“Electronic Literature Translation: Translation as Process, Experience and Mediation” is an exciting essay by María Mencía, Søren Bro Pold, and Manuel Portela that expands how we might think of literary translation and software culture simply by considering how electronic literature is translated. Beyond negotiating specific languages, cultures, and even media in translating literature, when the networked procedurality of electronic literature is taken into consideration, additional layers of textual production, engagement, and creativity have to be accounted for, and Mencía, Pold, and Portela outline some of the key constituents of this kind of thinking.
As translating electronic literature includes “translation between versions and layers of software,” the authors argue for a focus on programmed compositional processes (that is, behind the screen and before on-screen representation) just as much as instantiations on the screen, as well as a need to focus on interface as the vehicle of presentation. Therefore, translating e-lit becomes a question of translating code and interface—how much can remain?
Readers will find Mencía, Pold, and Portela’s breakdown of translating programmed literature into four dimensions (the translinguistic, transcoding, transmedial, and transcreational) fruitful for thinking and discussing translation across language, code, media, and fidelity. Mencía, Pold, and Portela are particularly inspired by the notion of transcreation, noting that the potential for transcreative works to stand as autonomous allows a re-consideration of what translation can be: potentially “a creative practice-based methodology.” In terms of creative methodology, a change in software is enough to alter presentation, a strong argument for translated electronic literature texts to stand as new texts. They explore several examples of translation as creative practice: Woetmann et al.’s The Poetry Machine (PM) (2012), María Mencía’s The Poem that Crossed the Atlantic (2016), and Luís Lucas Pereira’s Machines of Disquiet (2015).
A final section considers how electronic literature translation serves as an avenue through which to examine software culture in general—including how software translates, as they argue, systems of work into computational operations. Having outlined the complexity with which we understand the procedurality of e-lit in general, Mencía, Pold, and Portela describe e-lit translation as layered from the outset, and their contribution seeks to aid e-lit translators in negotiating these layers with necessary consideration of programming, programmable objects, reader engagement, and literary experience.
Jan Baetens appears twice in ebr this month, the second time in his riPOSTe to Mencía, Pold, and Portela, “Creating New Constraints: Toward a Theory of Writing as Digital Translation.” Baetens begins his riPOSTe by noting that the authors’ focus is not on older topics of translation and adaptation, such as fidelity, but rather, on a source and translated text’s relations to mediality and materiality, which are often secondary in the translation question. As Mencía, Pold, and Portela demonstrate, mediality and materiality precisely cannot be ignored when what is being translated is electronic literature, as the specificity of networked procedurality adds additional considerations in terms of what is and can be translated behind as well as upon a screen.
Baetens’s response to the essay is that it is “capable of taking into account a broader set of aspects and criteria that force us to reshape our very thinking of what translation is”; his position is therefore one of celebration, arguing that Mencía, Pold, and Portela have offered something comparable to Roman Jakobson’s 1960s writings on translation, or to later conversations on intralinguistic and intermedial translation. To this effect, Baetens highlights three salient contributions of the essay: global translation as including the technological dimensions of the text, the four dimensions of translation, and a shift in focus from product to process that considers computational procedurality.
— Lai-Tze Fan
Associate Editor and Director of Communications, ebr