From Gutenberg to Google: Electronic Representations of Literary Texts

Paperback

Author: Peter L. Shillingsburg

ISBN-10: 0521683475

ISBN-13: 9780521683470

Category: Publishing Industry - History

As technologies for electronic texts develop into ever more sophisticated engines for capturing different kinds of information, radical changes are underway in the way we write, transmit, and read texts. In this thought-provoking work, Peter Shillingsburg considers the potentials and pitfalls, the enhancements and distortions, the achievements and inadequacies of electronic editions of literary texts. In tracing historical changes in the processes of composition, revision, production,...

Shillingsburg explores the new and future possibilities, some yet untapped, for electronic representations of printed works.

Excerpt

Introduction

Although this book focuses on the problems and potentials for electronic representations of the fundamental materials of document-based knowledge in literature, similar conditions obtain for representations of works in music, philosophy, history, the law, and religion. These fields find in paper documents the primary materials of their research and, as in all other fields, use documents as repositories of scholarly knowledge. It would please me if the principles emerging from this study were found applicable in these other fields as well.

The title, From Gutenberg to Google, came to me in Mainz, Germany, at the Gutenberg Museum. As I stood looking at copies of the first book printed from moveable type 500 years ago – its beauty, its endurance – I had a vision in the form of a question: where, in 500 years, would anyone stand to look at a museum display of the first electronic book, and would the words “endurance” and “beauty” come to mind? The question may have a breath-taking answer, though I do not know what it is. Endurance and beauty were, perhaps, byproducts and not the primary goal of Gutenberg’s enterprise. The future of electronic editing dawns as clearly bright to us now as the future of printing must have appeared in the first decades following 1452 to the scribes employed on the new medium of print. Other scribes employed in scriptoria continued to produce elegant manuscripts for over 100 years. No doubt the complex and tedious new technologies – casting type, composing texts using type-sorts with reversed letter images and representing an enormous investment of tin and lead, printing at large presses resembling the tools of oil and wine manufacturing, and involving so much labor before a single inked impression appeared on paper – must have seemed excessive to many scribes, who could have copied any number of beautiful pages in half the time and at a fraction of the expense it took to set up a single page for print. But when the press began to be worked, hundreds of copies materialized in less time than it took to speak the text, let alone copy it. So too, now, the vexations of electronic technology – involving interface design, the ease of error, the intricacies and mysteries of acronyms like XML and TEI and its DTDs, to say nothing of the real fear of early obsolescence or hard disk crashes – are fearful costs in electronic environments seemingly more adaptable to short-lived “messaging” than as a medium for the preservation of enduring works of literary art. And yet, I believe with many others that the age of print has seen its peak and heyday, and will soon be surpassed, though not replaced, by electronic texts.

But why Gutenberg to Google instead of the equally euphonic and perhaps more expected Gutenberg to Gates? Gutenberg’s invention revolutionized textuality by making available, to a wide public, books that previously had been the purview of only the wealthy or the monastic. What Gutenberg did to democratize books and other texts, the World Wide Web has done to democratize information. And Google has become the symbol for the gateway to information on the Web; information can be found by anyone.
Furthermore, Google’s resistance to the appearances of commercial intrusion in the user’s search for information has given its pages an integrity and seriousness lacking in most search engines and information sites. Finally, Google’s method of costing and financing its services through user fees for its advertisers, based on hits rather than on licenses or product sales, suggests a way to structure the finances of electronic knowledge sites that is significantly different from the sale of books or subscriptions to databases.

Yet web browsers, regardless of the sophistication of their prioritizing processes, have no scholarly refereeing system to vouch for the quality of information and disinformation accessed in a search. Web browsers are independent of concerted efforts to develop coherent bodies of knowledge; thus a search provides, at least initially, a disordered array of information sites where reliable information and accurate representations of foundation documents are undistinguished, and perhaps indistinguishable, from rumors and gossip. They depend on a notional “cream rises” process that is undermined by a counter “bread and circuses” notion. The boundaries are unprotected and unmarked. The problem of reliability is crucial to the effective implementation of a democratized world of scholarship and its documentary source materials.

This book addresses the proposition that the electronic representation of print literature to be undertaken in the twenty-first century will significantly alter what we understand textuality to be. A significant part of this book is devoted to what I call a script act theory of written language – a theory I discussed first in Resisting Texts (1997). Script act theory may be too fanciful a name for what I have attempted, and much of what I have used in formulating the theory is, of course, taken from the thinking of others. Script act theory represents an amalgamation and synthesis of previous insights and strategies for understanding written literary texts developed in separate, sometimes isolated, fields. Rather than identifying the one or two best or most complex or most simple or useful approaches to text, I attempt through script act theory to see how competing insights into the workings of written language can be arranged as a set of tools and options, each with some consequence for user-interaction with texts. I see the result of this effort as an overview of a variety of literary strategies rather than a comprehensive unified field theory of written communication.

The impulse to provide such an overview derived in part from a curiosity about literary theorists competing to provide new reading and critical strategies, in part from a distaste for the petty disputes among textual critics and scholarly editors about which way was wrong and which right for preparing new editions, and in part – and perhaps most importantly – from a desire to understand what might be needed or what might be possible in the electronic representation of print literature that was not possible on printed paper. Again, it may be fanciful to think that such electronic representations might free print works from the artificial restraints imposed on textuality by the limitations of print.
But such propositions must be raised before they can be tested.

The importance of script act theory, I believe, is that it provides a comprehensive basis for understanding what is happening when print texts are re-represented as electronic texts, particularly in ways that transcend the limitations of print or exploit capabilities unique to electronic media. If electronic representations of print literary texts achieve no more than a transfer of text from one medium to another with added ease in searching and indexing, such a comprehensive understanding of the nature of writing may not be needed. But if electronic representations actually alter the conditions of textuality, a fuller understanding of textual dynamics is necessary. As will become obvious during a reading of this book, I am concerned not only with texts and “their” textuality, but with writers and readers in a triangle of relations that together more properly constitute textuality. Thus, it follows that electronic representations of written texts have as much capacity to change the users as to change the text. Computers have altered the way people interact with texts and the way they think about texts and thus have changed both textual uses and users. But perhaps that is just a fuller acknowledgment of what was meant at first by the question: does electronic representation of texts change the nature of textuality?

Electronic media appear to have freed readers and scholars – both literary and textual critics – from many of the restraints of print editions that kept books linear in spite of our efforts to make them radial and to provide random access. While many enthusiastic and some beautiful and some complex electronic projects have blazed trails into this territory, there has been little effective development of a theory of electronic editing to support electronic editions, archives, or teaching tools. The conceptual structures developed in this book are understood interdisciplinarily under the label script act theory. This theory draws under one umbrella much that belongs to the traditions of bibliography, textual criticism, scholarly editing, linguistics (particularly pragmatics), literary theory, cognitive science, and modern technology.

It is very clear to nearly everyone that we are in the infancy of a textual revolution comparable to the one initiated by the invention of printing from moveable type in the fifteenth century, and our revolution is developing at a far more rapid pace. As yet we are but 15–20 years into an era whose counterpart introduced a 500-year reign. We have much to learn, and, though I have tried in a modest way to be futuristic, I have probably failed; for much of the thinking in this book is derived from other scholars, and technology already exists for much of what is described here. In a sense, the future is now.

This book begins with two chapters offering an overview of the coming dual task: first, of continuing the age-old process, undertaken by every generation, of collecting, maintaining, and transmitting the texts of its literary cultural heritage; and, second, of developing a sufficiently complex and sufficiently standard and stable way to do that in electronic form.
As a means of understanding the complexity and the opportunities represented by that second task, I elaborate, in chapter three, a script act theory – an analysis of the condition of written works that distinguishes them from speech and identifies the elements required by the conditions of reading to be addressed in representing print works electronically. Chapter four outlines a conceptual space and shape for electronic editions, or, as I prefer to call them, knowledge sites. These two chapters bear the mother lode of substance in this book: its theory and practice. In chapter five I provide a specific case for a type of textual information that is especially capable of electronic representation but that has been neglected in print re-presentations of older texts because in print it was too hard to handle and because that difficulty seemed greater than the benefits of trying. I look specifically to Victorian literature and to its rapidly fading iconic, material existence as a challenge to the new media for text preservation, editing, and (re)presentation. Chapter six surveys rather critically the litter of casualty electronic editions and the false bases and limited goals that informed so many early – that is, current – efforts; and it points hopefully to the best early, though still inadequate, efforts to provide electronic texts responsibly and with added scholarly value. This chapter returns to the problems of representing Victorian fiction, begun in chapter five. Chapter seven deals with the problems arising from the fact that script act theory is still not a unified field theory of textuality and that different scholars have different views of what constitutes a work and how the concept of the work relates to the surviving textual evidence of its existence. This chapter is in some ways a reprise of chapter two, but its approach to textual scholarship will, I think, seem different in the light cast on these issues in chapters three and four. Chapter eight constitutes a reality check on electronic enthusiasm. It maps out false hopes and unrealistic goals or demands for electronic editions – demands that should be resisted. Chapter nine addresses the distinction drawn between physical documents and the works of art represented by them, and the disputes over whether it is the documentary text or the aesthetic text that is the primary object of representation in editorial projects. And finally, chapter ten, entitled “Ignorance in Literary Studies,” provides a semi-philosophical analysis of the whole effort to devise a script act theory and an electronic editions infrastructure – in short, a sort of disclaimer, perhaps a bit tongue in cheek.

From Gutenberg to Google is meant to stand alone, addressing thoughtful general readers as well as professional scholars and critics. It is not intended primarily for other textual scholars. A word about what I see as the enabling contexts for this book is in order, however; for readers cannot be expected to have read deeply in all the fields brought to bear on the subject. Indeed, I have not read all the relevant books, and I doubt anyone else has either. An important part of the immediate context of this book exists in other books I have written or edited and in some that I imagine writing and editing.
Works by other scholars form greater and more important enabling contexts, knowledge of which might help the reader to assess my arguments for their intended effects.

This book could be seen as the third book of a trilogy that was not intended as such, but which seems to me to have happened accidentally. My Scholarly Editing in the Computer Age (1984, revised in 1986 and again in 1996) attempted to survey the prevailing notions about the nature of literary texts that propelled and guided scholarly editors. Its idea most relevant to the present work is that literary works are traditionally viewed from one of five rather different and mutually exclusive “orientations,” which depend on how one posits authority for or ownership of the text. If the text belongs to the author, all others who affect the text must either do the author’s bidding, fulfilling authorial wishes, or be considered interferences. If, instead, one accepts that authors are not solitary geniuses but must enter social contracts with production and publishing personnel, who may be seen as serving their own commercial interests and/or those of the book-buying public, one would be more likely to see the influences of such persons on the text as natural and necessary aspects of the work. If one eschews both of these views of ownership and sticks rather stubbornly to the literal fact that all that survives from authoring and production acts is the evidence of documents, one might be inclined to think of each surviving document as the repository of a version of the work, regardless of the authority or agency that left its marks on the page. A person with a strong sense of the visual and material might go a step further and say that the nature of every text is to be embodied in a particular physical bibliographical form that influences every act of reading and that, hence, every copy of the work is unique, signifying its text in a way different from all other manifestations of the “same text.” Finally, there are many persons for whom none of these considerations amounts to a hill of beans, because for them the work is always an aesthetic potential – to be edited, adapted, abridged, translated, or morphed into whatever the appropriating editor/reader thinks best. The history of editing, adapting, and staging of Shakespeare’s plays – undertaken in most instances by persons who consider that they are being faithful in some sense to the author – attests to these attitudes.

In Resisting Texts: Authority and Submission in Constructions of Meaning (1997), the second book of the accidental trilogy, I attempted to survey the range of actions relating the composition, revision, production, dissemination, and reception of texts to see what effect such a survey would have on how scholarly editors and scholarly readers can or should desire scholarly editions to be produced. One of its major conclusions was that every attempt to edit a work, even when the aim of the edition was to restore earlier or more authorial or otherwise authentic readings, is not, in the end, an act of restoration but is instead a new creative act that merely adds to the accumulating stack of available editions.

The present book is aimed at a broader audience and attempts to survey the “communicative enterprise” in a broad way that might illuminate the range of activities and goals of authors and readers and shed the light of new research onto the means by which understandings are created.
The basic impulse behind this new effort is the proposition that electronic media have altered the nature of textuality – a grandiose claim with, however, some truth. My hope is that my survey will free our reading methods from some of the habits developed under the constraints of print technology and, perhaps, enrich our interactions with written texts. For the most part, however, it seems to me that this book merely brings together what readers at one time or another have always known or desired.

What I am attempting in this book is also influenced by my interest in other projects that have not materialized but which I see as logical outcomes. One such would be a book of illustrative examples of the materials and approaches to texts that show the interpretive consequences of textual investigations into composition, revision, production, dissemination, and reception of literary texts. The present book incorporates my attempt to explore the theories and methods behind such efforts. It would be very pleasing to me to see other textual scholars focus more attention on presenting the interpretive consequences of their textual studies in literary critical essays and books.

Another such imagined project is an anthology of poetry for use in introduction to poetry courses. It would present each poem in multiple facsimiles of manuscript and printed historical forms and provide as supporting materials a range of the “things that went without saying” for most contemporary readers but which no longer go without saying with most students. The idea would be that students could use such information to help them imagine the empowering, meaning-generating “not saids.” The experience that first led me to imagine this project came when two of my first-year students arrived in class one morning having read John Milton’s Sonnet XIX, in which the line “Doth God exact day-labour, light denied” seemed to them to suggest that the speaker could only work at night. When I mentioned that the sonnet is often titled “On His Blindness,” these students felt a bit foolish – unnecessarily so, had they had an anthology of the sort imagined.

Far more important than such unrealized works are the scholarly books that have influenced my thinking and that represent the best work of textual criticism of recent times. Jerome McGann’s A Critique of Modern Textual Criticism (1983) upset the scholarly apple cart, which had plodded along for years serving, primarily, the authorial orientation to texts. Not only did McGann question in provocative ways the establishment views, he suggested the importance of the social condition of texts and brought the reader into prominence as a force to reckon with. Steven Mailloux’s Interpretive Conventions: The Reader in the Study of American Fiction (Cornell University Press, 1982) had perhaps done a better job of positioning scholarly texts in relation to reader response criticism, but McGann, building on D. F. McKenzie’s Bibliography and the Sociology of Texts (British Library, 1986), has been far more influential in bringing the social and iconic dimensions of textuality into the fore of both discussion and practice of textual criticism. McGann’s Black Riders (1993) and The Textual Condition (1991), in particular, brought to our attention the interpretive importance of visual elements in literature.
George Bornstein’s Material Modernism (2001), Nicholas Frankel’s Oscar Wilde’s Decorated Books (2000), James McLaverty’s Pope, Print, and Meaning (2001), and Robin Schulze’s edition of the early works of Marianne Moore (2002) have extended our knowledge of how interpretive and editorial practice can respond to these new ideas. Without exactly ignoring McGann’s ideas, but building more directly on more traditional studies of composition and revision and on the genetic criticism of the German and French schools of textual criticism, John Bryant’s The Fluid Text: A Theory of Revision and Editing for Book and Screen (2002) provides a re-examination of the processes of authorial revision and the processes that readers try to use in dealing with revised texts. Bryant re-works and vitalizes for textual criticism and pedagogy a concept of compositional process that has been discussed extensively in textual circles in America since the early 1980s. Bryant proposes editions that enable a new way of reading that focuses on texts in motion as a fact of cultural change. His view of the ever-developing text that passes from its period of authorial intention and action onto the intentions and actions of an endless series of producers and users provides a method of reading that he applies not only to books but to cities, which he also sees as fluid texts, constantly being edited by benign and violent forces as buildings are raised and razed. He suggests that citizens can “read the city” as a developing text in which the narratives of the city at any one time are seen and understood in relation to the developing versions of the city and their own life narratives.

Equally important has been the body of thought against which much of the work mentioned in the foregoing paragraph was written; to wit, the work of R. B. McKerrow, W. W. Greg, Fredson Bowers, and G. Thomas Tanselle. These scholars and editors are frequently now dismissed in a lump, as if they were interchangeable representatives of a unified and discredited school, rather than what I believe them to be: highly individual critical thinkers with sinuous and flexible intellectual principles, malleable and adaptable to multiple textual situations. Tanselle is the only one of them who has lived and written his way through the paradigm shift affecting textual criticism in the last quarter of the twentieth century, with his annual contributions to Studies in Bibliography and two seminal books: the short and simple A Rationale of Textual Criticism and the massive collection of essays Literature and Artifacts. Greg’s, Bowers’s, and Tanselle’s writings deserve a major reprise. Additionally, there is a sense in which this book is written against David C. Greetham’s Theories of the Text, a brilliantly conceived and difficult exposé of the narrowness, biases, blind spots, partialities, and failures in the way modern scholarship and criticism handle textuality.

Two other traditions in textual criticism also inform, not always from the background, the development of this book: German historical-critical editing and French genetic criticism. The former takes a comprehensive and strict approach to historical documents to generate editions from which each relevant historical text can be constructed, eschewing most intervention on the part of the editor to improve the texts. A good introduction in English to the principles of historical-critical editing is Contemporary German Editorial Theory (edited by Gabler, Bornstein, and Pierce).
French genetic criticism has taken a very different approach, using manuscripts and other evidence of composition and revision to study the genetic processes as keys to interpretation. A good English introduction is found in Genetic Criticism: Texts and Avant-Textes (edited by Deppman, Ferrer, and Groden).

The portions of this book that attempt to discuss technological developments and their potentials are indebted in significant though general ways to the work of George Landow, John Lavagnino, Willard McCarty, Jerome McGann, and John Unsworth. More specifically, I depend on the work of Hans Gabler, Kevin Kiernan, Paul Eggert, Phill Berrie, Graham Barwell, Chris Tiffin, Susan Hockey, Dirk Van Hulle, Edward Vanhoutte, and Wesley Raabe. Perhaps the greatest influence on the final revisions of this book, particularly on the basic concepts of chapter four, has been the weekly interaction with Peter Robinson in the autumn of 2003. His knowledge of computing, his experience as an editor, his willingness to listen to strange ideas and to put his own spin on them, and his support for my electronic projects have shaped this book more than he knows. His essay, “Where We Are with Electronic Editions and Where We Want to Be,” would have made a good chapter four for this book. I tried and failed to convince him to let me use it for that chapter.

In the fields of linguistics, speech acts, communication, and cognition I am an interested amateur, no doubt. But the relevance of these fields to the dynamics of written language and the tasks of maintaining, transmitting, and editing documents leaps out from the pages of scholarship in these fields. I owe special debts to Price Caldwell, John “Haj” Ross, Quentin Skinner, John Searle, Paul Hernadi, and Oliver Sacks for stimulating my ideas, opening doors, and in some cases giving me something to rebel against.

I am grateful to Peter Robinson, Domenico Fiormonte, Paul Eggert, Price Caldwell, Greg Hacksley, Barbara Bordalejo, Gavin Cole, Anne Shillingsburg, Linda Bree, Willard McCarty, and the anonymous readers for Cambridge University Press for making suggestions and raising objections that have led to revisions and, I hope, improvements. Not least, I thank my best critic, Miriam Shillingsburg.

© Cambridge University Press

Contents:

Preface
1. Manuscript, book, and text in the twenty-first century
2. Complexity, endurance, accessibility, beauty, sophistication, and scholarship
3. Script act theory
4. An electronic infrastructure for script acts
5. Victorian fiction: shapes shaping reading
6. The dank cellar of electronic texts
7. Negotiating conflicting aims in textual scholarship
8. Hagiolatry, cultural engineering, monument building, and other functions of scholarly editing
9. The aesthetic object: ‘the subject of our mirth’
10. Ignorance in literary studies
Bibliography