The impact of information technology on the bibliography of early printed books may appear at first to be a rather limited subject. Let me remind you first of the definition of "bibliography". According to the Oxford English Dictionary it is "the systematic description and history of books, their authorship, printing, publication, editions, etc". In one sense it is a subject fit for a full-day workshop. And yet it is, curiously, one which can be covered rather succinctly, for much of the impact lies in the future. There are many exciting prospects, but few are functional, effective, efficient, and economical. I shall, therefore, opt for the succinct approach, describing in greater detail what is actually operational, and in summary what is only experimental or proposed.
Bibliography covers a number of topics related to books. Traditionally we think of a bibliography as an annotated, focused list of titles, a descriptive bibliography, as opposed to a catalog, which provides only the bibliographic record and not the annotations. But then there are annotated catalogs as well. Perhaps it is better to think of a bibliography as subject-oriented, a catalog as institutionally oriented. If a list also provides holdings data for a number of institutions, it becomes a union catalog. Traditionally bibliographies were prepared manually, usually by a specialist in the field, and represent a massive commitment of labor. The task involved developing the list of items to be included, inspecting them in order to describe them, creating a hierarchy or ordering in which to enter them in a printed list, and finally listing one or more locations of each item to provide access.
Much of the work of the bibliographer was spent in combing printed catalogs and visiting libraries to consult manual catalogs to determine what was relevant to be included. It is this part which has been most impacted by information technology. For the catalogs of libraries are increasingly automated, the online public catalogs or OPACs are often accessible on the internet, and the data contained therein is increasingly being subsumed in one of the major bibliographical utilities that provides a single search point for the holdings of multiple libraries. How successful is this new electronic access in forming the contents of a bibliography?
The online catalog, whether institutional or union, is the area of library automation that has been the earliest to be developed and the most far advanced. It has many ramifications and consequently is itself more than sufficient to occupy my presentation. I begin with the creation of the machine-readable record itself and its impact on bibliography. Let me note first that the MARC record was created as a generic, general purpose instrument. Moreover, the immediate and continuing, primary use of the record has been for current publications, current acquisitions of libraries. We all recognize that the overwhelming demand in libraries is for current publications. This is particularly the case in the sciences, far more so than in the humanities.
To begin with, the fields defined and described in the basic MARC record and most major cataloging systems are designed for twentieth-century publications. In North America this was recognized as a major limitation for books from the hand press era. It was remedied by two successive publications, jointly sponsored by the American Library Association (ALA) and the Library of Congress (LC). The most recent edition is entitled Descriptive Cataloging of Rare Books (DCRB). But DCRB addresses omissions in the cataloging rules deemed essential to describe adequately hand press and rare publications. The MARC record itself lacked the fields demanded by rare book curators. This has been remedied to some extent in North America, France, Italy, and elsewhere in the Western world by the creation of new fields that address these omissions. They flesh out the basic MARC record. This subject has also received attention in IFLA. To some extent these fields of special concern to early printed books were addressed in UNIMARC. But there have been subsequent proposals to enlarge UNIMARC to make it more responsive to rare books.
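The general shape of the problem can be sketched in a few lines. In this toy model, the numbered fields (245 title, 260 imprint, 300 physical description) follow standard MARC usage, but the record content and the "fingerprint" field added at the end are invented stand-ins for the kind of specialist data the rare book community lobbied to have added; they are illustrations, not the actual MARC extensions.

```python
# A toy MARC-style record: numbered fields, each with coded subfields.
# Tags 245/260/300 follow the general MARC pattern; the title and
# imprint shown are invented examples.
record = {
    "245": {"a": "The anatomie of melancholie", "c": "by Democritus Junior."},
    "260": {"a": "Oxford :", "b": "Printed by I. L.,", "c": "1621."},
    "300": {"a": "[12], 783 p. ;", "c": "4to."},
}

def add_rare_book_field(rec, tag, subfields):
    """Flesh out a basic record with a specialist field -- e.g. a
    fingerprint or signature statement -- that cataloging geared to
    current imprints never needed. The tag name here is hypothetical."""
    rec[tag] = subfields
    return rec

# The basic record lacks the field; the enlarged record carries it.
add_rare_book_field(record, "fingerprint", {"a": "b1 A2 ,$ : b2 Z2 a,t"})
print(sorted(record))  # ['245', '260', '300', 'fingerprint']
```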
A second limitation is again related to the focus on current publication. When machine-readable cataloging was first introduced it was applied first of all to current acquisitions. The next phase, retrospective conversion, worked backwards from more recent to older publications. The emphasis was always on the most used part of the collection, and on the general collection rather than more esoteric collections. And we must remember that one of the great selling points of the new bibliographic utilities was the employment of shared cataloging. This was particularly important to smaller libraries and public libraries, as opposed to academic libraries. And in the case of the Online Computer Library Center (OCLC), the largest bibliographic utility in the world, the overwhelming use, at least in terms of percentage of libraries, if not of searches, is by public and smaller libraries. They input new, high-use titles, and retrieve the same. The Research Libraries Information Network (RLIN) is a somewhat different case. But both share Library of Congress cataloging, which is essentially of new books.
As a consequence early printed books or rare books were almost always the last to be done, as they were sequestered in departments of Special Collections. And this parsimony with resources, insofar as Special Collections was concerned, was evident in other ways. Whereas the manual cataloging for general collections was traditionally current, the same was not true of Special Collections. In many libraries the cataloging backlog in rare or early printed books departments was woefully large. Thus when retrospective conversion took place, there were no records to convert! Either way Special Collections lost out. The basic stock of older books was the first part of the collection to be acquired, at least in libraries whose foundation went back to the nineteenth century or earlier. Cataloging rules were minimal, consistency almost unknown, and the record itself as brief as possible. Consequently when the records were converted the amount of information they contained was minimal. This was the chief problem in the major retrospective projects at Bodley, the British Library, or the large-scale efforts taking place now in Germany or France. Or, for newer collections, there was no cataloging at all!
This focus upon current publications is also reflected in the search engines of the major bibliographic utilities. Only a limited number of fields are searchable. Consequently it is very difficult, sometimes impossible, to isolate the title or cluster of titles one is seeking. There is no searching by date or place or imprint name. To be sure, some of this is explainable by the cost of indexing more fields in the record and the CPU time involved in running searches. But it is a major limitation. This lack of functionality related to early printed books is not limited to the utilities. It is also true of most local systems. Many of the fields and subfields dear to rare materials catalogers are not indexed, and/or do not display. Only in specialized files like the English Short Title Catalog is this remedied. And that further flexibility is more commonly found in CD-ROM databases than in online files. And yet the bibliographer needs precisely this kind of detail and the ability to isolate and extract all material relevant to his subject by searching on a variety of fields, often very focused ones.
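The consequence of partial indexing is easy to demonstrate. In the toy model below (the records, field names, and choice of indexed fields are all invented for illustration), only the fields a system chooses to index are searchable: an imprint-place search fails even though the data sits in the record.

```python
# Toy catalog: each record carries a title, an imprint place, and a date.
records = [
    {"id": 1, "title": "Epigrams ancient and modern", "place": "London", "date": "1577"},
    {"id": 2, "title": "A geographie with mappes", "place": "Oxford", "date": "1635"},
]

INDEXED_FIELDS = {"title"}  # a typical utility indexes only a few fields

def build_index(recs, fields):
    """Build an inverted index, but only over the chosen fields."""
    index = {}
    for rec in recs:
        for f in fields:
            for word in rec[f].lower().split():
                index.setdefault((f, word), set()).add(rec["id"])
    return index

index = build_index(records, INDEXED_FIELDS)

def search(field, word):
    """Return matching record ids, or None if the field is unsearchable."""
    if field not in INDEXED_FIELDS:
        return None  # the data is in the record, but no search reaches it
    return index.get((field, word.lower()), set())

print(search("title", "epigrams"))  # {1}
print(search("place", "Oxford"))    # None: imprint place is not indexed
```

Adding a field to `INDEXED_FIELDS` makes it searchable at once, which is precisely the trade-off of indexing cost against functionality described above.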
These, then, are the limitations that often affect the accessibility or quality of machine-readable catalogs of older books. How, then, are they accessed? They can be accessed in one or more of three ways. In North America they are most commonly accessed through the major bibliographic utilities, OCLC or RLIN, in which the records have been created or loaded. This provides the widest access. Whereas the utilities were first consulted primarily by librarians, new user-friendly search mechanisms, FirstSearch for OCLC, Eureka for RLIN, have made them easily accessible to individuals. And most academic libraries, if not public libraries, permit their patrons to search these databases directly. The size of these databases is enormous, tens of millions of records. And though the percentage of early printed books is small, the number is still significant. A second mechanism is the specialist databases - e.g., the Short Title Catalog Netherlands. Some of these databases are available alternatively on CD-ROM - e.g., Wing - or both, like the English Short Title Catalog. And finally, the miracle of the internet now provides direct access to the online public catalogs of many libraries. From his office the scholar can now search the libraries of the world for materials. It has been a remarkable and rapid transformation.
The impact of electronic catalogs upon the bibliography of early printed books has been immediate and long-lasting. I want to address it from two aspects - that of the reader and that of the curator. For the reader the advent of electronic catalogs has revolutionized the way in which scholars work. In the days of manual catalogs, much of the scholar's time was spent in searching for materials. Now, with the records of the world's great libraries at his fingertips, the scholar can much more easily find out what exists and where it survives. For the individual bibliographer this is the most dramatic change. Scholar after scholar has searched the ESTC to form the basis of a bibliography. We have even downloaded relevant entries onto tape or disc and sent them to the scholar to utilize. I recall one scholar who studied epigrams. He made the usual round of libraries searching for works that collected them. Then he learned of the ESTC. In one search he increased his total number of entries by twenty-five per cent. Moreover, he found one work, consisting of nothing but epigrams, to be on his own home library's shelves. He had been unaware of the existence of this title. Another scholar working on geographies with maps found searching the ESTC similarly fruitful. Indeed, the scholar who does not search the ESTC omits this step at his peril. I received a volume from the Royal Historical Society a few years ago that contained a compilation of charges to grand juries published in the eighteenth century in England and its dependencies. The compiler's search of repositories, including visits to county record offices, had yielded about sixty, the contents of this volume. I checked the ESTC and with two searches was able to more than double this number! I even published an article providing a comprehensive list as an addendum to the work in question.
We rely, in turn, upon bibliographers and bibliographies to refine our own work. We cite them in a notes field as a means of distinguishing a particular issue or variant, referring the reader to the bibliography for fuller details. We use bibliographies to locate copies of unique items that have escaped our canvass. We send lists of our records to bibliographers to edit, correcting or amplifying our notes. It is a reciprocal arrangement that works to everyone's benefit. There are some scholars who maintain a regular correspondence with us, calling attention to institutions we should survey, correcting entries, etc.
To be sure, when the data is not centralized, when it must be searched out in individual OPACs as well as union catalogs and bibliographic utilities, it still takes a commitment of time. Moreover, many catalogs are either inaccessible through the internet or may be accessed only by password. And initially, the differences in software employed in the individual catalogs often made searching difficult. But these varieties of software, often incompatible, are being overcome with the Z39.50 protocol. This protocol enables a variety of catalogs to be searched without knowing anything about their search engines. One only has to know the search engine of the local system through which one enters them. When I consult RLIN I use one set of commands. If I access it through the University of California's union catalog, Melvyl, all I have to know is Melvyl (local system) functionality. But this introduces a new set of limitations, because Melvyl, like most local systems, has limited search capability. One must match the best engine with the database to derive maximum benefit. But Z39.50 technology is a remarkable advance. To give just one example, one may compare the Library of Congress web site dedicated to searching library catalogs using Z39.50 - see http://lcweb.loc.gov/z3950/gateway.html#other - with the BL's Gabriel - at http://icarus.bl.uk/gabriel/en/welcome.html - which does not use Z39.50, thus necessitating knowledge of each system one wishes to search.
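Reduced to its essentials, the logic of Z39.50 is that of an adapter: the user phrases a query in the local system's own syntax, and the gateway translates it into a common attribute set that every target understands. The sketch below is a loose analogy, not the actual protocol encoding; the Bib-1 "use" attribute numbers 4 (title) and 1003 (author) are genuine Z39.50 values, but the local command syntax and everything else here are invented.

```python
# Sketch of the Z39.50 idea: one local syntax, many remote targets.
# Bib-1 use attributes 4 (title) and 1003 (author) are real Z39.50
# attribute numbers; the command format below is a made-up local syntax.
BIB1_USE = {"title": 4, "author": 1003}

def parse_local_command(command):
    """Parse a hypothetical local syntax like 'find title beowulf'."""
    _, field, term = command.split(maxsplit=2)
    return field, term

def to_z3950_query(field, term):
    """Translate a local query into a protocol-level attribute/term pair.
    Every Z39.50 target interprets the same attribute numbers, so the
    user never needs to learn each remote catalog's own search engine."""
    return {"use_attribute": BIB1_USE[field], "term": term}

query = to_z3950_query(*parse_local_command("find title beowulf"))
print(query)  # {'use_attribute': 4, 'term': 'beowulf'}
```

The limitation noted above follows directly from this design: the gateway can only express what the local syntax lets the user say, so a weak local search engine caps what even a rich remote database can return.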
Local and regional particularism, so commonly found in many countries, is gradually giving way to collective efforts. The cost of going it alone is too great. The ideal is, of course, an international, not a national, union catalog. And OCLC and RLIN do cross national and continental boundaries. Specialist catalogs like the ESTC are international in scope. And, most promising for the hand press era, the development of the Consortium of European Research Libraries and its union catalog holds out great promise for a single source for both bibliographic records and holdings for early printed books. But a union catalog raises another kind of problem. I have addressed the question of the format of electronic records briefly. But now I must turn to cataloging rules. It is the marriage of the two, formats and rules, that produces the catalog and the bibliography.
Specialist catalogs tend to be detailed. The idiosyncrasies of hand printing produced so many variants or issues that the record must be sufficiently detailed or flexible to note the variations and define issues and editions. But though the MARC record would appear to make cataloging a very objective process, it is in fact a very subjective one. What form should the main entry take? Should it be full or abbreviated? If the latter, how are the ellipses determined? How are diphthongs and archaic letters transcribed? What spelling of the name goes in the heading? How many alternatives are entered in the cross references? These are vital questions for electronic searching, because the search engine cannot cope with discrepancies. If all the forms of the name are not entered somewhere in the record, the desired items will not be retrieved. The rigidity of electronic records is a major problem.
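This brittleness is easy to demonstrate: an exact-match search retrieves nothing for a spelling that is not recorded somewhere in the record, which is why cross references for every variant form matter so much. The heading and variant spellings below are invented examples.

```python
# An exact-match search engine finds a record only if the query matches
# the heading or one of its recorded cross references.
record = {
    "heading": "Shakespeare, William",
    "cross_references": set(),  # no variant forms recorded yet
}

def matches(rec, query):
    """Exact-match retrieval: no fuzziness, no forgiveness."""
    return query == rec["heading"] or query in rec["cross_references"]

print(matches(record, "Shakspere, William"))   # False: variant not recorded

# Enter the variant forms as cross references and the search succeeds.
record["cross_references"].update({"Shakspere, William", "Shake-speare, William"})
print(matches(record, "Shakspere, William"))   # True
```

The cataloger's subjective choices about which forms to enter thus determine, quite mechanically, which searches will ever succeed.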
Universal access was formerly available only for a limited number of libraries with widely distributed copies of their printed catalogs. The British Library and the Bibliothèque Nationale come immediately to mind. And usage of rare material was consequently inordinately heavy at both institutions. So too, scholars working in Renaissance English literature would naturally gravitate to the Huntington or the Folger, even though only the latter had a published catalog. Incunabula, through the Gesamtkatalog der Wiegendrucke, Goff, or individual library catalogs, were generally reported and known. But the inclusion of a library's holdings in the ESTC, for example, has greatly increased usage. Libraries now receive more queries about their collections in a day than they formerly received in a week or a month.
At the Center for Bibliographical Studies, which I direct, we have found this true for both the large-scale union catalogs in which we are engaged. For in addition to the ESTC we also manage the California Newspaper Project, part of the United States Newspaper Project, funded by the National Endowment for the Humanities and administered by the Library of Congress. We create the records in CONSER, the national serials database maintained by OCLC, and enter the holdings in the OCLC serials database. We also record them in an in-house file. Librarian after librarian tells us of dramatic increases in the number of calls to inspect the material we have cataloged. Libraries which might have received a few calls in a month are now receiving two to five calls a day. The curator of printed books at the Henry E. Huntington Library reports the same phenomenon for pre-1800 English titles recorded in the ESTC.
This greatly increased usage impacts libraries in several ways. It requires more staff support: more reference staff, reading room staff, pages, all those who handle books or respond to calls for their use. There is more pressure to catalog uncataloged material now that scholars are aware that a library may contain matter of interest to their projects. But this increased usage is also a benefit. It is a statistic that can go to the director and provides a justification for more resources. It also directs readers to alternative institutions instead of the traditional primary locations - the Folger and the Huntington in the U.S., Bodley and the British Library in England. This brings me to another benefit: acquisitions.
When libraries do not have full control over their own collections, and are not aware of the holdings of their neighbors, they cannot buy as wisely or selectively. They may, unwittingly, buy something they already own but have not cataloged. Or they may buy something held by a neighboring library, when limitations on acquisitions funds suggest they should buy an item not locally available. But with online catalogs, with union catalogs and bibliographic utilities, this is all changing. The curator can determine, first, if the item is locally or regionally available. If it is, he or she can use limited acquisitions funds for items that are not otherwise available. This use in acquisitions is a major benefit of information technology for custodians. In the past the curator would have gone to specialist printed bibliographies for this information. But of course a bibliography was always out of date, even by the time it was published. An online catalog is as up-to-date as the last entry. The bibliography is superseded. But not entirely. We try to provide citations to standard bibliographies in the record. You can even search on such entries in the ESTC. So one could begin with a printed bibliography but then check the ESTC to see if there is more recent information.
It works also in other ways. Many dealers now have lists of items in stock accessible through the internet. A curator can search the internet to see if a particular item is available or, if available in multiple copies, which is the best priced. If the item is unique, or not available in the region, this gives it a higher priority for acquisition. On the other hand, this access benefits the bookseller as well. We are all - both collectors and curators - aware of the importance of the note: only one copy noted in STC; no copy in North America, etc. The price rises as the number of copies recorded in standard bibliographies falls. The listing, by establishing rarity, increases the value for both the seller and the buyer. Still, it is of particular value to curators to know where copies may be found.
There is another aspect to acquisitions. Not only does access to reliable online union catalogs tell us about the availability of the item, it also aids in identifying the edition. One curator has told me that this is especially valuable to him for modern books. When he is studying the acquisition of a particular item, he can often glean essential information new to him from the internet. Was it issued with a dust wrapper, a frontispiece? Is there an earlier edition preceding the official first edition? Are there states, issues, that affect the value of the item under consideration? To be sure, much information of this kind can often be found in published bibliographies. But the large online catalogs have such a wealth of current information gleaned from so many sources that they update significantly traditional, older, printed sources.
The transformation in access has its down side. As well as inaugurating what may be a flood of readers, it also alerts a less welcome class of visitors: book thieves. The inclusion of a library's holdings in a union catalog makes it possible for reader and thief alike to identify what is unique or rare at a particular institution. This new accessibility is, in fact, regarded as a mixed blessing by some librarians. We found this to be especially true when we canvassed the Oxford college libraries for the ESTC - first for their eighteenth-century holdings, then, at selected institutions, for their seventeenth-century holdings. These libraries have small staffs. The fellow-librarian usually has only supervisory responsibilities and is rarely present. An assistant librarian may be the only professional, and perhaps the only full-time worker.
Let me give one specific example, Worcester College. Worcester has the finest collection of English Civil War and Interregnum tracts in existence outside the Thomason Collection at the British Library. At least ten per cent of its rich holdings are found nowhere else. It is particularly rich in provincial and Scottish imprints, because George Clark, who formed it, traveled with the army and resided for some years in Scotland as secretary to General Monck. When we negotiated for permission to inventory the collection, taking copies of all the title pages, there was great reluctance on the part of the then librarian. She refused to let the shelfmarks be entered in the ESTC for fear they would lead thieves to the items themselves. Eventually we were able to persuade her to allow us to make the canvass. In order to create a full AACR2 record for each item we had to have full title page and physical description information. Our canvass was very thorough. We found many more items than those identified by the Wing team. Many were items in locations not known to the librarian herself. They are all entered in the ESTC. The tragedy is that if a scholar, finding the entry in the ESTC, wishes to see the item itself, he may be frustrated. For the only record of its existence is in our database - and the location information has been deliberately withheld and may not be retrievable!
My comments so far have focused on catalogs and access, the most important area impacted by information technology. But there are other aspects, related both to access or transmission of text and to preservation. This is an area of very great importance to the bibliographer. To describe a copy he must inspect it, note binding, endpapers, pagination, check for features that mark a specific issue - a broken letter, a catchword, etc. The small details that identify an issue or variant may ultimately find their way into an annotated catalog or bibliography, but they are usually not present in the institution's bibliographic record for that item. Because of the demand to inspect and study rare texts, those of the greatest importance are put at great risk of deterioration and even destruction. I am told - the story may be apocryphal, but it is nevertheless instructive - that some very rare early English texts have literally been read to pieces at the British Library. Access to unique items must be carefully limited.
The only real alternative up to now has been photographic reproduction - essentially microfilm. It is the only accepted medium with long term stability. But it has severe limitations. It is awkward to use, often difficult to read, requires cumbersome apparatus, is easily scratched and is generally in black and white. Many of the special features of texts - especially illustrated texts - are lost or overlooked. Yet until now there has been no satisfactory alternative. In the United States the National Endowment for the Humanities launched a massive brittle books program, one developed by the research libraries themselves, because the rate of self-destruction, only accelerated by heavy usage, was so great. But now there are electronic alternatives: digitized images and digitized texts.
At this stage the digitized images are the most successful. They can reproduce objects with breathtaking clarity, color with astonishing faithfulness. The definition and detail are such that they permit almost the same kind of access as the item itself. Digital images can even go beyond the original to provide an enhanced access not obtainable from the original item. The most striking example is the digitization of the unique manuscript of Beowulf at the British Library. This is one of the great literary treasures of the English-speaking world, indeed of the whole family of Germanic languages. But the kind of photography employed to capture the image of the manuscript gave a new insight into its composition and content. It revealed erasures and overwriting not detectable to the naked eye. Beowulf was one of the manuscripts in the Cottonian Library damaged in the fire of 1731. Fortunately it was saved, but not without some loss of text. But the extraordinary skill of the photographers and the exceptional qualities of the filming process enabled writing to be recovered from the scorched pieces, text heretofore known only from transcripts made before the fire.
The Beowulf reproduction is photography-based. The photographs, not the manuscript itself, were digitized. But scanners permit direct digitization. Scanners have had severe limitations. They are flat-bed instruments and reproduce in a distorted manner codices that cannot be disbound. Text in the inner margins is often difficult to read or even unintelligible. Yet there is a natural reluctance to disbind volumes, particularly when the bindings are original and well preserved. A new development is the overhead scanner, which can be trained to compensate for the curves. Each year such equipment becomes more sophisticated, more accurate. My colleagues and I had a demonstration of the new Minolta overhead scanner last year that we found very impressive because of its ability to compensate for the curves of an open book.
There are two major disadvantages to the use of digitized images for preservation and dissemination. The electronic medium is essentially unstable. There is not that degree of stability or durability that justifies extensive investment in digitized images as a preservation medium. This is the general consensus of the research community and, most importantly, of the funding sources. Although the National Endowment for the Humanities is a major player in funding preservation projects in the United States, and has funded a number of development projects involving digitization, it still regards microfilming as the preferred medium.
The second disadvantage is cost. Digitized images as opposed to texts, and even more color as opposed to black and white, are currently too costly for large-scale projects. Digitization does attract a lot of attention. There are numerous small-scale projects, especially educational ones, that are seductive and appealing. The Librarian of Congress has jumped on the digital bandwagon with the National Digital Library. He has persuaded Congress to invest in it. The digitization craze in many forms is sweeping the world, and certainly the United States. At the University of California its proponents have persuaded the President that this is the future of information preservation and dissemination. Funding for libraries, collections, and print-based products is suffering as a consequence. But the cost of converting all the information currently preserved in print-based products is so astronomical that it is ridiculous even to contemplate it. Selective conversion is a reality. And as the technology improves and the cost drops, more and more texts will become available in digitized versions. But cost must be justified by demand, and for many earlier materials the demand is so specialized that the conversion cannot be justified. And yet one cannot predict the future.
The rate of technological change seems to be ever increasing. I was told just this month of new developments that permit whole feature-length motion pictures to be stored on a conventional CD-ROM. This is a vast expansion in storage capacity. And equally great expansions are predicted for the next decade. So it is only a matter of time, but always of cost. Some say CD-ROM is already on the way out, to be replaced by DVD. With the internet, which eliminates long distance charges, every source is equally accessible to every user. And as digitized texts and images are created and mounted on databases accessible through the internet, the cost factor is essentially eliminated except for the institution which archives the data. And in spite of the current cost much is being attempted. A consortium of presses is producing out-of-print books on demand, a variation on the brittle books project, by converting the originals into a digital format and printing a new version on alkaline paper. Note that the paper product is still the preferred format! Under a Mellon-funded initiative a range of periodical titles in the humanities is being scanned and made available free of charge in electronic format over the internet. Libraries throughout the world are digitizing some of their treasures and making them available. But there is no central source, no international bibliographic utility, through which to find a specific item.
The prospect is a depressing one for scholars. Digital information will be focused on current publications. It is far easier to preserve and distribute digitized texts that are created from electronic texts and then converted to print than to reformat printed texts into a machine-readable equivalent. But the expense, at least for the present, is so great that other components of our libraries will be the sufferers. And early printed books will be at the bottom of the totem pole, just as they have been for machine-readable cataloging and retroconversion. There will be exceptions, projects with media appeal, and ones so special that the extra cost is justified. Beowulf is one such example. Incipit, the mating of a union catalog of incunabula with images of every page that is bibliographically or typographically significant, to allow for the precise identification of specific editions, is another example. Wouldn't we love to do that for the ESTC! But for 412,000 titles, soon to be 450,000-475,000, where the material to be reproduced is widely scattered over many continents, it is not yet practical. To set up the camera in every location where there are unique items would be prohibitive.
Digitizing images is perhaps most relevant to early printed books, because it preserves the appearance of objects and the relationship of the components of a page to each other. Indeed this may be the ultimate basis for true bibliography. For, after all, a descriptive bibliography is only a way of translating a visual image into words. Increasingly, specialist bibliographies will be that text mated to a digitized reproduction of the item itself, or of parts of it, enough to establish the unique character of the item at hand. But for the readers of texts there is another dimension that is of equal if not greater interest. I speak now of digitized texts themselves, not just the images of texts. This too is a medium that has attracted great interest. It is far cheaper to store digitized texts, given current storage media. And digitized texts offer one very special feature that makes them avidly sought by scholars. They can be searched and indexed electronically. To provide a wide range of texts in electronic format has the potential to once again revolutionize scholarship. But here, too, technology does not match demand. As I have said already, in relation to digitized images, the current cost is so great, and the sheer extent of potential texts to digitize so great, that there is no way that the great repositories of printed works in the world of learning can be converted to digital format in any conceivable future.
Digitizing texts depends upon two components: hardware and software. The hardware has evolved fast and well. I recall when I purchased a Kurzweil scanner for my college fifteen years ago. I purchased the first in the State of Louisiana and set up a text processing center for my faculty. When I first looked at the machines they cost $150,000. The cost came down after a few years and we purchased one for $50,000. We spent another $30,000 or $40,000 for a tape drive to link to it. Now, for a few hundred dollars, desktop scanners that are far more efficient and reliable can be had, and the data can be stored on tape or hard disk at negligible cost. But it is a combination of hardware and software that is required to complete the job. The sophistication of the available software is also improving exponentially, in terms of both speed and accuracy. But the typeface has to be very regular. Otherwise the error rate can be so high that the cost of editing and correcting the output quickly makes it uneconomical. It is instructive to note that most of the large-scale conversions of printed text in recent years have been made by keyboarding rather than scanning. The digitization of the British Library's General Catalog is one example among many. The retroconversion of printed card catalogs is achieved by the same means: matching to existing records and keying of new records. Chadwyck-Healey's impressive English Poetry Database was also created by keyboarding, not scanning. But for a bibliographer a digitized text, as opposed to a digitized image, is of little value. It bears no physical relationship to the object itself. It is not a means of identifying it. It is not a substitute.
There are many notable experimental projects being conducted all over the Western world: a newspaper in Jerusalem, printed books in Europe and North America. The most tempting prospect is the digitizing of texts from microfilm. Once the hardware and software can do so with the necessary accuracy, the impact could be tremendous. I am particularly interested in the prospect of digitizing newspaper texts from microfilm, since such a great number have been transferred to this medium. Microfilming is an urgent necessity for post-1875 issues because of the use of bleached paper that is rapidly disintegrating, a process accelerated by poor storage, exposure to light, and overuse. This is a legitimate concern for special collections, where this material is often housed. But the technology does not yet meet the challenge for digitized texts. Damaged originals, print-through of ink, and yellowing paper that reduces the contrast between text and background all militate against accurate scanning.
For students of early printed books this is especially disheartening when one considers hand-set type. Scanners require a high degree of consistency in the evenness of the light, color and contrast, and font styles and sizes. At present hand-set type defies scanning with any high degree of accuracy. Readers of newspapers, especially those produced from hand-set type, are doomed to wait until the technology can meet the challenge. This could be within this decade or could be decades away. Until then, readers must continue to consult the originals or photo-reproductions, a tiresome, laborious process, and one that has an inevitable wear impact upon the materials being used.
The Research Libraries Group (RLG), ever an innovator, is investigating ways to develop a stable, intelligible infrastructure within which preservation digitization can take place. Five working groups are addressing such issues as digital archiving, preservation and reformatting information, digital image capture, preservation metadata, and the preservation of magnetic media. And ultimately it is this combined kind of approach that will produce the products we need and want. In like fashion, the various media in which information is preserved, and the tools used to access them, must be available at a single entry point in order to serve the greatest range of readers. In a related endeavor RLG is constructing a new information management system, entitled Arches Infrastructure. Arches Infrastructure will enable user and document authentication, version control, compensation to rights holders, efficient management of storage media, and the refreshment and migration of information, and will address the impermanence of URLs. Researchers will be able to launch searches within the Arches framework that seamlessly access MARC databases, SGML-encoded information, full-text databases, and image-only documents.
How then should I conclude, and summarize what I have suggested in response to the topic set for me? The impact of information technology upon the bibliography of early printed books is as undeniable as it is upon any other component of the library or the information industry. But at the time at which I speak, the impact is mainly found in providing access, and it has been slower in coming than we all had hoped. Moreover, the cost of the equipment needed to take advantage of information technology is constantly growing, as each new innovation can require the replacement of the previous generation of equipment. Information technology has enormous potential for preservation and for the dissemination of texts in a distributed environment. But that is in the future. Whether near or far I do not have the temerity to pronounce.