
   64th IFLA General Conference
   August 16 - August 21, 1998


Code Number: 027-137-E
Division Number: III
Professional Group: Public Libraries
Joint Meeting with: -
Meeting Number: 137.
Simultaneous Interpretation:   Yes   /   No

Measurement and evaluation of public libraries

Beverly P. Lynch
University of California, Los Angeles


Four approaches are particularly applicable to the evaluation of public libraries: the objective-oriented approach, the management-oriented approach, the expertise-oriented approach, and the naturalistic and participant-oriented approach. As interest in standards for libraries (the expertise-oriented approach) declined, the objective-oriented approach emerged, with an emphasis on performance measures. The objective-oriented approach, however, allows neither for institutional comparisons nor for an assessment of quality. These issues continue, and the profession must take responsibility for the development of evaluative criteria and find ways to assist libraries in applying the criteria to evaluation and decision-making.



Evaluations of libraries are inevitable and ever present. All aspects of library development are influenced by the results of evaluations. In order to design successful evaluations, it is essential that the objectives to be accomplished are known. Also, the criteria used in the evaluation must be specified and the implications of values must be explicit. In the evaluations of libraries, it is my conviction that the essential evaluative criteria should be developed by the library profession and that standards for libraries, developed by the profession and agreed to by it, should provide the basic measures for evaluation.

Of course, precise evaluations of libraries and library services can never be the sole basis of decision-making. In many cases, politics is involved and a highly subjective element enters in. It is the evaluation, however, based upon sound criteria and carried out systematically, that can temper the politics.

Certain questions have emerged in evaluations of libraries, and these questions have established the essential components of library standards.

All of the existing standards for libraries derive from efforts to determine why one library is more effective than another and to decide what constitutes quality and achievement in the libraries. The development of standards grew out of the interest in evaluating libraries.

Evaluation usually involves deciding on what purpose the evaluation is to serve. It requires the collecting and analyzing of data and the making of value judgments. The data collection is the gathering of specific information related to a problem. The data are collected in order to address specific concerns or specific problems. For example, librarians want some information that will enable them to improve a specific library program. Or, there is an interest in achieving specific objectives and a plan of evaluation is required to do that.

Standards are developed by the professional community to assist in the evaluation of library programs. The quality of a library program is judged or evaluated by experts who use standards, professionally developed, and their own expertise in making their determinations.

Alternative Approaches to Evaluation

There are alternative views of evaluation and these views have influenced librarians who are writing about library evaluation, standards, and performance measures. It is important for us to acknowledge the varying viewpoints and to recognize that not all librarians agree on the approach to take in undertaking library evaluation. Four approaches are particularly applicable to the evaluation of libraries:

  1. Objective-oriented approaches.
    The emphasis in this approach is on specifying goals and objectives and determining the extent to which they have been achieved. The evaluator gathers evidence of program outcomes and compares the actual performance against the program objectives. The work on performance measures in libraries emphasizes this approach (1). Most work on performance measures in libraries stresses the need to develop performance measures within the context of strategic planning and the library's mission and its goals and objectives (2).

  2. Management-oriented approaches.
    The emphasis is on identifying and satisfying the information needs of managerial decision-makers. The evaluator provides information and alternatives to the decision-maker. This method of evaluation usually is conducted by external evaluators. Management makes known to the evaluators what they are to examine and the kinds of outcomes that could be expected.

    A related approach, which is gaining in favor in library evaluation, is the use of benchmarking to guide management decisions. In the business literature and particularly in the total quality management (TQM) literature, a benchmark means a standard of excellence against which other similar outcomes are measured or judged. A library, seeking to improve a particular service or process, will identify another institution which it decides has an exemplary service or process. It then measures its own against the exemplary one and determines the necessary changes which have to be made so as to improve its own. This use of benchmarking is essentially comparative evaluation.

  3. Expertise-oriented approaches.
    The emphasis is on the direct application of professional expertise to judge quality. The judgments are made using standards and practices accepted by the professional community. This approach historically has guided the development of the standards for public libraries and is the approach being used by most states in the U.S. (3). In the 1970s and 1980s, standards developed and adopted by professional organizations often were abandoned. The introduction to the 1986 IFLA Guidelines for Public Libraries describes why IFLA abandoned the standards:

    It is the task of library authorities and their chief librarians to assess needs, determine priorities, and quantify the resources required to meet the needs of their communities. Recommendations as to desirable levels of provision, based on past experience in quite different circumstances, are bound to be unreliable and misleading (4).

    Most of the standards for libraries emphasize the resources required to ensure adequate collections, services, staff, and facilities. The standards develop out of a consensus of professionals who are considered expert in the particular library service. The library standards have been particularly useful to those libraries just being established or those which have been inadequately funded for some time. Libraries which exceeded the statements of standards, however, often did not feel as well served: they feared that their resources might be reduced for the very reason that they exceeded the standards. Furthermore, there was a strong movement toward the individual library determining its own goals and objectives and deciding what resources were required to achieve them. That is, the objective-oriented approach to evaluation became the popular method of evaluation, driving out the expertise-oriented approach.

  4. Naturalistic and participant-oriented approaches.
    The emphasis is on the involvement of participants or stakeholders in determining values, criteria, needs, and data. The evaluator works with the stakeholders, facilitating their participation and interacting with their interests. This approach is guiding current research activities in the evaluation of digital library projects (5). It also has been emphasized in much of the literature on performance measurement in libraries (6). As Powell observed in his review of public library use studies and the use of performance measures, "...the movement in librarianship has been towards judging library effectiveness from the point of view of the user" (7). The variations in approach lead us to recognize that values and judgments play an important part in library evaluations.

Current Directions

During the 1970s and 1980s interest declined in developing quantitative standards for libraries. Output, or performance, measures were developed instead. As library costs rose faster than library income, librarians sought meaningful and measurable ways to show how their libraries were performing. The development of performance measures does not include indicators of what excellent service might require. Rather, the approach is that of a single library assessing its services in relation to its own goals and objectives (approach 1 above).

Performance or output measures developed first in the public library sector. In that community there is now a recognition that performance, in order to be satisfactory, requires a certain level of resources. As King Research observed in Keys to Success, "Performance is the relationship between resources that go into the library -- the inputs -- and what the library achieves using those resources -- the outputs or outcomes" (8). What is emerging is the need for standards relating to resources (or inputs) that will enable appropriate levels of performance (or outputs), and there is an emerging interest in developing professional standards against which a particular library can be evaluated. There also is emerging an interest in comparative assessment and evaluation.

One approach in comparative assessment is to identify a set of institutions with which one wishes to be compared and to use that set as a referent in making comparisons on various aspects of library performance. The Association of Research Libraries (ARL) is experimenting with this approach. In developing the initial set of ratios, ARL identifies three issues which must be taken into account in assessing the reliability and validity of the data: 1) consistency, that is, the way data are collected from institution to institution and over time, where definitions present a difficulty; 2) ease versus utility, that is, what is easy to gather data on may not be the most desirable variable to measure; and 3) values and meaning, which may exist only in the context of a local situation. ARL has been collecting statistical data from its members for many years. Thus the Association is in a good position to use statistical measures which will help measure the quality and costs of library services and enable institutional comparisons.

In the development of any statement of quantitative standards, and particularly in the development of any international standards, an important consideration is the need for uniform practices in collecting statistics and the need to develop standard definitions. Standardization of statistics is essential if accurate comparisons are to be made. John Sumsion has conducted international comparisons of public libraries using published statistics of individual library authorities within each of 25 countries (9). While his purpose was not to make comparisons, such comparisons are inevitable. His comparison statistics are attached in Appendix A.

As Sumsion says, the data cannot be considered precise because of the different methods of collection, different definitions, and problems with incomplete datasets. It should be noted that data from different years have been used. The expenditures have been converted to British pounds throughout, using the average exchange rate for the year to which the statistics apply. The years covered range from 1992-1994. In the tables, "loans per capita data" are frequently for loans of total stock rather than books only.

Sumsion's work offers guidance in the development of comparable statistics internationally. Also the standard, ISO 2789, scheduled for revision in 1998, provides guidance. Standardization of statistics is essential if accurate comparisons are to be made and if international standards or benchmarks are developed for libraries.

Final Comment

Standards for libraries, prepared and adopted by professional librarians and library associations in countries around the world, have been successful in identifying the kinds of resources necessary to the development of library services ( ). As librarians met the established minimums, and as librarians in many jurisdictions began to chafe against externally established standards, standards gave way to locally determined missions, goals, and objectives, and measures of performance began to be designed. While much work on performance measures has been carried out over the past twenty-five years, these measures have not assisted libraries much in identifying measures of quality, nor have they helped in determining the kinds of resources needed by libraries today.

Evaluations of libraries and library services inevitably call for comparisons, and several approaches were identified earlier. Measures of effectiveness have remained elusive. There have been efforts to use patron satisfaction as a measure of effectiveness. There are problems here too, for patrons often do not know if they were served well. Satisfaction studies note that if the patron was treated politely and cordially, the patron reported a high level of satisfaction. There are no major studies that measure satisfaction some time after the library experience.

The demand for greater accountability is growing in most organizations and institutions. Thus efforts to evaluate and measure the performance of libraries will continue. It is obvious that libraries will continue to seek better ways to evaluate and measure their performance. It is obvious that criteria will be developed and used in the evaluations. It is imperative that the profession take responsibility for the development of those criteria and assist libraries in applying the criteria to evaluation and decision-making.


  1. Measuring Quality: International Guidelines for Performance Measurement in Academic Libraries. (IFLA Publications 76). München, K.G. Saur, 1996.

  2. De Prospo, Ernest R., et al. Performance Measures for Public Libraries. Chicago, American Library Association, 1973; Abbott, Christine. Performance Measurement in Library and Information Services. London, Aslib, 1994; Poll, Roswitha and P. te Boekhorst. Measuring Quality. München, Saur, 1996.

  3. Lynch, Beverly P. "Performance Measurement and Quality Management: The USA." Paper prepared for the International Conference on Performance Measurement and Quality Management in Public Libraries, Berlin, August 1997 (in press).

  4. IFLA Guidelines for Public Libraries. München, Saur, 1986, p. 10.

  5. Van House, Nancy A. User Needs Assessment and Evaluation for the UC Berkeley Electronic Environmental Library Project: A Preliminary Report. Berkeley, CA, 1995.

  6. Powell, Ronald R. The Relationship of Library User Studies to Performance Measures: a Review of the Literature. Champaign, Ill, University of Illinois Graduate School of Library and Information Science, 1988.

  7. Powell, p. 88.

  8. King Research Ltd. Keys to Success: Performance Indicators for Public Libraries; a Manual of Performance Measures and Indicators. London, HMSO, 1990, p. 2.

  9. Sumsion, John, "International Comparisons of Public Libraries," in Proceedings of the 2nd Northumbria International Conference on Performance Measurement in Libraries and Information Services. Newcastle upon Tyne, Department of Information and Library Management, University of Northumbria at Newcastle, 1998, pp. 369-375; Hanratty, Catherine and John Sumsion. International Comparison of Public Library Statistics. Loughborough, Loughborough University, Department of Information and Library Studies, Library and Information Statistics Unit, 1996.

  10. Withers, F.N. Standards for Library Services: an International Survey. Paris, Unesco Press, 1974.