Hana Metzger's e-Portfolio
Competency N: Evaluation of Services
Section 1
Evaluate programs and services using measurable criteria.
Section 1A: Competency Description and Scope
Information professionals must evaluate programs and services in order to measure their success and make improvements where needed. Here I will examine how to establish evaluation criteria, how to report on results, and how to use the results of an evaluation.
Establishing Criteria and Methodology
Evaluation criteria offer a way of measuring the success of library programs and services. The criteria for evaluating library programs and services should be established before they occur. Matthews (2018) identifies six categories of questions to help librarians form evaluation criteria: how, who, what, when, where, and why (pp. 257–258).
How questions cover such terrain as the number of attendees at a program ("how many"), the effectiveness of a program ("how well"), and customer satisfaction (Matthews, 2018, p. 258). These questions need to be answered after a program or service concludes, by library staff, users, or both. For example, both librarians and patrons may have a sense of how valuable a program was, and it is useful to have both perspectives.
The why and who questions should be answered before any program or service begins. Determining why an evaluation is being conducted helps librarians decide what needs to be evaluated (Matthews, 2018, p. 258). For example, an evaluation may be designed to determine why certain story times are more popular than others. Librarians must also know who is performing an evaluation and who the evaluation is being performed for (Matthews, 2018, pp. 258–259). Selecting and training staff to gather data in advance ensures that the data will be collected accurately.
The question of what refers to the evaluation methodology, which may be quantitative or qualitative (Matthews, 2018, p. 259). Quantitative evaluation methods include counting, measuring, surveys, and statistical analysis, whereas qualitative evaluation methods include observations, interviews, focus groups, and concept mapping (Matthews, 2018, p. 259).
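To make the quantitative side concrete, here is a minimal sketch of how survey responses might be tallied and summarized. The data and five-point satisfaction scale are hypothetical examples of mine, not drawn from Matthews:

```python
from collections import Counter
from statistics import mean

# Hypothetical five-point satisfaction scores from a program survey
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

print(f"Responses collected: {len(responses)}")
print(f"Average satisfaction: {mean(responses):.2f} out of 5")
print("Score distribution:", dict(sorted(Counter(responses).items())))
```

Even a simple summary like this turns raw survey answers into numbers that can be compared across programs or over time.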
Finally, questions of when and where indicate the time and place of evaluation (Matthews, 2018, p. 260). In the case of a program, these may be determined by the program's own time and location. To evaluate an ongoing service, however, such as reference, a librarian will need to choose the most effective time and place for evaluation. Options include interviewing patrons as they enter or exit the library, online surveys, and counting the number of people who use a particular service (Matthews, 2018, p. 260).
Sometimes, it makes the most sense to use already-established criteria to evaluate a program or service. The Reference and User Services Association (RUSA), a division of the American Library Association, has published a standardized metric for evaluating reference services in the guide Measuring and Assessing Reference Services and Resources. In its guidelines, RUSA (1995) divides the subject into two categories, each with smaller subcategories: reference transactions, which are measured in volume, cost, benefits, and quality; and reference service and program effectiveness, which is measured through cost/benefit analyses and quality analyses of patron needs and satisfaction.
Section 1B: Importance to the Profession
Librarians must know how to evaluate programs and act on the data that results from evaluation. Evaluating programs and services allows libraries to improve. Goldman (2021) states that institutions perform evaluations "to create institutional change, to demonstrate the importance of specific programs or initiatives to funders and sometimes to simply demonstrate their own impact to the outside world" (p. 2). Evaluation can support library programs by helping librarians to better understand their patrons and allocate resources appropriately (Goldman, 2021, p. 2).
Moreover, there can be negative consequences to ignoring or skipping self-evaluation. Matthews (2018) writes that libraries that do not rely on data to improve services run the risk of marginalization (p. 262). Without evaluating and improving its programs and services, a library may become stuck in tradition while other organizations innovate. In time, the community turns to those organizations for better services, and the library becomes a marginal institution in people's lives.
Evaluation reports also help stakeholders and grant funders to understand the success of programs and services (Matthews, 2018, p. 262). Goldman (2021) states that the hallmarks of a good evaluation report are telling a story, organizing results by finding (not by survey question), and being honest about what the data can and cannot tell you (p. 15). Sharing evaluation results matters because it can convince stakeholders to increase funding or provide further support for programs and services, and a well-written report can convince community members that the library is relevant (Goldman, 2021, p. 2). Evaluations can also help libraries find cost-saving methods and justify spending (Matthews, 2018, p. 257).
Section 2
Here I will provide three evidentiary items for Competency N.
Section 2A: Preparation
To prepare for this competency, I took classes that taught me to collect and analyze data. In INFO 202: Information Retrieval System Design with Professor Alison Johnson, I learned the principles of user-centered design and how to create measurable criteria for assessment purposes. This class was especially focused on web-based services, but the principles that it taught me can be applied to in-person services as well. In INFO 204: Information Professions with Dr. Debra Hicks, I learned how to perform a SWOT analysis to understand a library's strengths, weaknesses, opportunities, and threats. This is a more general analysis than one of a single service or program, but it does require finding measurable criteria to analyze.
In INFO 210: Reference Services with Dr. Jose Aguiñaga, I learned about standardized criteria for assessing reference services, including the RUSA guidelines that I discussed above. I then had a chance to analyze myself using RUSA standards and to receive a (non-RUSA) supervisor evaluation during the internship I had with the San Francisco Public Library Jail and Reentry Services.
Another class, INFO 260A with Professor Peck, taught me how to design programs with evaluations built into them. In this class, I attended two events, a public library storytime and a magic show, and then evaluated them using established criteria. I also created my own programs with criteria for evaluation.
Section 2B: Evidence
Evidentiary Object 1: INFO 202 Website Redesign
This is a group project that I worked on with four other students for INFO 202: Information Retrieval System Design with Professor Alison Johnson. For this assignment, our group evaluated the website of the San José State University School of Information and proposed a substantive redesign focused on improving accessibility and user experience. Our group worked on the project together; my particular contributions included creating criteria for the evaluation process, participating in the survey, and writing the executive report and evaluation sections.
This assignment demonstrates my ability to create measurable criteria for a library service (in this case, a website). Our group administered a user survey to determine which iSchool webpages were most often used by students, and we then ranked the pages from most used to least used. Using this criterion and evaluation methodology shows that I am able to ask how, who, what, when, where, and why in order to form relevant criteria. For example, we determined that we would approach the website evaluation as students at the iSchool rather than as any other stakeholder (such as a faculty member or a prospective student). Answering the "who" question in this case led us to focus on evaluating how user-friendly the website was for students, and students alone. Focusing the data in this way demonstrates my ability to set a reasonable scope for an evaluation. Similarly, determining that the "what" would be a survey demonstrates my understanding of appropriate methodology: our group needed to know how the website worked for current students, so we chose an evaluation method that would help us find answers.
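As a purely illustrative sketch (the page names and counts below are hypothetical, not our actual survey data), the tallying and ranking step might look like this:

```python
from collections import Counter

# Hypothetical survey answers: each respondent named the iSchool
# page they use most often
answers = [
    "Course Schedule", "Advising", "Course Schedule", "Canvas Login",
    "Course Schedule", "Advising", "Financial Aid", "Canvas Login",
]

# Tally responses and rank pages from most used to least used
usage = Counter(answers)
for rank, (page, count) in enumerate(usage.most_common(), start=1):
    print(f"{rank}. {page}: {count} responses")
```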
Evidentiary Object 2: INFO 204 Evaluation of a Strategic Plan
This is an assignment from INFO 204 in which I had to find a strategic plan for a library and analyze it. I chose to analyze the strategic plan for the San Francisco Public Library.
Here I demonstrate my ability to create criteria from the professional literature and then use those criteria to evaluate a library service. I created a chart with five rows, one for each standard component of a strategic plan; a column for a summarized verbal assessment of each component; and a column for a numerical rating of each component. I assigned ratings from 0 to 3 based on the standards for strategic plans outlined by Steven Buchanan and Fionnuala Cousins (2012) and by Lisa Rosenblum (2018). This assignment also demonstrates my ability to mix quantitative and qualitative evaluation methods, as my chart combines verbal assessments with numerical ratings.
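The structure of that chart can be sketched in code. The component names, scores, and notes below are placeholders rather than my actual assessment of the San Francisco Public Library plan:

```python
# Illustrative reconstruction of the rating chart's structure;
# all scores and notes here are hypothetical.
RATING_SCALE = {0: "absent", 1: "weak", 2: "adequate", 3: "strong"}

plan_components = [
    ("Mission statement", 3, "Clear and community-focused"),
    ("Vision statement", 2, "Present but generic"),
    ("Values", 2, "Listed without explanation"),
    ("Goals", 3, "Specific and prioritized"),
    ("Objectives", 1, "Few are measurable"),
]

# Combine the qualitative note with the quantitative rating
for name, score, note in plan_components:
    print(f"{name}: {score}/3 ({RATING_SCALE[score]}) - {note}")

total = sum(score for _, score, _ in plan_components)
print(f"Overall: {total}/{3 * len(plan_components)}")
```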
Evidentiary Object 3: INFO 260A Evaluation of the Bruce Amato Magic Show
In INFO 260A: Children's Programming with Professor Penny Peck, I watched and evaluated a virtual magic show for children. This paper shows my ability to evaluate a library program using measurable criteria. For example, I gathered data about the demographics of Williamson County, Tennessee, where the magic show was held, and then compared these demographics to a visual assessment of magic show attendees. A more accurate measurement would have been to ask attendees to self-report demographic information, but, since I was a virtual observer, a visual assessment was the best I could do. Understanding what data can be gathered, as well as its limitations and possible sources of error, is also part of the evaluation process.
Other criteria that I assessed were the attention span of the audience (a somewhat slippery measure of the program's popularity) and the number of attendees. These are important criteria when evaluating a library program, as popular, well-attended programs are an indication that the library is doing something right. I also tracked the amount of time that the magician spent on educational content and found that most of his magical illusions contained an educational fact or learning opportunity. This criterion is especially important for children's programs, since education and lifelong learning is one of the core values of librarianship (American Library Association, 2019).
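A small sketch shows how such a timing count can be turned into a measurable criterion. The segments and durations below are hypothetical, not my actual observations of the show:

```python
# Hypothetical timing log: (segment, minutes, contains educational content?)
segments = [
    ("Card trick with counting lesson", 6, True),
    ("Comedy interlude", 3, False),
    ("Disappearing-coin trick with history fact", 5, True),
    ("Audience Q&A about practicing magic", 4, True),
]

# Share of program time devoted to educational content
educational = sum(m for _, m, edu in segments if edu)
total = sum(m for _, m, _ in segments)
print(f"Educational content: {educational} of {total} minutes "
      f"({educational / total:.0%})")
```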
Section 3: Conclusion
Understanding how to evaluate library programs and services will be useful to me in my career as a librarian. Whether I need to evaluate the success of a book talk, assess satisfaction with reference services, or understand user feelings about e-books, I believe that I will be able to find the right criteria and methodology to yield useful data. Going forward, I hope to learn more about statistical analysis through online courses and reading. I also discovered quite a few resources shared by other libraries online, including program surveys, and am sure that these will prove useful in the future.
References
American Library Association. (2019). Core values of librarianship. https://www.ala.org/advocacy/advocacy/intfreedom/corevalues
Buchanan, S., & Cousins, F. (2012). Evaluating the strategic plans of public libraries: An inspection-based approach. Library and Information Science Research, 34(2), 125–130. doi: 10.1016/j.lisr.2011.1
Goldman, K. H. (2021). Evaluation guide for public libraries. Urban Libraries Council. https://www.urbanlibraries.org/files/KHG-Evaluation-Guide.pdf
Matthews, J. R. (2018). Evaluation: An introduction to a crucial skill. In K. Haycock & M.-J. Romaniuk (Eds.), The portable MLIS: Insights from the experts (2nd ed., pp. 255–264). Libraries Unlimited.
Rosenblum, L. (2018). Strategic planning. In S. Hirsch (Ed.), Information services today: An introduction (2nd ed., pp. 231–245). Rowman & Littlefield.
Reference and User Services Association. (1995). Measuring and assessing reference services and resources: A guide. American Library Association. https://www.ala.org/rusa/sections/rss/rsssection/rsscomm/evaluationofref/measrefguide