Project Outcome

I believe that Project Outcome is one of the greatest library assessment tools to have been developed in the last 5 years. It is a free, easy-to-use method of gathering data on the effectiveness and impact of all of your library services. And you can compare your outcomes with your peers to see how well your library is doing and where there is room for improvement. I’m especially pleased that this project developed in the public library domain first before being modified to work for academic libraries.

This post, When to Use Project Outcome, can help you decide if and when to implement a Project Outcome survey for your services. I’m really interested in reading how others have implemented this; I’m especially interested in how you were talked into trying it!

Post-Beall’s List

No doubt you have heard that Professor Jeffrey Beall’s blog, http://scholarlyoa.com, with its associated lists of suspected publishers, journals and metrics, has been removed from public view at his request.  Several sites have published detailed timelines of this event.

The removal was sudden and unexplained, with speculation ranging from the capitalist (Cabell’s was going to take it over) to the nefarious (evil lawyers threatening legal action).  The explanation receiving the most acceptance centers on a lawsuit settlement.

Several people have commented on the risk of having a single person making judgments and applying the label of “predatory”, although his criteria are clear and documented.  Perhaps this is an opportunity for the librarian, researcher, and publishing communities to collaborate in developing a set of evaluation criteria that could be applied more openly.  There could be a variety of ways to handle the evaluations – who evaluates what, based on which criteria, and how frequently.  It needn’t be something that everyone agrees to, but if there were more voices involved, it might gain more acceptance.

Professionally, I admire Mr. Beall for his fearlessness and his tenacity in starting and continuing this effort.  I do not want to see it fall by the wayside.  However, I believe his use of a defamatory label (“predatory”) and his resistance to collaboration have made him and his work a lightning rod of controversy.  As a collection assessment librarian, I am always looking for tools and methods for comparing the quality of our collections.  I have always wanted to see Beall’s methods expanded more broadly, and perhaps now is the opportunity to do so.

Beyond correlations: Using epidemiology to establish library value

Source: Beyond Books: The Extended Academic Benefits of Library Use for First-Year College Students

Abstract

The purpose of this paper was to investigate whether there are relationships between first-year college students’ use of academic libraries and four academic outcomes: academic engagement, engagement in scholarly activities, academic skills development, and grade point average. The results of regression analyses suggest students’ use of books (collection loans, e-books, and interlibrary loans) and web-based services (database, journal, and library website logins) had the most positive and significant relationships with academic outcomes. Students’ use of reference services was positively associated with their academic engagement and academic skills, while enrollment in library courses was positively associated with grade point averages.

Based on this and a growing body of research, we in librarianship are establishing the correlation of library use with academic outcomes of various types.  We feel pretty confident that library use and good grades, quality academic engagement, and completion are linked.  I think we are ready to move on and investigate this relationship more deeply.

Consider framing this as a public health problem.  We notice a correlation between the incidence of cancer and smoking.  But is smoking causative or merely coincidental?  This article provides some statistical background to help ground our own attempts to solve this problem.

In epidemiology, the standard of establishing a connection of agent to disease is called the Bradford-Hill criteria, which I will paraphrase with questions.

How strong is the effect?

One in 9 smokers gets lung cancer, and the relative risk of getting lung cancer is 10–30 times greater for smokers than non-smokers.  That’s pretty darn strong.
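To make the arithmetic concrete, here is a minimal sketch of the relative risk calculation (the non-smoker incidence below is a hypothetical figure chosen only to illustrate the ratio, not study data):

```python
# Relative risk: incidence of disease among the exposed divided by
# incidence among the unexposed.
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    incidence_exposed = exposed_cases / exposed_total
    incidence_unexposed = unexposed_cases / unexposed_total
    return incidence_exposed / incidence_unexposed

# "One in 9 smokers" is roughly an 11% incidence; a hypothetical
# non-smoker incidence of 1 in 180 would yield a relative risk of 20,
# squarely in the 10-30x range quoted above.
rr = relative_risk(1, 9, 1, 180)
```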

What about our problem?  This article describes a study in which the authors used quite sophisticated statistical methods (multiple hierarchical regression modeling).  The overall effect on grade points, after adjusting for other factors (see below), was about 1.8%; that is, students’ use of library services explained 1.8% of the difference in GPA from those who did not use the services.  However, the effect of using library databases or attending library instruction was over 13%.  Strength is in the eye of the beholder – is this strong or modest?
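The “explained 1.8% of the difference” phrasing describes variance explained (R²).  As a toy illustration of the concept only – the data below are fabricated, and the study’s actual method was multiple hierarchical regression with many control variables:

```python
# R-squared for a single predictor: the square of Pearson's r.
# The data are fabricated to illustrate "percent of variance explained";
# they are not the study's data.
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov * cov / (var_x * var_y)

library_uses = [0, 1, 2, 3, 4, 5, 6, 7]          # hypothetical checkouts
gpas = [2.4, 2.9, 2.5, 3.1, 2.8, 3.3, 3.0, 3.5]  # hypothetical GPAs
share = r_squared(library_uses, gpas)  # fraction of GPA variance "explained"
```

An R² of 0.018 means 1.8% of the GPA variation tracks library use after controls; whether that counts as strong is, as noted, in the eye of the beholder.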

Is the relationship consistent & stable?

Are some groups of people who smoke more likely to get cancer than others who smoke?   Is the correlation of smoking & cancer more or less strong depending on age, race, sex, location of childhood, occupation, etc.?

Similarly, is the relationship the same for all students in all kinds of institutions in all countries?  The Library Analytics & Metrics Project (LAMP) (formerly called the Library Impact Data Project (LIDP)) conducted in the UK determined that this association did vary for different student groups (ethnicity, education-level, country of origin).

Is the cause specific to the outcome?

Do all people who smoke get cancer?  Do only people who smoke get cancer?  Not likely, so let’s get a little more realistic.  Does smoking make it more likely to get lung cancer?

Regarding using libraries and grades…while there may be many factors that result in higher or lower grades, could using libraries make it more likely to get good grades?  That is what the evidence suggests.

Why?

Why is smoking correlated with lung cancer?  What about smoking could possibly link it to lung cancer?  Over 50 carcinogenic agents have been identified in tobacco smoke.

Now, why would using the library be correlated with good grades?  This may seem obvious enough to us, but consider the viewpoint of the skeptic – what about going to the library, checking out books, or attending events leads to a good educational experience?

What else could explain the correlation?

Are people who smoke otherwise more likely to get cancer?  Do people who smoke also get exposed to known or unknown cancer causes?  Would they get cancer even if they didn’t smoke?

Now, applying the same reasoning to our current problem…why are these two aspects of the higher education experience linked?  Is it inherent?  Are there similarities between them that inherently link library use to good grades or engagement?  (Note that the correlation with engagement may be considered a tautology…engaging with libraries is itself academic engagement.)

Can more exposure cause greater change?

Researchers have established that not only is smoking correlated with lung cancer, but frequency, total years smoking, length of inhalation, and use of filters all affect cancer incidence.  Thus, greater exposure to smoking can increase the incidence of lung cancer.

What about the use of libraries on grades?  Could using more services or more books result in even higher grades?  Or, more simply, increase the likelihood of getting good grades?  This study did not really investigate this, but the LAMP study has – showing a limited increase in the likelihood of good grades with greater usage…to a point.

In what direction is the association?

It may seem obvious to us now that smoking precedes cancer, but it wasn’t clear when first examined.  People start smoking at different ages, and cancer can develop years before clinical diagnosis.  Studies had to clearly demonstrate that cancer indeed followed smoking in time.

Our problem is more difficult.  We’d like to think that using library resources and services leads to good grades and a valuable learning experience.  Maybe I’m a bit cynical, but I suspect that the relationship is actually the inverse – those who are likely to get good grades know the value of using the library.  And those who get good grades are most likely going to persist and succeed.  And those who attend events on campus are likely going to attend library events.  But that doesn’t mean that it couldn’t work the other way.  Which leads to our next question.

Can the association be used to effect change?

If people never smoked, would the incidence of cancer drop?  What if people stop smoking…would incidence drop?

It has been estimated that up to 20% of all cancer deaths worldwide could be prevented by the elimination of tobacco smoking.
(article on lung cancer epidemiology)

Could students who are failing get good grades after using library resources or services?  This, I think, is what librarians are most interested in determining and demonstrating.  Consider an experiment that actively encourages a random sample of at-risk students to use library services and/or resources.  If these students show greater improvement in grades than those who did not receive this intervention, then the case is stronger that use of libraries does affect grades.
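A minimal sketch of the comparison such an experiment would support, using a permutation test on the grade improvements of the two groups (all numbers are invented; a real study would need proper sampling, statistical power, and controls):

```python
import random

# One-sided permutation test: is the mean GPA improvement of the
# encouraged (intervention) group larger than the control group's
# by more than chance relabeling would produce?
def perm_test(treated, control, n_perm=10_000, seed=42):
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(treated) + list(control)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        t, c = pooled[:len(treated)], pooled[len(treated):]
        if sum(t) / len(t) - sum(c) / len(c) >= observed:
            hits += 1
    return observed, hits / n_perm  # mean difference, one-sided p-value

# Invented GPA changes after one semester:
encouraged = [0.4, 0.2, 0.5, 0.3, 0.6, 0.1, 0.4, 0.5]
control = [0.1, 0.0, 0.2, -0.1, 0.3, 0.1, 0.0, 0.2]
diff, p = perm_test(encouraged, control)
```

A small p-value here would strengthen the case that the intervention, not pre-existing differences, drove the improvement – which is exactly what randomization buys us over correlational studies.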

Conclusion

There is a growing body of evidence supporting the correlation between the use of library services and resources and various measures of academic outcomes.  Articles like this one are contributing more evidence, teasing out the knots of this riddle.  But there are more aspects that need to be explored before we can be more certain about the value that we contribute.

About Us | Prelinger Library

Source: About Us | Prelinger Library

It is amazing what I have learned this year, and the Prelinger Library is the most pleasing.  I was reviewing items for a publication award, one of which included an interview with the founders, Megan and Rick Prelinger.  They established a privately run, non-profit but publicly accessible library of eclectic material of all kinds.  Located in San Francisco, the Prelinger Library’s collection is more of an archive than a library, comprising mostly historical books, ephemera, maps, brochures and periodicals.

More surprising than the content of the collection is its organization.  The Prelingers wanted to create an experience browsing the shelves, with an emphasis on serendipity.  The books are organized by their own “unique geospatial taxonomy”, starting locally (both geographically and conceptually), transitioning outward to outer space and “abstractions of society and theory.”

Their yearbooks (see 2015, for example) describe their growth in collections and their reach into the community.  Note how they address “weeding”:

The Library’s collection is never static: On any given day new material is brought by thoughtful and inspired friends, while at the same time “weeding” decisions are made (deaccessioning). Some decisions are hard, but mostly we see deaccessioning as formative: like the pruning of a tree to promote the growth of fruit-bearing branches.

Those who attended ALA that year had the chance to visit this pearl (see the yearbook’s highlight of the visit).

In addition to their physical collection, the Prelingers have delved into digitization, collaborating with the Internet Archive (Archive.org) and Getty Images to make their work more accessible.  Browsing at random, I found some digitized gems.

Forgive me for not being aware of the Prelinger Library & their Archives before.  But now to bring this around to the focus of this blog – measuring the value of libraries.  The yearbook is not your typical annual report – there are few pieces of data, and the sections are broad:

  • Collection Development and Library Events Chronology
  • Artistic Use
  • Publication and Scholarly Use
  • Expanded Partnerships and Expanded Hours
  • Support Structures

It is clear they were telling their story and demonstrating their impact on the local, artistic and scholarly communities.  There are descriptions of visits by scholars and the work that came of their use of the archives and resources:

“My dissertation research examines the design processes of large-scale home builders in the mid-twentieth century as their industry transformed the character of the American domestic landscape. The Prelinger Library collections of housing ephemera, hard-to-find building industry journals, and period housing literature allowed me to resurrect a robust design discourse among builders largely absent from historical accounts.”

There are also published works that the Prelinger Library has been actively involved with, notably The New Farmer’s Almanac.  In the yearbook, they describe the work, but more importantly, how the Prelinger Library was integral to its revival and its content.

The only aspect of assessment that I think is missing from their yearbook is how the works that came from the use of their collections have themselves impacted the communities.  How has The New Farmer’s Almanac been used?  What effect has the Dona Ana Sphere Project had?  They tell the story of their connection to the communities, but I think the story ends prematurely.

So, an annual report need not (dare I say, should not) be a litany of numbers, even measures with comparators, but should instead tell a story that connects the library not only with those who are directly served, but those who are indirectly served.

The Relationship Between Student Demographics and Student Engagement with Online Library Instruction Modules | Thill | Evidence Based Library and Information Practice

via The Relationship Between Student Demographics and Student Engagement with Online Library Instruction Modules | Thill | Evidence Based Library and Information Practice

The authors asked students in a “research-based composition course” to “complete an online library instruction module embedded in the university’s course management system, either before in-person library instruction or in lieu of face-to-face library instruction.”  They then “measured levels of student engagement by recording the amount of time students spent on each page of the online module.”

They compared engagement with demographics and GPA outcomes and determined that older students and those with higher GPAs had higher levels of engagement.  Regarding the latter association, I’m wondering if we in librarianship are not chasing the dog’s tail – by comparing two outcomes that are so closely associated, do we necessarily learn anything about the intervention?  In our current educational system, those with high GPAs tend to be those who engage with the material covered.  Thus, those who engage with the material get better grades.

The authors do compare engagement with other demographic variables, and while these relationships could possibly shed some light on groups for which engagement is more or less limited, I wonder what, if anything, would actually be done with this information.  Their sample was largely either Caucasian/White or Hispanic (the conflation of which may be a knot too difficult to disentangle), so it lacked the diversity to enable fine-grained group analysis.  But if African-American or Asian groups differed notably in a statistically verifiable way, could they, would they, be able to find out why and change their approach accordingly?

But that is not my main concern here – it is the now all-too-common approach of measuring the association of two or more outcomes that are inherently highly correlated.  This includes correlating usage of any library resource with grades, persistence, or graduation.  Yes, usage of or engagement with the material will help students get better grades.  But those with better grades have already figured this out.  We need to figure out why those with lower grades do not engage with or use our resources.  Why do we keep missing each other?

Measures & factors of collection evaluation

As I have been developing a model of collection evaluation – that is, examining the features of specific subsets of our libraries’ collections, usually based on subjects – I have been collecting a mental list of aspects on which to assess.  Some of these are obvious and quite traditional: number of titles & volumes (by format and by subject), uses (circulation, e-resource use) and expenditures.  There are also measures of need – number of potential and actual patrons, majors & degree programs, etc.  And then there are gaps in need – ILL requests by program members, lower-than-expected use, etc.

However, it can be quite enlightening to step back and consider what others think is important.  This article is by a librarian relatively new to her field, Christina Wray from Indiana University-Bloomington, about her approach to learning collection development “on the job”.  Three of the “four main challenges” of her job involved “understanding the characteristics of…the current users,…the current collection, and…identifying new trends in the subject area.”  I thought it would be useful to use a practitioner’s approach to check against my list of measures.

Users

  • The department’s “fit” in the “institutional hierarchy”.
    • Currently provide: Description of the department and its hierarchy, relative size of faculty, students and graduates
    • Would be useful: Org chart
  • Degrees and certificates awarded.
    • Currently provide: List of degree programs, trends in degrees awarded, online programs
    • Would be useful: ? (what do you all think?)
  • Online programs
    • Currently provide: List of online programs, relative size of online participation
    • Would be useful: size of truly distant students (versus local students who take online courses)
  • Size of faculty and students
    • Currently provide: Absolute size of faculty, students and graduates, trends in enrollment.
    • Would be useful: ?
  • “Crossover” with other departments
    • Currently provide:  Nothing (hmmmm)
    • Would be useful: Anything…I like this idea because of the emphasis on interdisciplinarity in our newest collection development model.
  • Demographics of department
    • Currently provide: Distribution of faculty by level.
    • Would be useful: Not sure…age? Length of time at university?  Is race or ethnicity important for assessing collection need?  Maybe language…
  • Number of classes
    • Currently provide: Nothing
    • Would be useful: Relative number of classes; ratio of classes taught by differing levels of faculty?
  • Research interests & specialization
    • Currently provide:  List of key topics; a word cloud of research interests
    • Would be useful: Not sure…

While discussing users, Wray mentions defining the collection, notably by call number range.  While this seems a bit oddly placed in the article, her idea is at the heart of the collection evaluation model I’ve been developing.  Our subject-based collections all have fairly specific call number ranges assigned, but the collections are neither exhaustive nor, more importantly, mutually-exclusive.  These “profiles” are compared with the courses offered and the faculty research interests to ensure proper coverage.

Which leads to characteristics of the collection:

  • Defining “subcategories” of the collection
    • Currently provide: List of subjects by call number range
    • Would be useful: visualizations of this “map”
  • Inventory of books & journals
    • Currently provide: Well, given that our profiles are purposefully broad and our library is moderately-sized, such an inventory for most collections would be too large to be worthwhile.
    • Would be useful: Complete list of subject ranges linked to the catalog; journal subjects mapped to the profile; other e-resources mapped to the profile.
  • Use (absolute & relative) of collections
    • Currently provide: Circulation by call number range.
    • Would be useful: database & e-journal usage.
  • Historical coverage of books & journals
    • Currently provide: Distribution of book holdings by publication date.
    • Would be useful: Distribution of journal holdings by coverage dates.

What I found intriguing in Wray’s article was her suggestions on applying the data to answer some fundamental questions regarding the strengths & weaknesses of the collection based largely on the relative distributions of holdings and usage.  She also offers ways to “dig deeper”, including:

  • Comparing the holdings & usage with the faculty research interests & course offerings.
  • Understanding faculty satisfaction and perception of the collection.
  • Applying the data to modifying the collection.
  • Evaluating databases & other resources.

Finally, Wray advocates reviewing usage data once or twice annually.  We are currently developing a “collections dashboard” to provide some basic, actionable metrics, and usage data of specific resources could be one such metric.

While (or perhaps, because) this article is aimed at the subject-specialist librarian new to a field, it provides me, the Collection Assessment Librarian, with the issues and factors of concern to the liaisons.

 

(2016). Learning Collection Development and Management on the Job. Collection Management: Vol. 41, No. 2, pp. 107-114. doi: 10.1080/01462679.2016.1164646

Source: Learning Collection Development and Management on the Job – Collection Management – Volume 41, Issue 2

To Float or Not To Float | Collection Management

Most libraries that adopt floating collections expect circulation to rise because collections will be better distributed to meet patron demand. Yet how many have analyzed whether collections perform better after implementing floating than they did before materials were relocated? The Nashville Public Library undertook an experiment in floating with optimism. Did the results pay off? Here is how it all began.

Source: To Float or Not To Float | Collection Management

This story demonstrates the importance of regular evaluation of policies & procedures.  It is a cautionary tale about relying on conventional wisdom and on the absence of complaints from patrons.  It is also a story of how deep data diving (versus surface or superficial scanning) can and should be used to reconsider decisions.

The Dallas Public Library utilizes floating.  As a patron, I noticed that this did result in a more inconsistent collection, particularly the highly-used videos.  I’m not a reader of fiction books, but the availability of audiobooks was more sporadic.

Floating collections is a method used by a number of library systems with multiple branches or locations.  Rather than returning items requested from other locations, the library that receives the items essentially “keeps” them.  The theory is that if one patron at that location wants an item enough to request it, then other patrons of that location may want it.  This is not unlike the theory behind demand-driven acquisitions – if one patron uses an ebook, others may use it.
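As a rule, the floating mechanic is trivially simple: an item has no fixed home branch, and simply stays wherever it was last returned.  A minimal sketch of that rule (the branch names and the tiny API here are invented for illustration):

```python
# Minimal model of a floating collection: an item has no fixed home
# branch; it stays wherever it was last returned.
class FloatingCollection:
    def __init__(self):
        self.location = {}  # item id -> current branch

    def add(self, item, branch):
        self.location[item] = branch

    def current_branch(self, item):
        return self.location[item]

    def return_item(self, item, branch):
        self.location[item] = branch  # floats: its new "home" is here

pool = FloatingCollection()
pool.add("bestseller-001", "Downtown")
# A patron at another branch requests the item and returns it there:
pool.return_item("bestseller-001", "Green Hills")
# The item now stays at Green Hills until someone else moves it.
```

The problems the article describes fall out of this rule: nothing in it ever pushes items back, so material pools at whichever branches generate the most requests.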

Initial results were promising – increases in overall circulation were seen and these gross measures were attributed largely to the floating collection.  But the author states that not taking “other factors” into consideration “could lead to wrong conclusions.”

Indeed, the closer look that he took presented a very different picture.  For example, the circulation of fiction books decreased after relocation.  The only locations that saw an increase were in areas of the “highest income and education levels and had customers most likely to place holds.”  Circulation of popular authors and titles also declined, as well as those in large-type print.

The problem was due to pooling of these titles at locations whose customers were more likely to request transfers.  The staff at these libraries were constantly shifting the collections to make space for these titles.  So they implemented more aggressive weeding, basing selections for removal primarily on the number of copies.

Also contributing to the problem were differences in the accessibility of the locations.  Those located “along travel routes to and from major job and commercial centers…often became overwhelmed by items their customers did not request and did not meet their needs” (emphasis added).

[Diagram: A SEA CHANGE – how materials washed up unevenly at certain NPL branches]

The author does not discount floating outright, but states that it is “not for everyone,” and he makes several recommendations for librarians to be smarter about their floating collections.  These ideas include limiting circulation periods and renewal options for high-demand titles, increasing the frequency of notifying patrons when their items are in (thus reducing the amount of time that items sit on the shelf waiting to be picked up), and “relocating underperforming (sic) items that were needed at other branches rather than unnecessarily moving popular (holds driven) material.”

The point is, there is value in looking at what our patrons do (what they request) to shift collections, but librarians should not abandon their responsibilities entirely.  This is true of demand-driven acquisitions.  Opening a collection to any and all available titles could result in a collection that includes material outside the scope of the needs of the majority of patrons.  Managing a DDA collection takes a lot of work to ensure the selections are within the scope of the library’s responsibilities and at the appropriate levels.  Enabling patrons to make specific title-by-title selections can improve the collection.  Initial examination of our DDA collections has shown that post-selection usage was greater for these titles than for titles selected by librarians without direct requests from patrons.

I’m excited to read about such efforts to base decisions on careful analysis of evidence rather than cursory looks at selected data.

Use of the Nation’s Library

The Library of Congress released some basic statistics regarding holdings & usage in 2015, nicely summarized by Gary Price for LJ.  The size of the LoC is well-known – after all, it includes the US Copyright Office.  It’s the use, though, that impresses me most (of course)…over 1 million reference requests, more than half from members of Congress (its primary customer), delivering more than 20K items to them.  Circulating nearly 22 million Braille items to over 860K users.  And playing host to more than 86 million Web site visits.

Of course, size matters.  Over a million and a half physical items added each year to a continuously growing collection of over 162M.  They employ over 3,000 staff and work on a budget of $630M.

While these statistics are quite overwhelming, especially compared to even large universities like Harvard (18M items, $160M in 2014), what is missing here are statements of impact.  How were the 20K items delivered to members of Congress used in developing legislation, informing their decisions, helping them with their constituents’ concerns?  In what ways were the 86M Web site visits used?  Who benefited most from the usage?  The Braille circulation is a good indicator, but more information is needed.  With the change in leadership, and the shift in political winds, the LoC needs stories.  Stories of how lives were enhanced or changed due to access to resources unavailable elsewhere.  Stories of legislation that was written (or fought) with information or history dug up by librarians.  Stories of rare recordings brought to life again through preservation efforts.

Interestingly, the Web site offers a “presentation” about the Library (it doesn’t work for me, at least…does it for you?) which describes all of the collections and services that make the LoC “more than a library”.  But like so many similar “about the library” sites, they leave it up to the user (or reader) to imagine how the library can meet their needs.  Their annual reports provide more details or specific examples of  services and materials, but even these lack any sense of impact or value.  It is a long account of the inputs and outputs…items received, funding, questions answered, items circulated, etc.  Even highlighted events (meetings, tours, exhibitions, etc.) do not necessarily convey the value of the Library.  It may be implied that the Civil War in America exhibit touched the 600 visitors in ways that changed their perspective or attitudes, or that the 593K reference questions answered by the Congressional Research Service were for legislation, but where is the evidence?  Where are the stories?

Admittedly, stories alone are insufficient as evidence.  A few non-randomly-selected examples of value do not support continuous funding.  But they can bring the data to life, provide that intangible quality to rather mind-numbing numbers.  And the stories need not be one-offs, single events or individuals, but quantitative summaries of impact…the percentage of reference information cited in proposed legislation, the changes in attitudes of visitors to exhibitions, the use of digital archives in schools or research.  Of course, finding and gathering the data or information, and writing the stories, takes time and work.  Which costs money.  Some may say money that would be better spent on the services or materials for which the Library is responsible.  Perhaps…but the information gathered can also serve to guide the Library in improving and advertising collections & services, shifting priorities and ensuring R’s First Law.

 

Assessing the evolving library collections

Lorcan Dempsey from OCLC® Research recently revisited their 2014 publication on library collections (Collection Directions: The Evolution of Library Collections and Collecting), emphasizing the “facilitated collection” aspect of their report.  This refers to the idea that the networked environment reduces the need for libraries to maintain external resources (the “outside-in” style of physical collections).  Instead, facilitated collections are “a coordinated mix of local, external and collaborative services are assembled around user needs,” including selected (perhaps, indeed, “curated”) lists of external or freely-available resources, the shift from “just-in-case” to “just-in-time” selection, and the greater reliance and participation in shared collections.

The report’s take on the continuum of “shared” collections is interesting.  This ranges from the “borrowed collection” (what we typically consider resource sharing networks with their expedited deliveries), to “shared print” (which we typically associate with collaborative collections), to “shared digital” (think, HathiTrust), and onto the “evolving scholarly record”, with its increase in the sharing of not only the “final outcomes” of research, but also the intermediate products (e.g. data sets, working papers, preliminary reports, etc.).

[Image: the full collection spectrum]

So, as I always ask when considering models of library collection development, how would such collections be assessed?  Rather than focusing on what is owned or even “available”, when assessing any service, it is best to start at the end goal of the service – in this case, to “meet research and learning needs in best way”.  Indeed, as I realized when I prepared for my interview for my current position (Collection Assessment Librarian), collection development is as much a service to library users as reference and instruction.  And the service is focused on the users, not the items in the collection.

Assessing Collections as a Service

Dempsey refers to this evolution to “collections as a service” as the result of the shift from the “‘owned’ collection” to the “‘facilitated’ collection”, which itself impacts the “organization, stewardship and discovery” of the collections.  I found particularly intriguing the reference to a ‘collection strategist’ job advertisement, which he noted reflects a shift of emphasis to the “allocation of resources and attention”.  While this may have always been inferred as a responsibility of collection development librarians, the overt references suggest an increase in focus.  Collection assessment provides the information needed to make these “strategic” decisions.  Thus, the information that is gathered needs to be tied, directly or indirectly, to the end goals of the collection.

The end goal mentioned in this report is to “meet research and learning needs in best way”.  The goals of the individual libraries may be worded differently, but often incorporate similar ideals.  For the purposes of this post, I’d like to break down this goal to determine how best to assess a collection.

“Meet…needs”

What does “meet…needs” mean?  What does it entail?  How could it be assessed?  Key terms I consider include “available”, “accessible”, “discoverable”, “findable”, “usable”, and “transformable”.  Most of these remind me of the Four User Tasks associated with the Functional Requirements for Bibliographic Records (FRBR):  find, identify, select, and acquire.  As you can see, though, the word “transformable” goes beyond that fourth task of acquiring, suggesting that researchers and students need to incorporate information resources into their activities and transform them into new knowledge.  One simple example is the ability to import citations or references into papers and other outputs.  More complex examples come from the rising field of digital humanities, where whole texts are available for analysis and can be transformed into network diagrams.  So, to assess a collection against this aspect of the goal, here are some key measures (not comprehensive):

  • Accessibility – Percent of the collection that meets ADA accessibility requirements for all users
  • Availability – Percent of the collection that is available online versus via local use only
  • Discoverability – Rate at which the collection is added to the main library search systems per year
  • Findability – Rate at which owned materials are mistakenly requested through interlibrary loan
  • Usability – Percent of resources that meet usability benchmarks (this should include physical resources, not only Web-based ones)
  • Transformability – Percent of resources that can be easily incorporated into differing formats and communication modes
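Most of these measures reduce to simple ratios once the underlying counts are gathered.  As a rough illustration, here is a sketch using entirely hypothetical counts – every number and field name below is invented for the example, not drawn from any real library:

```python
# Illustrative sketch with hypothetical counts: a few of the delivery-side
# measures above, expressed as simple percentages.

collection = {
    "total_items": 250_000,
    "ada_accessible": 180_000,    # items meeting ADA accessibility requirements
    "online": 140_000,            # available online (vs. local use only)
}
ill_requests_for_owned = 340      # owned items mistakenly requested via ILL
ill_requests_total = 5_100

def pct(part, whole):
    """Express part/whole as a percentage, rounded to one decimal place."""
    return round(100 * part / whole, 1)

accessibility = pct(collection["ada_accessible"], collection["total_items"])
availability = pct(collection["online"], collection["total_items"])
findability_miss_rate = pct(ill_requests_for_owned, ill_requests_total)

print(accessibility, availability, findability_miss_rate)  # 72.0 56.0 6.7
```

The hard part in practice is not the arithmetic but agreeing on the denominators – for example, whether “collection” includes licensed content the library does not own.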

“research and learning needs”

While the above measures describe the ability of a library to meet needs in terms of delivery and accessibility, they do not measure how well the collection meets needs conceptually.  Formats, subjects, perspectives, sources, depth, scope, and breadth are all reflected in this aspect of the goal.  Assessing it requires an understanding of what, conceptually, the needs are, which in turn requires an understanding of the work being conducted at the institution, as well as the institution’s goals and vision.  An emphasis on curriculum support, particularly in the traditional classroom-lecture sense, would require different sources, formats, and depth than an emphasis on original research.  There will most likely be differences in this emphasis between subject disciplines (e.g., a history department with a PhD program versus a Spanish department that offers only conversational language courses).  The same is true of scope – that history department may focus on American history (or even the Southern United States), while the literature department supports a world literature program.  Formats are notoriously tribal, with some disciplines continuing to rely on more traditional formats (print, even microform) and others having nearly completely shifted to digital.

“in best way”

OK, here is the most subjective part of assessment – after all, how do you define “best way”?  Best for whom?  Perspectives include those of the students, the teaching faculty, the researchers, the librarians, the library administration, the campus administration, and the community.  Best in what ways?  Ease of use (see the notes on delivery above), subject coverage (see the notes on needs above), cost, and efficiency are a few of the key dimensions.  These priorities should be understood before attempting a comprehensive assessment – a financially strapped institution may place greater emphasis on efficiency or even raw costs.  And even within each of these factors, how would you determine which way is best?  Cost-per-use is a common measure of efficiency, but which is better – cost-per-session?  Cost-per-search?  Cost per record viewed?  We use the measure that most closely matches how users actually work with the resource.  What about financial sustainability?  A resource that consumes 50% of the materials budget and carries a 7% annual inflation rate may be highly efficient (cost-per-use under $10), but it is unsustainable on a flat budget.  Assessing collections against this part of the overall goal requires consensus on the perspectives and priorities of the competing definitions of “best”.
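The tension between efficiency and sustainability can be made concrete with a little arithmetic.  The sketch below uses hypothetical figures – the 6,000 uses per year, the $100,000 budget, and the function names are all assumptions for the example:

```python
# Illustrative sketch (hypothetical numbers): a resource can look efficient
# by cost-per-use while still being unsustainable on a flat budget.

def cost_per(cost, events):
    """Cost per usage event (use, session, search, or record viewed)."""
    return cost / events

def projected_share(cost, budget, inflation, years):
    """Share of a flat budget consumed after `years` of compounding inflation."""
    return cost * (1 + inflation) ** years / budget

# A resource at 50% of a $100,000 materials budget, 7% annual inflation.
cost, budget = 50_000, 100_000

# Efficient by cost-per-use: under $10 per use at 6,000 uses per year.
print(round(cost_per(cost, 6_000), 2))  # 8.33

# But with compounding 7% inflation against a flat budget, the resource
# consumes the entire budget in about 11 years.
for years in (5, 10, 11):
    print(years, round(projected_share(cost, budget, 0.07, years), 2))
```

The same comparison works for cost-per-session or cost-per-search – only the denominator changes – which is why the choice of usage event matters so much.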

Final thoughts

Assessing or evaluating library collections that are constantly evolving requires concepts that can be applied more broadly than traditional methods allow.  For example, the number of microforms held is a much less relevant measure today than it was thirty (or even fifteen) years ago.  Conversely, the rate at which users successfully find the resources they need (perhaps measured at the time of a visit to a physical or virtual library) could be applied equally well to an “owned” collection or a “facilitated” one.  Finally, measures should be developed that put the results into perspective or context.  This could be relative to benchmarks set by the library or the field, comparisons with peers, or the population of users.
