Libraries are for Use

Demonstrating the value of librarianship

Assessing the evolving library collections

Lorcan Dempsey of OCLC® Research recently revisited their 2014 publication on library collections (Collection Directions: The Evolution of Library Collections and Collecting), emphasizing the “facilitated collection” aspect of the report. This refers to the idea that the networked environment reduces the need for libraries to acquire and maintain external resources locally (the “outside-in” model of physical collections). Instead, in a facilitated collection, “a coordinated mix of local, external and collaborative services is assembled around user needs,” including selected (perhaps, indeed, “curated”) lists of external or freely available resources, a shift from “just-in-case” to “just-in-time” selection, and greater reliance on, and participation in, shared collections.

The report’s take on the continuum of “shared” collections is interesting. It ranges from the “borrowed collection” (what we typically consider resource sharing networks with their expedited deliveries), to “shared print” (which we typically associate with collaborative collections), to “shared digital” (think HathiTrust), and on to the “evolving scholarly record,” with its increase in the sharing of not only the “final outcomes” of research but also the intermediate products (e.g., data sets, working papers, preliminary reports).


So, as I always ask when considering models of library collection development, how would such collections be assessed? Rather than focusing on what is owned or even “available,” when assessing any service it is best to start with the end goal of the service – in this case, to “meet research and learning needs in best way.” Indeed, as I realized when preparing for the interview for my current position (Collection Assessment Librarian), collection development is as much a service to library users as reference and instruction. And the service is focused on the users, not on the items in the collection.

Assessing Collections as a Service

Dempsey describes this evolution to “collections as a service” as the result of the shift from the “‘owned’ collection” to the “‘facilitated’ collection,” which in turn impacts the “organization, stewardship and discovery” of the collections. I found particularly intriguing the reference to a ‘collection strategist’ job advertisement, which he noted reflects a shift of emphasis to the “allocation of resources and attention.” While this may always have been an implied responsibility of collection development librarians, the overt reference suggests an increased focus. Collection assessment provides the information needed to make these “strategic” decisions. Thus, the information that is gathered needs to be tied, directly or indirectly, to the end goals of the collection.

The end goal mentioned in this report is to “meet research and learning needs in best way”.  The goals of the individual libraries may be worded differently, but often incorporate similar ideals.  For the purposes of this post, I’d like to break down this goal to determine how best to assess a collection.


What does “meet…needs” mean? What does it entail? How could it be assessed? Key terms I consider include “available,” “accessible,” “discoverable,” “findable,” “usable,” and “transformable.” Most of these remind me of the four user tasks associated with the Functional Requirements for Bibliographic Records (FRBR): find, identify, select, and obtain. As you can see, though, the word “transformable” goes beyond that fourth task of obtaining, suggesting that researchers and students need to incorporate information resources into their activities and transform them into new knowledge. One simple example is the ability to import citations or references into papers and other outputs. More complex examples are found in the rising field of digital humanities, where whole texts are available for analysis and can be transformed into, say, network diagrams. So, to assess a collection against this aspect of the goal, here are some key measures (not a comprehensive list):

  • Accessibility – Percent of collection that meets ADA requirements for accessibility for all users
  • Availability – Percent of collection that is available online versus via local use only
  • Discoverability – Rate of collection added to the main library search systems per year
  • Findability – Rate at which locally owned materials are mistakenly requested through interlibrary loan
  • Usability – Percent of resources that meet usability benchmarks (this should include physical resources, not only Web-based)
  • Transformability – Percent of resources that can be easily incorporated into differing formats and communication modes
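As a rough illustration (not from the report), several of these measures could be computed from simple collection and transaction counts. All field names and figures below are invented for the sake of the sketch:

```python
# Hypothetical sketch: computing a few of the collection measures above
# from simple record counts. Every number here is invented for illustration.

def percent(part, whole):
    """Share of a whole meeting a criterion, as a percentage."""
    return 100.0 * part / whole if whole else 0.0

collection_size = 250_000      # total titles
ada_compliant = 180_000        # titles meeting accessibility requirements
online_available = 140_000     # titles usable online, not only locally
added_to_discovery = 12_000    # titles added to the main search system this year
ill_requests = 3_000           # interlibrary loan requests this year
ill_for_owned_items = 150      # ILL requests for items the library already owns

measures = {
    "accessibility_pct": percent(ada_compliant, collection_size),
    "availability_pct": percent(online_available, collection_size),
    "discoverability_added_per_year": added_to_discovery,
    "findability_error_pct": percent(ill_for_owned_items, ill_requests),
}

for name, value in measures.items():
    print(f"{name}: {value:,.1f}")
```

The findability measure is deliberately an error rate: requests routed through interlibrary loan for items the library already owns suggest users could not find them locally.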

“research and learning needs”

While the above measures describe the ability of a library to meet needs in terms of delivery and accessibility, they do not measure how well the collection meets those needs conceptually. Formats, subjects, perspectives, sources, depth, scope, and breadth are all reflected in this aspect of the goal. This requires an understanding of what, conceptually, the needs are, which in turn requires an understanding of the work being conducted at the institution, as well as the institution’s goals and vision. An emphasis on curriculum support, particularly in the traditional classroom-lecture sense, would require different sources, formats, and depth than an emphasis on original research. There will most likely be differences in this emphasis between disciplines (e.g., a history department with a PhD program versus a Spanish department that offers only conversational language courses). The same is true of scope – that history department may focus on American history (or even the Southern United States), while the literature department supports a world literature program. Formats are notoriously tribal, with some disciplines continuing to rely on more traditional formats (print, even microform) and others having nearly completely transitioned to digital.

“in best way”

OK, here is the most subjective part of assessment – after all, how can you define “best way”? Best for whom? The perspectives include those of the students, the teaching faculty, the researchers, the librarians, the library administration, the campus administration, and the community. Best in what ways? Ease of use (see the notes on delivery above), subject fit (see the notes on needs above), financial sustainability, and efficiency are a few of the key dimensions I can think of. The priorities among these should be understood before attempting a comprehensive assessment – a financially strapped institution may require greater emphasis on efficiency or even raw costs. And even within each of these factors, how would you determine which way is best? Cost-per-use is a common measure of efficiency, but which is better – cost-per-session? Cost-per-search? Cost per record viewed? We use the measure that most closely matches how users actually use the resource. What about financial sustainability? A resource that consumes 50% of the materials budget with an inflation rate of 7% may be highly efficient (cost-per-use less than $10), but it is unsustainable on a flat budget. Assessing collections against this part of the overall goal requires consensus on the perspectives and priorities among the competing aspects of defining “best.”
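The sustainability point can be made concrete with a quick projection. The 50% budget share and 7% inflation rate come from the example above; the dollar budget and the annual usage count are invented so the arithmetic has something to work on:

```python
# Sketch of the sustainability problem: a resource with a low cost-per-use
# can still overwhelm a flat budget. The $500,000 budget and 30,000 annual
# uses are hypothetical; the 50% share and 7% inflation come from the text.

budget = 500_000.0     # flat materials budget (hypothetical)
cost = 0.5 * budget    # resource starts at 50% of the budget
annual_uses = 30_000   # hypothetical usage

# Under $10 per use, so the resource looks "efficient" today.
print(f"Starting cost-per-use: ${cost / annual_uses:.2f}")

years = 0
while cost <= budget:  # how long until this one resource eats the whole budget?
    cost *= 1.07       # 7% annual price inflation
    years += 1

print(f"After {years} years, the resource alone exceeds the entire flat budget.")
```

With these figures the crossover comes in just over a decade, which is well within the planning horizon of most collection budgets.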

Final thoughts

Assessing or evaluating library collections that are constantly evolving requires concepts that can be applied more broadly than traditional methods. Thus, the number of microforms held is much less relevant today than it was thirty (or even fifteen) years ago. Conversely, the success rate of users in finding the resources needed for their purposes (perhaps measured at the time of a visit to the library, physical or virtual) could be applied equally to an “owned” collection or a “facilitated” one. Finally, measures should be developed that put the raw numbers into perspective or context. This could be relative to benchmarks set by the library or the field, comparisons with peers, or normalization against the population of users.
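Putting a raw measure into context, as the last sentence suggests, often amounts to normalizing it against a population or a benchmark. A minimal sketch, with all figures invented:

```python
# Hypothetical sketch of contextualizing a raw count: the same circulation
# figure reads differently against the user population and a peer benchmark.
# All numbers are invented.

annual_circulation = 90_000
user_population = 30_000        # students, faculty, and staff served
peer_median_per_capita = 2.5    # circulation per capita at peer libraries

per_capita = annual_circulation / user_population
vs_peers = per_capita / peer_median_per_capita

print(f"Circulation per capita: {per_capita:.1f}")
print(f"Relative to peer median: {vs_peers:.0%}")
```

The raw count of 90,000 says little on its own; 3.0 uses per capita, running 20% above the peer median, is a statement a library can act on.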


This entry was posted on February 7, 2016 in Academic Libraries, Assessment, Collections, Intriguing Ideas, LIS Profession.
