Libraries are for Use

Demonstrating the value of librarianship

Intriguing new measures


We just recently finished evaluating our subscription resources for the express purpose of de-selecting or cancelling. This has been a year-long process that included gathering as much usage data as could be scrounged up and merging that data with costs. Like most libraries going through this process, our key measures were usage, cost, and cost-per-use. We attempted to include other measures, such as impact factor, but this was time-consuming and coverage was spotty at best. So, what other measures would have been useful?
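
For anyone doing the same exercise, the mechanics of the merge are simple enough that a few lines of pandas cover them. This is only a sketch; the file names and column names (Title, Usage, Cost) stand in for whatever your COUNTER reports and invoice spreadsheets actually contain.

```python
import pandas as pd
import numpy as np

usage = pd.read_csv("counter_usage.csv")       # e.g., COUNTER totals: Title, Usage
costs = pd.read_csv("subscription_costs.csv")  # annual spend: Title, Cost

merged = usage.merge(costs, on="Title", how="outer")

# Cost-per-use; zero-use titles get NaN instead of a divide-by-zero,
# which makes them easy to flag for review.
merged["CPU"] = merged["Cost"] / merged["Usage"].replace(0, np.nan)

print(merged.sort_values("CPU", ascending=False).head(20))
```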

The article by Bergstrom, Courant, et al. recently published in PNAS, “Evaluating big deal journal bundles,” has caused quite a stir, and I’m finally able to study it in detail. While their theories require more microeconomic background than I have at this time, I was intrigued by the measures they used, notably cost-per-article and cost-per-citation. Both are measures of value, but the former measures the value of quantity, while the latter purports to measure the value of quality. Ted Bergstrom has been collating this data regularly for the last ten years on his site, JournalPrices.com, from which it is freely available for download. I was able to integrate this data into the selection decisions for one package by using the value rating (Good, Medium, and Bad) to filter out titles, particularly those with moderate usage and cost-per-use. While it was easy to select the titles with the lowest CPU, and to exclude those with very high CPU due to low usage, more information was needed for the titles in the middle, as well as for the expensive titles with moderate usage (which pushed the CPUs up).
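
In practice the screen looked roughly like the sketch below. The thresholds and the column names (ISSN, CPU, ValueRating) are placeholders, not the actual cut-offs from our package or the actual layout of the journalprices.com download.

```python
import pandas as pd

package = pd.read_csv("package_titles.csv")        # our titles: ISSN, Cost, Usage, CPU
ratings = pd.read_csv("journalprices_export.csv")  # Bergstrom's data: ISSN, ValueRating

df = package.merge(ratings[["ISSN", "ValueRating"]], on="ISSN", how="left")

keep = df[df["CPU"] <= 5]                                    # clearly cheap per use
drop = df[(df["CPU"] > 50) | (df["ValueRating"] == "Bad")]   # clearly out
middle = df.drop(index=keep.index.union(drop.index))         # still needs more measures
```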

One problem with using Bergstrom’s data is that the cost data is not specific to our institution. They get their prices from the publishers, which may or may not reflect what an institution actually pays.

The article highlights the differences in what institutions pay, which can be extreme. A cursory look at the average prices by university category suggests that our prices most closely match those in the Research 1 category, in which we were classed. The authors used the “old” Carnegie Classification, and the University of North Texas is officially listed under High Research Activity in the current classification. It took a little googling to find the institutions under the old classification, but the University of Washington had such a list available, and UNT was classed as Research 1. So that makes sense: we’re paying about the average, based on a very cursory look. My next step will be to look at the data set more closely and calculate a correlation between the price data and our costs by publisher or subscription.
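
When I get to that step, it should not take much more than a merge and a correlation, something along these lines (file and column names are again placeholders):

```python
import pandas as pd

prices = pd.read_csv("journalprices_export.csv")  # ISSN, Publisher, ListPrice
paid = pd.read_csv("invoiced_costs.csv")          # ISSN, PaidPrice

df = prices.merge(paid, on="ISSN", how="inner")

# Overall agreement between listed and invoiced prices, then by publisher
print("Overall:", round(df["ListPrice"].corr(df["PaidPrice"]), 2))
for pub, grp in df.groupby("Publisher"):
    print(pub, round(grp["ListPrice"].corr(grp["PaidPrice"]), 2))
```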

Another problem with using Bergstrom’s data for selection purposes is that, as he openly admits, the citation data is limited to titles indexed in ISI’s Web of Science. There is some controversy surrounding admittance to the inner sanctum of WoS, but it can be argued that the scientific community generally accepts admittance itself as an indication of quality, at least among STEM and the quantitative social sciences.

Given these caveats, would it be valid to include this measure as one factor in the selection or de-selection formula?  If so, how?  For one package, I used a blank (meaning not available) or a bad rating to exclude those middlin’ titles.  This helped, but was not enough to whittle the list down to one we could afford.  More measures are needed.

While journalprices.com ratings are good for journals, what measures (besides plain ol’ circulation) are useful for books? An article I happened upon (I’m not sure how) was published in 2010; man, I thought I was fairly up to date on the literature of collection evaluation and assessment, but there is really so much more out there. In “A Performance index approach to library collection,” the authors applied a generalization of the h-index, called the p-index or performance index, to books and circulation. The p-index weights the number of loans or circulations (quantity) by the number of books loaned (quality). This measure can be calculated for groups of monographs by subject or publisher. The h-index indicates the number of articles an author has published that have each received at least that many citations; an h-index of 11 means the author has 11 articles with at least 11 citations each. Obviously, the higher the number, the more prolific and the more highly cited she is: a measure of quantity and quality in one. Similarly, a p-index of 11 means that the subject or publisher has 11 books that have circulated at least 11 times each.
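
As described above, the calculation is the same ranked-count trick as the h-index, just applied to loans instead of citations. A toy version in Python, for a single subject or publisher group:

```python
def performance_index(circulations):
    """Largest n such that n books in the group have circulated at least n times.
    (This follows the h-index-style description above, not necessarily the
    exact formula in the 2010 article.)"""
    counts = sorted(circulations, reverse=True)
    p = 0
    for rank, loans in enumerate(counts, start=1):
        if loans >= rank:
            p = rank
        else:
            break
    return p

# 11 books with at least 11 loans each -> p-index of 11
print(performance_index([30, 25, 20, 18, 15, 14, 13, 12, 12, 11, 11, 9, 3, 1]))
```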

So there are now two additional measures to add to my growing list of collection assessment measures. Right now, I keep this list in SharePoint on our campus intranet. But I would like to create a library of sorts for these measures: one that is accessible to others, easily searched, includes details about each measure, references the source and any articles or books that have used or discussed it, and provides a platform for discussion about each one. The site would need to be highly structured, more like a database and less like Wikipedia. Unfortunately, there are few easy-to-use database systems that are free and accessible. This would be a good project for an MLS internship, eh?
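
Even something as small as a SQLite file would give the structure a wiki lacks. Purely as one possible starting point (table and field names are just a guess at a design, not an existing system), the registry might look like this:

```python
import sqlite3

con = sqlite3.connect("assessment_measures.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS measure (
    id         INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,   -- e.g. 'cost-per-use', 'p-index'
    definition TEXT,            -- how it is calculated
    source     TEXT             -- article, book, or site that defines it
);
CREATE TABLE IF NOT EXISTS reference (
    id         INTEGER PRIMARY KEY,
    measure_id INTEGER REFERENCES measure(id),
    citation   TEXT             -- works that use or discuss the measure
);
""")
con.commit()
```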

This entry was posted on October 26, 2014 in Assessment, Collections.