Elsevier Acquires bepress – The Scholarly Kitchen

In a move entirely consistent with its strategy to pivot beyond content licensing, Elsevier has acquired bepress, the institutional repository provider.

Source: Elsevier Acquires bepress – The Scholarly Kitchen

Of course, this is no longer “news”, being several days old, but the analysis in SK is interesting.  As scholarly communication’s greatest perceived “pariah”, Elsevier is keenly watched, and predictions of the company’s intentions abound.  Referenced in this article is Lisa Hinchliffe’s blog post from this past February (after Elsevier’s acquisition of SSRN), which I re-read today and found quite enlightening (again).

It is now apparent that Elsevier is shifting from selling articles in journals to selling more granular information and data.  So, is this something we librarians should be concerned about?  After all, they are making more and more information freely available – isn’t that what we wanted?  And the targets of Elsevier’s sales pitch will no longer be libraries at all, but campus and organizational administration offices.  Shouldn’t we all breathe a sigh of relief?

Two issues concern me.  One is Elsevier’s clear attempt to acquire data upstream of the research process, with the intent to sell it back to the institution as information.  That is nothing new…isn’t that what journal publishers have been doing all along?  Of course, Elsevier adds value to the information, just as they do with journal articles.  The question is the cost of that value.  It could be that the cost will be balanced by the benefits – the information provided could be used to increase efficiency (driving down overall costs) or funding (in the form of grants or donations or state funds).  But like journal subscriptions, the cost could be paid to the detriment of other services.  Unlike journal subscriptions, however, the consumers who pay the bills (campus administration) would be the primary users of the information.  This elimination of the “moral hazard” could be the key difference that prevents the same kind of pricing “crisis” that currently afflicts journal subscriptions.

The other issue that concerns me is more fundamental to scholarly communication.  A key service that Elsevier is positioning itself to provide is a platform for the entire research and scholarly life cycle – from its inception in labs and grant proposals, to its dissemination in the form of articles, references, and citations, to data storage and re-use.  The goal is to improve the efficiency of this process, particularly the writing, submission, review, and publishing of articles.  Their “waterfall” patent would decrease the time required from initial authorship to publication in any journal, even those not owned by Elsevier.  My concern is whether such an improvement is ultimately good for researchers and scholars.  Greasing the wheels of the process may result in research being published sooner (which is good for sharing), but wouldn’t that add to the problem of information overload?  But then, Elsevier has a solution for that as well.  Problem solved?

Whatever problems may or may not come from Elsevier’s new strategy, the sheer fact that the company is shifting away from journal subscriptions (and perhaps even APCs) could be considered a victory for the decades of effort spent developing and supporting the Open Access movement.

 

Post-Beall’s List

No doubt you have heard that Professor Jeffrey Beall’s blog, http://scholarlyoa.com, with its associated lists of suspect publishers, journals, and metrics, has been removed from public view at his request.  Here are some sites that have detailed timelines of this event:

The removal was sudden and unexplained, with speculation ranging from the capitalist (Cabell’s was going to take it over) to the nefarious (lawyers threatening legal action).  The explanation receiving the most acceptance centers on a lawsuit settlement.

Several people have commented on the risk of having a single person making judgments and applying labels of “predatory”, although his criteria are clear and documented.  Perhaps this is an opportunity for the librarian, researcher, and publishing communities to collaborate on a set of evaluation criteria that could be applied more openly.  There could be a variety of ways to handle the evaluations – who evaluates what, based on which criteria, and how frequently.  It needn’t be something that everyone agrees to, but if there were more voices involved, it might gain more acceptance.

Professionally, I admire Mr. Beall for his fearlessness and his tenacity in starting and continuing this effort.  I do not want to see it fall by the wayside.  However, I believe his use of the defamatory label (“predatory”) and his resistance to collaboration have made him and his work a lightning rod of controversy.  As a collection assessment librarian, I am always looking for tools and methods for comparing the quality of our collections.  I have always wanted to apply Beall’s methods more broadly, and perhaps now is the opportunity to do so.

Notes from the Library Publishing Forum

It’s been a busy week at UNT – host of two meetings: the Library Publishing Forum and the 7th annual Open Access Symposium.  The dates, and the themes, overlap – openness, transparency, shifting paradigms of ownership of scholarly communication.  The Library Publishing Forum (I’ll use LPF for brevity) included a half-day workshop on Open Educational Resources (OER – word of warning…the field is rife with abbreviations), thus bringing education and pedagogy into the fold.

Here is a word cloud of my notes from both meetings (image: LPF_and_OA_2016):

Common words jump out, like “Publish” and “Journal” and “Library” and “University”.  Notice other words… “Communism” (more about that shortly), “Open,” “Sustain,” and “Go” (I like that last one in particular).

Now, looking solely at the LPF, other words appear more distinct (image: LPF2016), specifically “Faculty” and “Textbook” and “Student”.  This is the effect of the emphasis on OERs.  The workshop was scheduled due in no small part to the number of proposals submitted on this topic, so it reflects genuine, growing interest in the community.
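For the curious, a cloud like these takes only a few lines to produce.  Here is a minimal sketch using the third-party Python wordcloud package; this is an illustration, not necessarily the exact tool I used, and the file names are hypothetical stand-ins for my OneNote export:

```python
# A minimal sketch of generating a word cloud from exported notes.
# Assumes the third-party "wordcloud" package; file names are hypothetical.
from wordcloud import WordCloud, STOPWORDS

with open("lpf_oa_notes.txt", encoding="utf-8") as f:
    text = f.read()

# Drop note-taking filler so the substantive terms stand out
stopwords = STOPWORDS | {"session", "speaker", "notes"}
cloud = WordCloud(width=800, height=400, stopwords=stopwords).generate(text)
cloud.to_file("LPF_and_OA_2016.png")
```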

The geographic representation of the attendees is also interesting to me.  The LPF is the meeting of the Library Publishing Coalition (LPC – I warned you).  The members are still primarily American academic institutions, although there is growing internationalism.

If you are interested, the notes that I took have been exported from OneNote into a PDF – another warning: it is long and somewhat annotated.  Recordings will be available shortly – when they are, I will post the links.

Library Publishing Forum 2016

 

Emerald | Looking Back, Looking Ahead with Jaeger, Bertot and Hines

Source: Emerald | Looking Back, Looking Ahead with Jaeger, Bertot and Hines

Here are some thoughts about the past year and the year to come in librarianship.  I think the question, “What were the biggest changes in 2015?”, is not terribly appropriate for our field.  Like a ship, librarianship (OK, pun intended) does not turn on a dime.  Even the rather radical change to demand-driven acquisitions has taken ten years to fully unfold.  A potential change may get a lot of attention (and “air” time in the professional media), but the actual change may never happen, and if it does, it happens years later.  I think a better question, and the one that the authors appear to have answered, is, “What has garnered the most attention in librarianship in 2015?”

I’m not sure you can call “continued” anything a big “change”…while I do not doubt the significance of these issues, libraries have been facing budget issues throughout the entire modern era of libraries.  And we have been doubting our relevance since I started library school, when the Web was a neonate.  The emphasis on community, while not exactly new, did, I believe, gain momentum.

I’m intrigued by the “sustainable development” concept for 2016…and not thrilled about the prospect of “leaner” staff models.  How much fat is left to trim?  I agree that more libraries will be opening up their spaces, and physical books will likely be moved or removed.  Whether that is good, bad, or just different…only time will tell.  If we have “leaner staff models”, how can we continue to grow new services (community engagement) and maintain our current services (providing a wide selection of print books on open shelves)?

So, what are your thoughts?

The Cost of “Doing More With Less” | Library Babel Fish

We have less funding for the things that really matter while paying much more to compensate for austerity policies.

Source: The Cost of “Doing More With Less” | Library Babel Fish | Inside Higher Ed

Key statements that stood out to me include:

When we’re told to do more with less, we end up building a costly apparatus for generating income while cutting things that actually support the organization’s mission. That distorts everything.

and…(emphasis added)

The public becomes distrustful of higher education because it costs too much – because we aren’t sharing those costs collectively – and it’s warping the academy.

and more closely to the focus of this blog…(again, emphasis added)

…we are trapped in a strange world where everyone needs to publish more to prove their worth.  That requires more access to more research, even for small institutions, so we’ve outsourced much of our collection building, first to aggregators of electronic journals who can provide us the most for the money; now to individual publishers as we stretch our budgets by buying access to one article at a time for one user at a time. There’s nothing collective about it. Temporary access for individuals comes at the expense of access for many and access in the future.

Not wanting to excise any more of the short post, I will summarize the final argument as an appeal for libraries to participate in new models of scholarly communication that more effectively share the costs.

I wholeheartedly agree with Ms. Fister’s argument about collective good and shared costs.  These attempts at austerity and reduced taxation benefit only those with the most money and, therefore, the most power.  And the policies could indeed worsen our society in the long run by increasing the disparity between classes of people.

Being at the center of a shift in collection development from “just-in-case” to “just-in-time”, I’ve had my own concerns.  At this time, I am ambivalent about the potential consequences of either philosophy.  The result of the traditional method – speculative purchase of permanent ownership of material – has been collections that remain largely unused, especially the largest collections.  (I would be interested in comparing the efficiency, as measured by the rate of use, of smaller versus larger collections.)  It is no wonder that we consider the alternative of “renting” access.  By shifting our focus from serving collections to serving our patrons, we have adjusted our expectations of what our collections should be and do.

I also understand how this shift has adjusted the balance of power in the publisher-library relationship.  No longer is the relationship simply seller and purchaser…what is sold is merely access, not ownership.  Thus, the seller retains control over the content, with the purchaser (libraries) beholden to the demands of the seller.  Of course, there is still one card that libraries have in their hand…we can cancel.  But because we rent the access, cancellation leaves the library with no access at all – to the past as well as the future.  This is the “ace in the hole” that providers hold over libraries.

The difficulty is striking a balance between developing collections that meet the immediate and short-term needs of our patrons and meeting the long-term needs of the broader community (local and global).  We librarians, and other professionals closely associated with scholarly communications, are still feeling our way toward this balance.

Maintaining relevance in the age of OA

Librarians have been at the center of the Open Access movement, stemming in no small part from the unsustainable rise of journal expenses.  But as more and more articles become freely available (free, as in beer), how will libraries maintain their relevance to our users?

I know, yet another existential, navel-gazing post by a Chicken Little on the future of libraries.  Actually, no, it is not, because I have little doubt about the near- and medium-term future of our profession (and, short of the total extinction of mankind due to ecological or planetary tragedies, our long-term looks good, too).  But the question still remains…what if the majority of articles that our patrons use were freely available on the Web?  How would libraries remain relevant to our (former?) patrons?

What got me thinking about this was Aaron Tay’s post about his own foray into this issue.  His question was: what percentage of the citations made by our researchers is to freely available content?  His initial results were actually quite astonishing: 80% of the articles cited by his institution’s economics faculty were freely available.  Of course, Aaron posed a number of limitations to this result, the most problematic being that the timing of availability was unknown.  The papers he examined were from 2009-2015, but it is not known whether the cited articles were freely available at the time the authors gathered them.  That timing problem is a major obstacle to doing citation analysis for this question…but that’s another story.
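For illustration, here is a minimal sketch of the kind of check behind Aaron’s question, using the Unpaywall API (a service that postdates his post) to ask whether a free version of each cited DOI exists.  The DOIs and email are placeholders, and note the timing caveat above: this reports what is free now, not what was free when the citing author found it.

```python
# A sketch of checking a list of cited DOIs against an OA-status service.
# Uses the Unpaywall API purely as an illustration; DOIs and email are
# placeholders, not Aaron's actual data.
import json
import urllib.request

EMAIL = "you@example.com"  # Unpaywall asks for a contact email
dois = ["10.1038/nature12373", "10.1257/aer.100.1.1"]  # placeholder DOIs

free = 0
for doi in dois:
    url = f"https://api.unpaywall.org/v2/{doi}?email={EMAIL}"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    free += bool(record.get("is_oa"))  # True when a free version is known

print(f"{free}/{len(dois)} cited articles have a free version today")
```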

We are aware of the numerous studies suggesting that academic faculty are using library-specific tools less and less for finding the resources they need.  By “library-specific”, I’m referring to the more traditional tools that librarians have developed or maintained, including the catalog, ejournal lists, and even the newer Discovery tools.  Instead, our faculty are turning more and more to Google Scholar or the Web at large.  Setting aside issues related to efficiency, let us assume that most of the faculty use GS most of the time to find articles (we will deal only with published journal articles for this thought experiment).  Will they find the articles they need to be freely available?

Aaron cites a number of articles that have attempted to answer the corollary to his question: What percentage of articles on the Web are freely-available?  Their estimates ranged from 20% to 61% (Aaron – you have the foundations of a good systematic review and possibly a meta-analysis…keep going!).  Based on two studies that appeared to him to have more valid methodologies, Aaron estimates about 40%.  What if this rises to a critical mass of, say, 60% or 70%?  Could librarians strategically cut their costly and burdensome journal subscriptions?  Would libraries need to continue to be the purchasing agent (or, “wallet” as Aaron puts it) for the faculty – at least for journal articles?  If so, how would we remain relevant, especially to the disciplines for which scholarly outputs are primarily journal articles?

We could respond to this challenge as we did at the dawn of the Web, developing our own solutions, such as attempting to “catalog” it.  Or we could look at what we can and cannot control, focus on the former, and let the latter go.  We are often described as middle-men in the flow of information.  We attempt to meet the needs of our patrons, which differ greatly from institution to institution.  But we cannot control either side – neither what publishers do, nor what our patrons do.  For instance, just because a version of an article is freely available does not mean it is easily accessible.  Indeed, publishers have attempted to build friction into their OA models, actively making the OA versions less accessible or desirable.  And just because we expend significant resources on making our expensive resources available (via Discovery services or the catalog) does not mean our patrons will use them.

So, how do we remain relevant?  When I ask that question of myself, I next ask: why should we remain relevant?  Aside from my own future employment, why is it important for libraries to remain relevant as middle-men?  After all, many middle-man occupations have gone by the wayside – people now buy many things once only available via salesmen.  My response to myself (and now, you) is that information is too important to be driven solely by market forces.  Information is what our decisions are based on, decisions that affect our lives, our livelihood, our future.  Access to information is a core value of librarianship, and by abandoning this to the market, we as a society risk being manipulated by those who control the information.

Given this, perhaps we need to re-think our role as ‘middle-men’.  Middle-men typically are independent, beholden to neither party – they attempt to meet the needs of both parties – the producers and the customers.  So, I would argue that librarians are more beholden to the people or consumers of information than we are to the producers.  True, we need to ensure that producers of information can be sustained, but it is the information itself that is important to our patrons, our people.  So I would posit that we librarians can and do maintain our relevance not by being middle-men or agents, but rather ombudsmen or advocates for the consumers of information.  Now, this should not mean that we control the flow of information, but that we ensure that quality information is available and accessible to our patrons.

How We Pay for Publishing by Kevin S. Hawkins | Against The Grain

How We Pay for Publishing by Kevin S. Hawkins (Director of Library Publishing, University of North Texas Libraries) | Against The Grain.

Key point in this article (emphasis added):

While the problem has been created by commercial publishers skimming the cream of academic publications and then charging handsomely for access to these prestige brands, it has been difficult to effect change in the system because scholars are the consumers of the content but only rarely the purchasers of it; as with health care and prescription drugs, the true costs of market consolidation and intellectual property protection are not borne by the consumers of the services and products.

…and later,

There is increasing acceptance that a university press is a mission-driven operation that cannot be expected to balance its books at the end of each fiscal year…

This article leads me to wonder, yet again, about the nature of the relationship between libraries and providers of content.  Specifically, what price are libraries willing to pay for this content?  While this could be answered by looking at what libraries have been paying, I’m not sure that answer would be valid.  For one thing, libraries have not always been fully informed customers.  Furthermore, we have not always considered alternatives to the content.

The economics of librarianship is nearly as complex as that of the health care industry.  Are there lessons we can learn?

Housecleaning at the Directory of Open Access Journals

This is interesting news from DOAJ. I’ve been concerned about using this source in our Ejournals list and our Summon. It will be interesting to compare the difference pre- and post-revision.

Notes from Cowtown: NASIG 2014 – Saturday

This continues my notes from the North American Serials Interest Group’s 2014 conference, which was in Fort Worth.  I actually think that downtown Ft. Worth is nicer than downtown Dallas.  This may be because I rarely go there and it’s usually quite new to me.  Downtown Dallas is old hat with few pleasant surprises.  It also seems more hum-drum and down-to-business.

Getting Lost

In my first post about NASIG, I had neglected to mention that I had actually gotten lost on the way in.  Yes, lost in Fort Worth.  This was due to a combination of over- and under-confidence in my own memory and sense of direction.  The last time I was in Cowtown was for TLA over a year ago, but I thought I had a pretty good memory of the route to take.  Then I doubted myself when I ended up going south on Interstate 820 (the main loop around town) (“I don’t remember going this way…”).  Rather than stay where I was, I jumped off the highway, wholeheartedly believing that State Highway 183 was what I needed to get directly into downtown.  It felt right, until the signs for 183 directed me right, which felt like north, away from where I thought I was going.  Yes, indeed, I ended up crossing IH 35W (that goes north & south), and I was heading away from downtown (“yep, there it was…downtown”).  Yes, GPS would have been nice, but then I found Main Street. You just can’t go wrong getting on Main Street.  Needless to say, I did not get lost on Saturday.

Vision Speaker: From a System of Journals to a Web of Objects, Herbert Van de Sompel

I was very interested in hearing the Vision Speaker for today, Herbert Van de Sompel.  I remember first hearing him at a NISO meeting in Dallas way back in 2007, but I had already been following his work on MESUR.  So I was interested in “catching up” with his work.  MESUR itself has become the basis of altmetrics, in that it measures usage at the article level, rather than the journal level.  He listed his other projects, notably Hiberlink and Memento.  These are Web archiving infrastructures, which apparently is the focus of his efforts nowadays.  However, it is more than just saving files.  He is concerned about the long-term viability of scholarly communication on the Web, and key to this problem is that “increasing accessibility of machines leads to better human tools.”  So his work is not unlike the Linked Data approaches that Richard Wallis & OCLC are pursuing – the former with linking and archiving, and the latter with metadata.

Herbert showed how his work and projects are supporting the machine-to-machine communication of the scholarly communication functions on the web:

  • Registration
  • Certification
  • Awareness
  • Archiving – Hiberlink
  • Rewarding – MESUR

Herbert noted that libraries are central to the long-term accessibility of the paper-based scholarly record, but have been left out of the web-based record.  (Note to self: Can we recover this?  Should we recover this?)  While he is developing the infrastructure for tracking changes (e.g. links, versions, etc.), there are still too few scholarly communications archives – and those that do exist (to reiterate Katherine’s comment from Friday) are the easy ones, the ones least at risk of changing or disappearing.  He mentioned the Keepers Registry of journal archives.  In particular, we have very poor archives of non-document-based Web objects (e.g. software, videos, data, etc.), because nobody has taken up the notion of guardianship.  Herbert also noted “reference rot” – a combination of “link rot” and changes to the content itself (e.g. of wikis, Web sites, even journal articles).  Here he showed a graph illustrating the dramatic increase in the number of links to things that are not journal articles, along with a 20%-per-year link rot rate.
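As a back-of-the-envelope illustration of the link-rot problem (not Herbert’s methodology), one could check a reference list’s URLs against the live Web and against the Memento TimeTravel service, which aggregates the holdings of participating web archives.  The URLs below are placeholders, and this is a sketch rather than a robust crawler:

```python
# A rough sketch of measuring "link rot" in a reference list: check whether
# each cited URL still resolves, and whether an archived snapshot exists
# via the public Memento TimeTravel API. No retries, no politeness delays.
import urllib.error
import urllib.request

urls = ["http://example.org/dataset", "http://example.org/old-page"]  # placeholders

for url in urls:
    try:
        live = urllib.request.urlopen(url, timeout=10).status == 200
    except (urllib.error.URLError, ValueError):
        live = False
    # Ask TimeTravel for a snapshot near 2014; it returns an error
    # status when no memento of the URL exists in the archives.
    timegate = f"http://timetravel.mementoweb.org/api/json/2014/{url}"
    try:
        urllib.request.urlopen(timegate, timeout=10)
        archived = True
    except urllib.error.URLError:
        archived = False
    print(f"{url}: live={live}, archived={archived}")
```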

In summary, Herbert Van de Sompel is developing solutions and advocating for a pro-active web archiving approach, suggesting that we start with our own institutional web sites.

Concurrent Session 1: Actions & Updates on the Standards & Best Practices Front, Laurie Kaplan (ProQuest) and Nettie Lagace (NISO) 

I really needed to catch up on the standards coming out of NISO.  I used to pay particular attention to these when I was more involved on the back end.  Now that I’m only tangentially involved, somewhere between the front- and back-ends, I’ve let this slip off my radar.  What was particularly enlightening was learning the difference between a “standard” and a “best practice”.  The former is developed using a more formal process, with more people, more comments, and more time.  The result is enforceable and fixed, with changes following the same formal procedure.  Standards tend to cover issues that have been well researched and address problems already identified.  While standards are the business suits, best practices are business casual (my analogy, not theirs).  They tend to cover emerging issues and technologies.  The time to development is quicker, fewer people are involved, and the result is more like a “gentleman’s agreement” – not enforceable and more fluid.  Participants can implement all, parts, or none.  But at the very least, there is something that some people agreed to do the same way.  After laying out this difference, the presenters discussed four key best practices: KBART, PIE-J, ODI and OAMI.

  • KBART: Knowledge Bases And Related Tools
    • These are guidelines aimed at the developers & users of OpenURL knowledge bases (e.g. link resolvers). They address problems with the KBs, notably wrong or outdated information.
    • KBART is a metadata exchange format, which explicitly lists names and definitions of metadata elements.
    • Phase II is available now and incorporates:
      • metadata for consortia
      • Open Access metadata
        • Addressing the problems of OA linking, including hybrid OA models and article-level access (KBART is title-level)
        • NOTE: The solution is a metadata element that identifies a title as either “F” (100% free) or “P” (<100% free).  There is no in-between, at least until KBART gets to the article level.  (A small parsing sketch follows after this list.)
      • metadata for ebooks, book series, and conference proceedings
  • PIE-J: Presentation and identification of ejournals
    • This is a standard, er, best practice aimed squarely at publishers and ejournal vendors.  Essentially, please, please, please use standard ISSNs, titles, and citation practices!
    • The latest update includes not only the definitions (e.g. of a title), but also good examples (no bad ones – there would just be too many 😉)
    • There is also a template of a letter libraries (and ejournal vendors) could use to request a publisher (or vendor) to follow the best practice (remember, it’s not a standard).
  • ODI: Open Discovery Initiative
    • This effort is to promote transparency in discovery systems.
    • The work is nearly completed – it is in the voting stages now.
    • This best practice defines:
      • ways to assess publishers’ participation
      • models for fair linking
      • measuring usage of a publisher’s content (NOTE: this is not usage of the discovery system itself, but of the content)
        • This is an issue near & dear to my heart as I’ve been struggling to determine how much of database is being used via our Summon.
    • There are four groups of practices:
      • Technological – data format
      • Communication of libraries’ rights and permissions regarding linking
      • Definitions of fair linking
      • Exchange of usage data with the publisher
  • OAMI – Open Access Metadata & Indicators
    • The purpose of OAMI is to standardize how publishers indicate accessibility.
    • Those involved initially wanted to develop standard visual indicators (e.g. icons), but they could see they were going down a rabbit hole with no way out.
    • Now they’re focused on defining metadata, “Just the basic facts” – is it free to read?
    • Developed 2 tags:
      • free_to_read – includes start dates for embargoes
      • license_ref – link to the license
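As promised above, here is a minimal sketch of working with that KBART free/paid indicator: reading a Phase II title list (tab-separated) and splitting titles by access type.  I am assuming the Phase II field names publication_title and access_type (values “F” and “P”); the file name is hypothetical.

```python
# A minimal sketch of reading a KBART Phase II title list and counting
# titles by the free/paid indicator. Field names assumed from Phase II;
# the file name is hypothetical.
import csv
from collections import Counter

with open("provider_titles_kbart.txt", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

counts = Counter(row.get("access_type", "") for row in rows)
print(f"{counts['F']} fully free titles, {counts['P']} not fully free")

# Titles flagged fully free could feed an OA-aware link-resolver check
free_titles = [r["publication_title"] for r in rows if r.get("access_type") == "F"]
```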

<shamelessplug>Concurrent Session 2: The Quick and the Dirty: The Good, the Bad, and the Ugly of Database Overlap at the Journal Title Level, Karen Harker & Priya Kizhakkethil</shamelessplug>

This session focused on the specifics of determining overlap among abstract & indexing databases, full-text aggregators, and journal packages.  This was one of my first projects here, and I’ve finally been able to bring it out.  The graduate library assistant, Priya, did much of the work in the second phase, so it was only right that she discuss what she did.  While Priya is graduating with her MLS this Saturday, she is pursuing a PhD, so I’m very fortunate to be able to keep her employed here.  You can view the presentation yourself to get the nitty-gritty.  Essentially, we tried several sources of journal coverage data to help us determine the overlap of these journal-based resources.  Our conclusion is that the freely available sources (Cufts & JISC ADAT) are good enough for the A&Is, and your own linking service should have a good tool for full-text comparisons.  But there will be instances where downloading the journal lists yourself is the only way; a sketch of that comparison follows.
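The title-list comparison itself is simple once you have the lists in hand.  A minimal sketch, assuming each resource’s titles have been downloaded as a CSV with an ISSN column (file and column names are hypothetical):

```python
# A minimal sketch of journal-title overlap between two resources,
# compared by ISSN. File and column names are hypothetical.
import csv

def issns(path, column="ISSN"):
    """Read a downloaded title list and return its set of ISSNs."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[column].strip() for row in csv.DictReader(f) if row[column].strip()}

a = issns("database_a_titles.csv")
b = issns("database_b_titles.csv")
overlap = a & b  # titles covered by both resources

print(f"A: {len(a)} titles, B: {len(b)} titles, overlap: {len(overlap)}")
print(f"Share of A also covered by B: {len(overlap) / len(a):.0%}")
```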

<shamelessplug>Concurrent Session 3: Planning for the Budget-ocalypse: The evolution of a serials/ER cancellation methodology, Todd Enoch & Karen Harker</shamelessplug>

Yes, this was my second presentation – right in a row.  My only complaint was that I missed Michael Levine-Clark’s session on ebooks.  This was yet another “how we did it”, but I think it was novel enough to warrant presentation at a conference.  Regular readers of my blog (most of whom share the same employer) are aware of the financial struggles the UNT Libraries have faced over the last few years.  When I was hired, I knew that my work would be focused on reducing resources.  This session summarized the methods we have “evolved” over the last few years – from the simple, single-celled organisms of across-the-board cuts and elimination of duplicates, to the complex flora and fauna involving distributions of usage, relative assessments, and percentiles.  Todd provided the foundation and the history of the resources and initial steps, notably the accelerated e-conversion project.  The latest round of cuts (our “budget-ocalypse”) kicked the evolution into high gear, enabled by the matrix model of resource evaluation written by Gerri Foudy & Alesia McManus in 2005 and published in the JAL.  Key features inherited from the article include:

  1. Using 3-year average uses
  2. Rating resources on scope, ease of use and breadth
  3. Using a 5-year “inflation factor”

Mutations to this process included (a rough scoring sketch follows the list below):

  • Using the usage measure most closely associated with the outcome of a user’s session (e.g. full-text, abstract view), rather than the lowest common usage measure
  • Looking at the distribution of usage across a package of ejournals
  • Evaluating the resources’ “scores” by type, comparing each resource with its peers
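To make the scoring concrete, here is a rough sketch of the arithmetic these features imply: a 3-year average of uses, a cost-per-use figure, and a 5-year cost projection under an assumed inflation rate.  The numbers and the 6% rate are illustrative, not our actual figures.

```python
# A rough, illustrative sketch of the cancellation-scoring arithmetic:
# 3-year average uses, cost per use, and a 5-year inflation projection.
# The inflation rate and sample numbers are assumptions, not our data.
def score(cost, uses_by_year, inflation=0.06, horizon=5):
    avg_uses = sum(uses_by_year) / len(uses_by_year)        # 3-year average
    cost_per_use = cost / avg_uses if avg_uses else float("inf")
    projected_cost = cost * (1 + inflation) ** horizon      # 5-year "inflation factor"
    return {"avg_uses": round(avg_uses, 1),
            "cost_per_use": round(cost_per_use, 2),
            "projected_cost": round(projected_cost, 2)}

print(score(cost=12500, uses_by_year=[840, 910, 775]))
```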

Well, you can learn more by viewing the presentation…

Concurrent Session 4: ORCID Identifiers: Planned and potential uses by associations, publishers, and librarians, Barbara Chen (MLA), Gail Clement (TAMU) and Joseph Thomas (ECSU)

While setting up an ORCID profile is free to researchers, ORCID has members who pay to support the program.  The organization, in turn, has created APIs to help with the machine-to-machine communication that Van de Sompel referred to, the kind that makes for useful human tools.
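To give a flavor of that machine-to-machine side, here is a minimal sketch of pulling the works on an ORCID record from the public API.  The iD shown is a placeholder, and the v3.0 endpoint is the current one, not the version that was available at the time of this conference.

```python
# A minimal sketch of fetching the works on an ORCID record from the
# public API. The iD is a placeholder; v3.0 is the current API version.
import json
import urllib.request

orcid_id = "0000-0002-1825-0097"  # placeholder iD
req = urllib.request.Request(
    f"https://pub.orcid.org/v3.0/{orcid_id}/works",
    headers={"Accept": "application/json"},  # ask for JSON rather than XML
)
with urllib.request.urlopen(req) as resp:
    works = json.load(resp)

print(f"{len(works.get('group', []))} works on this ORCID record")
```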

Barbara Chen described how the Modern Language Association has used ORCID to help with their International Bibliography.  They encourage (not require) their members to create an ORCID iD, and then they use ORCID’s Import Works function with the MLA International Bibliography as a source.

Gail Clement described how they integrated ORCID on a mass scale with all of their graduate students.  Their goal was to ensure that all students have a “scholarly identity” at the start of their careers.  They hope to use this to track student outcomes over time.  They got funding from both ORCID and the Sloan Foundation to develop the tools and provide the support to implement this.  Key to the success of getting the grant was the support of the provost.  Eventually they will expand to faculty, but their primary emphasis is on students.  Essentially, every graduate student had a profile in ORCID initiated or “minted”, but it was up to the student to “claim” that identity.  Here was the rub: only about 20% of identities were claimed within two months of minting.  The most common reason for not claiming was that students were not checking their school emails (UGH!).  But using a combination of communication methods, Gail was able to get most of the IDs claimed.  She integrated ORCID help documentation into a LibGuide to help reinforce the connection between ORCID and TAMU.

Joseph then described the much more low-key efforts to extend ORCID’s reach to faculty at East Carolina State University.  This was one-on-one outreach, accomplished by integrating ORCID into the training, instructions, and presentations of the research resources (e.g. PIVOT, SciVal, VIVO, etc.).  He concurred that getting administrative support was key to successful outreach.

I would be very interested in being part of an initiative on our campus, but I don’t know how that would fly.

Sessions I missed:

One of the adverse effects of presenting at a conference is that you miss others’ presentations.  Here are two that I was disappointed to miss:

 
