NISO Forum – Trends and Thoughts

Earlier this month, I went to the NISO Forum on library resource management systems, which was conveniently located right here in the Financial District of Boston.  The program was fantastic, and the presentations are now available and well worth a look, even in slide format.

A number of words, themes, and ideas resonated throughout the two-day program:

  • Agility: The real-time web is here. Terabytes are here. E-books are here. What are we going to do, and can we do it fast enough?
  • Collaboration: Dare I say, 2.0?  Not the traditional library consortium, but ad-hoc, dynamic, and extending beyond libraries to the broader research and education communities. Data curation, network-level services, putting the library where the user is – all these require collaboration beyond the traditional scope of library consortia or collaboratives.
  • Context: Lorcan Dempsey has a wonderful graphic, used by Rachel Bruce of JISC in her presentation and included in a blog post by Dempsey about the forum, that gets at the importance of context, and Kevin Kidd describes work that Boston College is undertaking in this area.  It is no longer enough for the library to operate in the library environment; we must be present and relevant in the library users’ workflows elsewhere: in the open web, in institutional systems, in the personal tools researchers use in their daily lives.  This requires reconsidering and rethinking what it means to be committed to privacy. How can we collect, aggregate, and use user data to provide services that are quickly becoming essential to our users, while still respecting and guarding privacy? Is it possible?
  • Network level: “work at the network level as far as possible” (Bruce), “working at the highest appropriate level” (Kyle Banerjee, speaking about large consortial system implementation of resource sharing and delivery), “cloud library as a shared network resource” (Kat Hagedorn, speaking about Hathi Trust’s cloud library project).
  • Open source: Experimentation and adoption for both small and large systems and services, from the consortial implementation of Evergreen discussed by Grace Liu to Annette Bailey’s experience using open source to develop tools that work with vended systems.

(Heh, I didn’t intentionally put those in alphabetical order!)

My head was really spinning by the end, and I haven’t even mentioned all the sessions here.  Follow the link through to see Oren Beit-Arie’s keynote, Judi Briden’s presentation about the latest anthropological research at U of Rochester, and more.

Modular Is As Modular Does

We work with a lot of different vendors at MPOW (my place of work). Various parts of our e-resource administration and access are powered by products from Ex Libris, Serials Solutions, and III. (We are currently a development partner for Encore by III, but I can’t say any more about that or we’ll all have to spend the next few months quarantined in Emeryville, which actually sounds pretty good given today’s forecast.) We also just launched a Grokker visualization interface for some of our e-resources; you may be familiar with Grokker if your library licenses Ebsco databases.

It’s been interesting to work with all these different tools and interfaces on the one hand, and to hear talk about everything going modular on the other hand. We have a setup that works for us, but it just ain’t that easy to get all our systems talking to each other in the current environment. ILSs are from Mars, ERMs are from Venus. It’s challenging enough to get accurate information when your link resolver and the catalog it’s querying are made by the same company; start throwing other systems into the mix and everyone is likely to end up in counseling.
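To make the integration headache concrete, here is a rough sketch of one standards-based way a link resolver might ask a catalog about holdings, using SRU (Search/Retrieve via URL). The endpoint and index name below are made up for illustration; real servers expose different indexes and record schemas, which is exactly why getting systems to talk is so hard.

    # Rough sketch only: a link resolver asking a catalog "do we hold this ISSN?"
    # via SRU. The base URL and the bath.issn index are hypothetical examples.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    SRU_BASE = "https://catalog.example.edu/sru"  # placeholder SRU endpoint

    def holdings_lookup(issn):
        """Ask the catalog whether it has records matching this ISSN."""
        params = {
            "version": "1.1",
            "operation": "searchRetrieve",
            "query": f'bath.issn="{issn}"',
            "maximumRecords": "1",
            "recordSchema": "marcxml",
        }
        with urlopen(f"{SRU_BASE}?{urlencode(params)}") as response:
            return response.read()  # MARCXML, still to be parsed for holdings

    # holdings_lookup("0028-0836")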

Everyone wants their products to integrate so that they can expand their markets; no one wants to share so much that there’s no longer an incentive to stand by your vendor. I’m curious to see where things are really headed.

Ingenta Shares Holdings with Google Scholar

This little note was squeezed into the middle of an All My Eye entry about Ingenta at the Frankfurt Book Fair:

We’ve been working closely with Google for over 2 years now, and the latest development is that we will be making our library holdings data available to Google Scholar’s Library Links program.

The full press release is dated September 25 and I’m surprised I haven’t heard about it before today.

So scholars within an institution’s IP range (on campus or using proxied Scholar links) will get appropriate-copy links to Ingenta content without an intermediate OpenURL layer; Ingenta presumably gets its content highlighted in some way; and Google gets data about library holdings, which it may already have in the case of libraries that participate in the Library Links program. The downside is that the scholar may have no idea why he or she is entitled to the full text, unless the library ponies up for IngentaConnect Premium, which adds branding to the Ingenta site.
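For a sense of what that skipped OpenURL layer looks like, here is a minimal sketch of the two routes to full text. The resolver base URL and the Ingenta path are placeholders, not details of the actual Scholar/Ingenta arrangement.

    # Sketch of the two routes to full text; all URLs here are placeholders.
    from urllib.parse import urlencode

    RESOLVER_BASE = "https://resolver.example.edu/openurl"  # hypothetical link resolver

    def openurl_link(issn, volume, spage, atitle):
        """Route 1: an OpenURL 1.0 (Z39.88-2004) request sent to the library's
        link resolver, which works out where the appropriate copy lives."""
        params = {
            "url_ver": "Z39.88-2004",
            "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
            "rft.genre": "article",
            "rft.issn": issn,
            "rft.volume": volume,
            "rft.spage": spage,
            "rft.atitle": atitle,
        }
        return f"{RESOLVER_BASE}?{urlencode(params)}"

    def direct_link(article_path):
        """Route 2: what the holdings-sharing arrangement appears to enable --
        a link straight to the content, with no resolver in the middle."""
        return f"https://www.ingentaconnect.com/content/{article_path}"  # placeholder path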

It is unclear to me what the user will see for content the library doesn’t license and how the distinction will be made. All in all, an interesting development and one to watch.

D2D

D2D is the new thing, popping up recently on Lorcan Dempsey’s weblog and the program for NISO’s upcoming workshop.

What is it? Simply an acronym for “discovery to delivery,” the process that a person goes through to get anything from a peer-reviewed research article to a new pair of Manolo Blahniks. Four steps are often outlined: discovery, location, request, delivery.
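Just to keep the chain straight, here is a toy sketch of those four steps; the function bodies stand in for entire systems (search index, knowledge base, ILL or circulation, fulfillment), so don’t read anything more into them.

    # Toy sketch of the D2D chain; each function stands in for a whole system.
    def discover(query):
        # Find out the thing exists: catalog, Google, a footnote, a friend's blog.
        return {"title": query, "identifier": "doi:10.1000/example"}

    def locate(item):
        # Figure out which copy this user can actually get to.
        item["source"] = "licensed full text"  # or the stacks, or ILL, or a store
        return item

    def request(item):
        # Ask for it: click through, place a hold, add to cart.
        item["status"] = "requested"
        return item

    def deliver(item):
        # Get it in hand: a PDF, a circulation desk pickup, a box on the doorstep.
        item["status"] = "delivered"
        return item

    deliver(request(locate(discover("peer-reviewed article (or Manolos)"))))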

Unfortunately, and for many reasons, much of the free web does a better job of providing a seamless D2D experience than libraries do. What with our catalogs, database lists, metasearching, ILL systems, next-gen catalog overlays, and tools and technologies yet to be developed (not to mention links into library resources from sites like Google and Windows Live Academic), you can expect the D2D puzzle to occupy the profession and show up on blogs, programs, and journal pages for the foreseeable future.

Library Content in Blackboard

Claire Dygert of American University Library gave an excellent presentation at NASIG about her library’s use of Blackboard to provide course-related content to professors and students.

American University and Washington Research Libraries Consortium, to which AU belongs, developed a plug-in for Bb called LinkMaker. LinkMaker assists faculty and librarians with the creation of persistent, proxied links to content for e-reserves or other uses. It works with most of their subscribed online resources and (drumroll, please!) they have made it available as an open-source tool.
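I haven’t seen LinkMaker’s internals, but the general shape of the problem it solves looks something like the sketch below, assuming an EZproxy-style login prefix and DOI-based persistent links; the proxy host and DOI are placeholders, not AU’s actual configuration.

    # The general idea behind persistent, proxied links -- not LinkMaker's code.
    # The proxy prefix and the DOI below are placeholders.
    from urllib.parse import quote

    PROXY_PREFIX = "https://ezproxy.example.edu/login?url="  # hypothetical EZproxy host

    def persistent_link(doi):
        """A DOI-based link that outlives platform redesigns and session URLs."""
        return f"https://doi.org/{doi}"

    def proxied_link(target_url):
        """Wrap the target so off-campus users authenticate before passing through."""
        return PROXY_PREFIX + quote(target_url, safe="")

    # What a professor would paste into a Blackboard course:
    # proxied_link(persistent_link("10.1000/xyz123"))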

The second thing they’ve done is to create a library “course” in Bb that provides information about what library content could be integrated into Bb. Then they model it in a sample course environment. In order to attract more interest, they don’t actually use the word “course,” but rather call it a library “site.” No one wants to join something if they think it means homework and tests!

In addition to LinkMaker, the library course includes information and sample language about information literacy, contacting librarians, tutorials, and streaming media that professors or other course builders can include in their own courses.

I don’t see the presentations on the conference website yet, but they should be posted within about one week.