Saturday, December 29, 2012

Signal from the noise

I was talking to a couple of students from my university in an online chatroom* and one mentioned that he was doing the literature review for his final year thesis**. Out of professional curiosity, I asked how he was doing it, and he mentioned he just used Google Scholar.

Having just registered my institution with the Google Scholar Library Links programme, I naturally took the opportunity to tell him about it. He was blown away and said,

"You should really publicize this. There is no point having something like this if you don't tell people."

The really interesting thing is that, as we discussed it, we realized this was easier said than done.

Firstly, we do send mass emails on important library changes (though not this one yet), for example about the recent launch of our new discovery service, Summon, but he admitted he missed that one. Like many students he gets bombarded by so many emails that he just junks most of them without reading. Though I have heard students say they actually value the library ones, they just can't find them among all the rest.

Secondly, even though he is a fan of our Facebook page, he didn't see any announcement about the Google Scholar Library Links either, no doubt because it didn't appear in his newsfeed, or he just missed it among the large number of updates from friends. He's not on Twitter, but that just runs into different problems.

And then he asked me, "This is a new thing right? The librarians who came earlier this semester to help us with the final year project didn't mention this."

And then it hit me: you can market all you want through mass online channels, but nothing beats the personal attention of reference liaisons. In fact, come to think of it, wasn't this how he finally learned about this new feature?

A one-to-one chat with a librarian friend (albeit not his liaison).

Okay, perhaps this seems obvious to most of you reading this, but still a good reminder of the value of a personal encounter.

As this is probably the last blog post from me for 2012, I would like to thank all of you for your support & interest in my blog. Here's to wishing you all a happy new year!


* Without going into too many details, I engage with students from various faculties who gather around a certain non-institutional community site. Over the years, this has proven invaluable for getting a feel for what the student body really thinks and what concerns them.

** For privacy reasons, I have combined a couple of chats with different students into this one account without changing the essence of the story.


Tuesday, December 25, 2012

Library Now! Google Now technology applied to libraries?

I am sure most of you have heard of Google Now, available on Android Jelly Bean. It is not just an intelligent personal assistant like Siri that answers voice queries; more impressively, there is a "predictive" component that can intelligently display content Google thinks you will need before you ask for it.



Here's the promise

"Google Now gets you just the right information at just the right time.
It tells you today’s weather before you start your day, how much traffic to expect before you leave for work, when the next train will arrive as you’re standing on the platform, or your favorite team's score while they’re playing. And the best part? All of this happens automatically. Cards appear throughout the day at the moment you need them."

I blogged last year about what Siri could possibly do with library-related functions. While interesting, most of those functions can pretty much be done now with sufficient coding skills and APIs. Google Now sets a much higher bar, and even now it is pretty limited in what it can do.

Still, one can dream. What if, 10 years down the road, Google Now-type predictive technology has matured and become the norm, perhaps married with learning analytics? How would libraries use it to serve users?

One thing to note is that serving our users or members the information they need, when they need it, is the essence of librarianship.

I've blogged in the past that user needs and demands are essentially predictable in advance. In an academic library, predictably, every year in August new students get lost in the library; near October, with exams a month away, they start wondering where to get past-year exam papers; and in late November/early December they start asking about library hours during vacation and the possibility of taking books home on long-term loan.

So one could anticipate such events in advance and post (or even preschedule) information on such common needs to be emailed, tweeted or posted on Facebook ahead of time.
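As a trivial sketch of how such prescheduling could work, here is a bit of Python; the months, messages and channels are made-up examples rather than our actual calendar or tools.

from datetime import date

# A minimal sketch of prescheduled seasonal announcements. The months,
# messages and channels below are invented examples for illustration.
SEASONAL_ANNOUNCEMENTS = [
    (8, "New to the library? Here's how to find your way around.", ["email", "facebook"]),
    (10, "Exams next month: past-year exam papers are available here.", ["email", "twitter"]),
    (11, "Vacation opening hours and long-term loan options for the break.", ["email", "facebook"]),
]

def announcements_due(today=None):
    """Return the (message, channels) pairs scheduled for the current month."""
    today = today or date.today()
    return [(msg, channels) for month, msg, channels in SEASONAL_ANNOUNCEMENTS
            if month == today.month]

for msg, channels in announcements_due():
    print(f"Post to {', '.join(channels)}: {msg}")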

The problem with this approach, of course, is that you can only target generalities: most undergraduates will need this in October and that in November, but not all.

Susan Gibbons & Nancy Fried Foster's work at the University of Rochester showed, among other things, that teaching fresh undergraduates information literacy during orientation week is not the best use of time, as the students are grappling with more immediate needs such as settling into dorms, and the prospect of writing term papers seems far off.

How much better if we could target each student specifically, exactly when they need it.

Of course that is what we have subject liaisons for, to specifically target different segments, but even subject liaisons can't go to everyone in person & their timing may be off unless they are very experienced.

Here's what I envision: an app, call it "Library Now", would be a courseware app similar to Blackboard Mobile Learn or my institution's own IVLE app. As such it would know which courses you are enrolled in, and assignment deadlines would be linked to your calendar.

Based on such deadlines it would intelligently display needed resources. For example, a few weeks before the first assignment is due it might alert students to the availability of a writing hub. If married with learning analytics, it would know which students need more help with writing and could even prompt the student to contact his specific subject liaison.
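To make the idea concrete, here is a minimal, hypothetical sketch of the kind of rule such a "Library Now" app might run against course deadlines; the course names, thresholds and the writing-risk flag (standing in for a learning analytics signal) are all assumptions on my part.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Assignment:
    course: str
    due: date
    is_first: bool = False  # first assignment of the course?

def library_now_cards(assignments, writing_risk=False,
                      liaison="your subject librarian", today=None):
    """Return short suggestion 'cards' based on upcoming assignment deadlines."""
    today = today or date.today()
    cards = []
    for a in assignments:
        days_left = (a.due - today).days
        if a.is_first and 0 <= days_left <= 21:
            cards.append(f"{a.course}: first assignment due in {days_left} days - "
                         "the Writing Hub can help with structure and citations.")
        if 0 <= days_left <= 7:
            cards.append(f"{a.course}: deadline in {days_left} days - "
                         "see the citation style guide and referencing videos.")
    if writing_risk:  # e.g. flagged by a learning analytics system
        cards.append(f"You may want to book a consultation with {liaison}.")
    return cards

demo = [Assignment("HIST101", date.today() + timedelta(days=14), is_first=True)]
for card in library_now_cards(demo, writing_risk=True):
    print("-", card)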

It could alert students to upcoming library sessions of interest, or push learning objects such as short videos crafted for each specific need, e.g. videos on citing references or more subject-specific resources.

The location-based aspect would detect that the user is near the library or book drops, display books that are due soon, and perhaps prompt him to return them.

Within or near the library, it might tell you that your subject liaison is currently in or at the desk, and that you might want to pop in to ask questions.

Depending on how it handles indoor location, it might point out books or resources of interest as you walk past (think of a book recommender like Huddersfield's, but more advanced).

Like Google, it would track your searches, including library-related ones in the web scale discovery system Summon, and, combined with data on your modules, give you more relevant results, perhaps even learning to anticipate and serve up relevant library guides, databases and FAQs.

To some extent this is already done, for example in MLibrary's Putting a Librarian's Face on Search.


"When you do a search on the University of Michigan Library's web site, you get not only results from the catalog, web site, online journal and database collections, and more, you also get a librarian who is a subject specialist related to your search term. While the matching is not perfect, it provides a human face on search results. So, for example, if you search for "Kant," in addition to books and databases, you also get the subject specialist librarians for humanities and philosophy." 





More recently, I stumbled upon some work that tries to map searches in EBSCO Discovery Service to call number ranges and then display the appropriate research guides.

And Summon recently launched a suite of services called Summon Suggestions, which allows librarians to enhance search results by creating smart tags that trigger database recommendations, or even "best bets" (some text plus a link) that appear at the top of search results.
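The call number idea can be sketched in a few lines: take the LC call numbers of the top catalogue results and map their class letters to a research guide. The class-to-guide table and sample call numbers below are purely illustrative, not the actual EDS or Summon implementation.

# Suggest research guides based on the LC classes of the top results.
# The class-to-guide mapping and the sample call numbers are made up.
CALL_CLASS_TO_GUIDE = {
    "QD": "Chemistry research guide",
    "HB": "Economics research guide",
    "B": "Philosophy research guide",
}

def guides_for_results(call_numbers):
    """Map each result's LC class letters to a guide, longest prefix first."""
    guides = []
    for cn in call_numbers:
        letters = "".join(ch for ch in cn.split()[0] if ch.isalpha()).upper()
        for prefix in (letters, letters[:1]):  # e.g. try "QD" before "Q"
            if prefix in CALL_CLASS_TO_GUIDE:
                guides.append(CALL_CLASS_TO_GUIDE[prefix])
                break
    return list(dict.fromkeys(guides))  # de-duplicate, keep order

print(guides_for_results(["QD453 .A85 2011", "QD96 .I5 S6", "HB172 .V37"]))
# ['Chemistry research guide', 'Economics research guide']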



An advanced system that learns using these and other methods would build up a profile of what you are generally interested in, hold it in reserve, and recommend the right resources when you most need them. It could even, when asked, "explain" why such recommendations were made.

It might even learn your studying habits and recommend where in the library or on campus you might like to study, depending on noise level, availability, etc.

The possibilities are endless and I only touched on supporting students and not researchers.

Okay, I've gone overboard with the possibilities; such a system would be at HAL level and might go Skynet :)

One little problem besides that: even if all this were technically possible, say, by 2020, many libraries would still hold back for fear of violating patron privacy.

A recent article, As Libraries Go Digital, Sharing of Data Is at Odds With Tradition of Privacy, spells out some of the concerns, from libraries tweeting recently returned books to recommendation systems.

Again, this is the age-old question our profession faces: how much privacy should we safeguard for our users, when many of them don't really care and pretty much give it away wholesale to far less trustworthy entities?

Should we handicap our ability to compete to the point that even anonymized data, collected at an aggregate level with safeguards and used for recommender systems, comes into question? Let alone the personalized, individualized data that a system like Google Now uses?





Friday, December 14, 2012

Some big picture thinking - my favourite "future of academic libraries" articles in 2012

It's the end of the year, and perhaps it's time to take stock of what went right and what to work on the next year.

Like every year before this, and perhaps all the way back to the dawn of librarianship, we librarians held conferences, symposiums, seminars, unconferences and library camps to discuss the future of academic libraries and/or librarians (I was even a panelist on one!), while visionary keynote speakers and library/publisher think tanks released dozens of scenarios, white papers, position papers, manifestos and top-trends lists trying to predict how the future will turn out or how we should be in the future.

Throw in the new wrinkles caused by the potential disruption of the rise of MOOCs and the promise of Open Access finally bearing fruit, and it seems it never gets old talking about the future.

Unlike 8 Articles about the future of libraries that made me think (my 5th most popular post ever), which mostly featured devil's advocate type posts postulating the death of libraries, these are more positive pieces trying to map out a viable strategy towards a positive future.

Here are some of my favorite talks, articles on the subject in 2012.


1. Reconfiguring Library Boundaries by Lorcan Dempsey

Lorcan Dempsey, Vice President and Chief Strategist at OCLC, is of course a very well-known thinker. He has a nice Wikipedia page that summarises his impact on the field; while I was aware he popularized the term "web scale", I wasn't aware he also coined the terms "amplified conference" and "discovery happens elsewhere", among others.

So what does such an influential thinker have to say about the future of the academic library? Watch the video below, entitled "Reconfiguring Library Boundaries", given in Feb 2012.




This video encapsulates a lot of the ideas he has developed on his blog over the years with regard to discovery and the concept of libraries operating at different levels, from institution to group to network/web scale (I admit to being confused about the difference, if any, between web scale and "network level").




At the risk of oversimplifying & misinterpreting his ideas: he (and OCLC) recognize that our researchers are starting to operate at web scale, or is it network level (remember, discovery happens elsewhere - Google, PubMed, Mendeley etc.), while our libraries are still mostly at institution scale.

This is because, as the environment moves from resource-scarce to resource-abundant and attention becomes scarce, researchers are shifting away from building their workflow around libraries and are operating directly at the network level via gateways like PubMed and Google Scholar and tools like Dropbox, Mendeley, etc.

So libraries should also move up to the higher levels, to aggregate both demand and supply (another interesting idea from him) and "make data work harder".

The video also gives a very nice framework about how libraries are a bundle of spaces, systems, collections & expertise, & how there are 3 main strategies:
  • Customer relationship management / Engagement
  • Product Innovation
  • Infrastructure 
He believes that libraries will increasingly focus on engagement & that, in particular, infrastructure will start becoming externalised (think of all the in-the-cloud developments with discovery systems and web scale management systems), due to economies of scale & the fact that most of what libraries do now is not very distinctive or unique anyway; we are all better off sharing the load or outsourcing it somehow and focusing our efforts on unique, value-added activities specific to our community.

There are many, many interesting ideas there, from:

  • the idea that libraries have traditionally done "outside-in" - bringing in licensed content to be discoverable by our users - but are now taking on "inside-out" roles: getting content generated by our institutional repositories, open access journals and other digital content to be discoverable by outside users
  • Managing down of print collections
  • The different types of externalization
  • If the library wants to be seen as expert, then its expertise has to be visible 
  • Putting our services and data into the researcher flow that is being increasingly disrupted by network scale innovations (Mendeley is one example)

Seriously, I am not doing justice to the breadth & depth of his ideas. More recently he released Thirteen Ways of Looking at Libraries, Discovery, and the Catalog: Scale, Workflow, Attention, a more detailed article that uses examples from an impressive array of diverse services.

I admit to a slight bias of sometimes shying away from reading his blog posts because, for whatever reason (inability to think at a high level of abstraction? use of jargon?), I have to work hard to absorb his ideas & concepts. But they are always worth the effort, and I always go away feeling smarter rather than thinking "that's obvious".



This was technically released at the end of 2011, but is still a great read so I included it here. It's pretty much in the same vein as #1, but perhaps in a more digestible form (less jargon) and with clearer, more concrete action plans.

It introduces the concept of the Four Horsemen of the Library Apocalypse:
  • Unsustainable Costs (Serials!) 
  • Viable alternative (Google!)
  • Declining usage 
  • New Patron demands. 
I believe Roy Tennant gave a keynote, "The Once and Future Academic Library", around the same idea.

I also particularly like a slide showing "Local Physical Distribution Models Displaced by Remote and Fully Digital Approaches":
  • Borders was destroyed by Amazon
  • Blockbuster was destroyed by Netflix
  • Tower Records was destroyed by iTunes
  • Libraries destroyed by Google?
I can think of others: Kodak destroyed by Flickr/Instagram, etc.

The slides are simple and elegant, and yet pull together, in my view, some of the best thinking on the trends facing academic libraries & the most promising strategies to manage the migration. I highly recommend them.




A lot of the ideas in #1 and #2 are arguably not new & have been slowly creeping up on us for about 5 years now.

Steven Bell, ACRL President 2012-2013 and originator of the blended librarian concept, addresses in this interview a newer disruptive force: the rise of MOOCs and their possible impact on academic libraries.


4. Moving towards an open access future: the role of academic libraries by Siân Harris/SAGE

So this year we celebrated the 10-year anniversary of the Budapest Open Access Initiative. I am going to be bold & stick my neck out and say that I have a gut feeling 2013 is going to be a turning point for Open Access, and academic libraries around the world will have to start paying attention, if they haven't already, as more researchers begin to ask about it.

I could be wrong of course, but just in case I am not, I have been reading up on Open Access and institutional repositories for a while.



5. Think Like a Start-Up by Brian Mathews

Brian Mathews, Associate Dean for Learning & Outreach at Virginia Tech's University Libraries, released an interesting white paper entitled Think Like a Start-Up.



Compared to the rest of the entries on this list, this one is more micro in scale, focused on encouraging entrepreneurship & innovation, but it is still one of the more interesting things I have read all year.


6. Others 

There are of course plenty of "future of academic library"/"scenario planning"/top trends type views out there. Here are some others that I liked

     
Common threads?

Of course I know this listing is limited by my interests and ignorance - for example, I know practically nothing about linked data (though I should probably read more by Karen Coyle), though some of the trends mentioned here, such as shifting the focus from institution scale to web scale, leveraging research gateways and making data work harder, pretty much imply linked data or something similar.

Makerspaces are another hot topic in 2012; while interesting, they are probably too micro to be included here, though they fit nicely into engagement- and space-related strategies.

Still, I think you can see a common ground emerge if you look at the material presented above.

I know there's a bit of irony in seeking common ground, when perhaps one of the themes is that libraries should specialize in something unique, distinctive and of value to their community, or perhaps bring/connect the local community to the world, and try to externalize everything else that is routine.

I am not saying there should be "one true academic library future", but it seems to me that, at least for certain types of academic libraries of a certain scale, including my own, the one lesson or principle that applies is this: "your library is not the sun around which researchers orbit" (apologies to K.G. Schneider and the "user is not broken" meme, but I think this applies not just to the OPAC but to all library services & resources).

That does not mean we should totally give up the discovery role & just focus on delivery, but I believe we should continually work to embed ourselves in the flow of our users, whether through supplying our holdings to services like Google Scholar, Mendeley, etc. (see 6 Library related services online that use your library holdings), while working to showcase our expertise & spaces to attract & engage our community.

Collection-wise, we should focus on building up our specialised collections, which are what make each library distinctive, and ensure they are preserved if not digitized, though I understand the pressure of catering to day-to-day needs (think red-spot books!) makes it hard to focus on the long term.

I guess that's why patron driven acquisition just makes a lot of sense for everything else, like books published by major publishers.

I just came across the fascinating idea of Wilkin profiles, which visualize a library's books in terms of how rare they are.

So a library with a left-leaning Wilkin profile would have a lot more unique holdings, held by only it or a few other libraries, while a right-leaning library would have fewer unique holdings and more of the common titles everyone has. Should we all become more left-leaning? (I see problems if we all go that way!)
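As a rough sketch of what computing such a profile might involve (assuming you can obtain, say from WorldCat, the number of libraries holding each of your titles), you could simply bucket your titles by that count; the buckets and counts below are invented.

from collections import Counter

# Bucket a library's titles by how many libraries hold each one. A profile
# that "leans left" has most titles in the rare buckets. Counts are made up.
BUCKETS = [("1", 1, 1), ("2-5", 2, 5), ("6-25", 6, 25),
           ("26-100", 26, 100), (">100", 101, float("inf"))]

def wilkin_profile(holdings_counts):
    """Return {bucket_label: number_of_titles} for this library's collection."""
    profile = Counter()
    for n in holdings_counts:
        for label, lo, hi in BUCKETS:
            if lo <= n <= hi:
                profile[label] += 1
                break
    return {label: profile[label] for label, _, _ in BUCKETS}

# e.g. for each title we hold, the number of libraries worldwide holding it
print(wilkin_profile([1, 1, 3, 40, 500, 2, 7, 1200]))
# {'1': 2, '2-5': 2, '6-25': 1, '26-100': 1, '>100': 2}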

Infrastructure-wise it makes a lot of sense to externalize, but there are many dangers to such an approach, as it means giving up a degree of control. Whether we dare to move all our holdings into the cloud without a local copy is perhaps the most obvious example. I guess it depends on the type of externalization.

Hope you enjoyed this roundup, please add in the comments other great insightful posts you have read this year.







Sunday, December 9, 2012

Playing devil's advocate. Why you shouldn't implement a web scale discovery service.

I have been studying, thinking and posting about web scale discovery since 2011, and my institution is currently days away from pushing it out as the default search.

In many ways, this has been one of the most technically challenging library projects I have been involved in so far due to its far-reaching effects, touching everything from IT and cataloguing to e-resource management and information literacy. However, there are times when I wonder: is all the time and effort we have spent implementing web scale discovery really worth it?

Or have I spent so much time and effort on it that, to avoid cognitive dissonance, I am totally blind to the problems? So I am going to play devil's advocate in this post and put up what I think are the strongest reasons for NOT implementing a web scale discovery service.

For balance, I am going to try to follow each one with a rebuttal.


1. Most discovery happens offsite, users are just doing known item searches on your library site, so you don't need a discovery service.

The often-quoted OCLC report found that 0% of users started their search from the library website. Of course, many do eventually come to the library site to search, but it isn't a stretch to think that by then they are looking not to discover new items, but rather to figure out a way to obtain the item they already discovered offsite.

Whether it be Google, Google Scholar, PubMed, PubGet, Google Books, reading lists, Amazon or Mendeley, the battle is already lost: our users look to these sources to find suitable, relevant items before coming back to our library sites to look for a copy.

At ILI2012, the University of Illinois at Urbana-Champaign stated that users of their Ezsearch (a very impressive advanced federated search system that is, for all intents and purposes, on par with Summon and services in its class) did known item searches for almost half of all searches (49.4%).

I haven't managed to find any other analysis of the percentage of known item searches for discovery systems, though I remember another talk I attended at ALA 2011 throwing around 30-40% (ours seems to be around 40%, but we haven't fully launched).

An interesting question I didn't have time to research is whether, within systems, there is an increasing trend towards known item searches as discovery shifts offsite.

BTW, this line of thought isn't original; in a recent presentation entitled "Thinking the Unthinkable: A Library without a Catalogue", the speaker argued that "university libraries are losing their roles in the discovery of scientific information, instead they should focus on delivery".





It's a fascinating, thought-provoking talk. The academic library at Utrecht University, which clearly has abundant resources (they developed their own federated search, Omega, back in 2001!), decided against implementing a new discovery tool, and not only that, they plan to support discovery offsite.

See also “Libraries need to admit that we suck at search and get over it”.

I must admit, I am not quite sure I correctly understand the plans for their webopac, whether they intend to retire it or not. I can see how one could rely on Google Scholar as a discovery tool given the existence of the Library Links programme, but what about books? (Or are they going to rely on WorldCat for books?)

In any case, they seem to have recognised the harsh reality that people no longer go to libraries for discovery, but rather use library search just to check if an item is available.

If that's the case, why do we need expensive web scale discovery systems? The speaker systematically ran through the objections to relying on external systems like Google Scholar which we don't control.

My view is, if users are just doing known item searches, all we need are webopacs; after all, if there's one thing webopacs are good at, it is known item searches (or at least those with enough citation data!).

Arguably, web scale discovery systems, by blending in newspaper articles, journal articles, etc., make it harder to find a known item. One of the things that surprised me most when I started studying and trying web scale discovery systems was reading librarians moaning about how known item searches were surprisingly difficult for Summon and its cousins.

I was surprised, wasn't this a solved problem?

Trying it myself, I noticed, for example, that searches for database names or journal names did not always surface the catalogue record as the first result in discovery systems (if, say, the query matched only part of the title). Books were also problematic if users decided to "help" the system by entering both title and author (book reviews mention the author name several times, hence the higher ranking? Just speculating.); Summon would often surface book reviews, or book reviews classed as journal articles, instead.

So you ended up with equal or worse results for 40%+ of searches.

Of course there are various ways around it, from the new database recommender in Summon and excluding book reviews by default, to bento-style boxes, and maybe tweaking the ranking algorithm to put more weight on title matches and on books, journals and databases (a crude sketch of that last idea follows).
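Here is a crude, hypothetical sketch of what such a re-rank might look like; the result fields, content types and weights are my own assumptions for illustration, and real systems would tune this inside the ranking engine rather than after the fact.

# Re-rank a page of discovery results for a known-item query: boost exact
# title matches and catalogue-level records, demote book reviews. The result
# dictionaries, content types and weights here are invented for illustration.
def rerank_for_known_items(query, results):
    q = query.lower().strip()

    def score(item):
        position, record = item
        s = -position  # start from the original ranking order
        title = record.get("title", "").lower()
        if title == q:
            s += 100   # exact title match
        elif q in title:
            s += 25    # query contained in the title
        if record.get("type") in {"Book", "Journal", "Database"}:
            s += 10    # prefer catalogue-level records
        if record.get("type") == "Book Review":
            s -= 50    # demote book reviews for known-item queries
        return s

    return [record for _, record in
            sorted(enumerate(results), key=score, reverse=True)]

sample = [
    {"title": "Review of 'Freakonomics'", "type": "Book Review"},
    {"title": "Freakonomics", "type": "Book"},
]
print(rerank_for_known_items("freakonomics", sample)[0]["title"])  # Freakonomics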

Though I am not expert enough to know if a system designed to support both topic searches and known item searches can match one solely or mostly designed for known item searching.

But even if this is licked, you have just maintained parity for the 40-50% of searches that would have worked perfectly in the webopac, so what's the point?



Rebuttal 1

First off, while it's true that web scale discovery currently has some issues with known item searches for books, databases and journals, to say that the results are equal or worse for 40-50% of searches misses the point that a large percentage of that 40-50% are known article title searches.

These searches would never have worked in webopacs if you directly entered the article title, so they are great time savers! Granted, such article title searches are not guaranteed to work and may fail if the article isn't indexed, but they work sufficiently often (roughly at least 90% of the time for most academic library collections) not to worry about it.

And conveniently left out is the other half of user searches: before web scale discovery, many of those would have been complicated searches that found nothing in the webopac.

Many of these searches now fare infinitely better because Summon and similar systems cover journal articles and often the full text of books.

With all due respect to the speaker from Utrecht University, saying that libraries should focus on delivery only is a very defeatist attitude to take. Summon and other web scale discovery systems may not be winning back all our users, but they are winning back enough of them to make it a fight.


2. Web scale discovery doesn't offer precise enough searching for advanced users, and for unskilled users too many results just confuse them anyway.

Let's face it: web scale discovery relevancy ranking isn't really up to snuff, at least when compared to Google.

In a fascinating study, a head-to-head test between Summon and Google Scholar was done for the first 25 results for some of the most common searches in one college's instance of Summon.

The results were stripped of identifying data, so one couldn't tell where they were from, and then given to a panel of librarians to judge. In this independent blind test, Google Scholar trounced Summon in everything from relevancy to currency AND reliability (how scholarly each result was)!

Ratings were given from 1 to 5 for each of the 3 factors.

You have to read the study yourself (unfortunately not free), but Summon outscored Google Scholar on relevancy for just two searches (overall, Google Scholar's mean was higher by 0.64). But I guess they are Google; they are masters of relevancy ranking, so it isn't surprising, right?

A greater surprise is that Summon also scored lower than Google Scholar on reliability, by a mean of 0.85 points. In case you are wondering, Summon had the exclude-newspapers option on. So much for fearing that Google Scholar may not have strict standards on what counts as scholarly.

How about currency? Google Scholar wins out again by 0.52.

For whatever reason, web scale discovery relevancy systems are as yet unable to properly rank the hundreds of millions of items in the index, at least not with the skill that Google Scholar does.

See also the already mentioned “Libraries need to admit that we suck at search and get over it”.

Librarians often moan about how web scale discovery systems tend to lack the more powerful search features found in traditional library databases.

Arguably, Google Scholar can get away without precision tools to slice and dice the results due to its powerful relevancy system, but library web scale discovery systems are hardly in the same league for relevancy ranking.

Compared to a more specialised database, web scale discovery suffers from a double whammy when it comes to getting precise, controlled result sets.

First off, such systems have to cater to the lowest common denominator, so they lack the more powerful precision search features found in specialised databases.

And to make matters worse, the lack of such tools hurts web scale discovery even more, because it has to work harder to differentiate between the same term being used across disciplines. E.g. "data migration" could refer to either a computer science term or a social science term.

That's the reason why even many (but not all) defenders of web scale discovery will admit that web scale discovery isn't for advanced users, who should search subject-specific databases instead.

So maybe it's true that web scale discovery is for less skilled users who don't need to do thorough, precise searches and just need an article or two. But if that's the case, why shouldn't they just use something like Academic Search Premier or JSTOR to get an article or two?

They don't need to do thorough, comprehensive searches anyway, so why are we asking them to search through a super-large index of results that may confuse them?

Either way, the conclusion is web scale discovery doesn't seem to suit the needs of either unskilled or skilled searchers!

New development

A very recent study possibly gives some support to the idea that beyond a certain point, having more results doesn't help as much.

At Johns Hopkins Libraries, they carried out a very interesting test, very similar to the Bing It On challenge and, more recently, the Million Short test: essentially, they show results from two discovery services side by side.



The services tested were

  • EBSCO Discovery Service (EDS)
  • EBSCOhost 'Traditional' API (set up to search 40 EBSCO databases)
  • Ex Libris Primo
  • Serials Solutions Summon 
  • Scopus (Elsevier) 
Through the magic of APIs, they pipe the results into a webpage with two sets of results side by side, and users are asked to express a preference or decide that they can't choose between the two.
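The mechanics of such a test are simple enough to sketch. Below is a hypothetical mock-up of the idea in Python: the two search functions are stand-ins (the actual study called each vendor's API), the displayed lists are unlabeled, and the sides are shuffled so users cannot learn which product is which.

import random

def search_service_a(query, n=5):
    # Stand-in for one discovery service's API call
    return [f"Result A{i} for '{query}'" for i in range(1, n + 1)]

def search_service_b(query, n=5):
    # Stand-in for another discovery service's API call
    return [f"Result B{i} for '{query}'" for i in range(1, n + 1)]

def blind_comparison(query):
    """Show two unlabeled result lists side by side and record a preference."""
    sides = [("A", search_service_a(query)), ("B", search_service_b(query))]
    random.shuffle(sides)  # so users can't learn that "left" is always one product
    (left_id, left_results), (right_id, right_results) = sides
    print(f"Query: {query}\n")
    for left_row, right_row in zip(left_results, right_results):
        print(f"{left_row:<40} | {right_row}")
    choice = input("\nPrefer left, right, or neither? ").strip().lower()
    return {"left": left_id, "right": right_id, "neither": None}.get(choice)

print("User preferred:", blind_comparison("climate change adaptation"))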

It's a well-written study that considers various factors (what is displayed seemed to be critical), but the upshot is that, with the exception of Scopus, which was much less preferred (statistical significance reached), none of the services was preferred over the others.

While Summon did the best overall in terms of raw wins, the results were not statistically significant. A somewhat bigger surprise is that the EBSCOhost set of databases, i.e. the EBSCOhost 'Traditional' API, came in 2nd & even bested the newer EBSCO Discovery Service head to head.

Of course none of these differences are statistically significant, but it does suggest that while EDS definitely has more material (in the test, EDS, Primo and Summon were set to include the whole index regardless of the library's holdings), beyond a certain point the extra coverage hardly makes a difference. Of course 40 EBSCOhost databases is still a lot of information, but one wonders if an even smaller set, say 10-20, would be sufficient to get a similar result.



Rebuttal 

First off, it's not true that Google Scholar will definitely trounce all discovery services. In Paths of Discovery: Comparing the Search Effectiveness of EBSCO Discovery Service, Summon, Google Scholar, and Conventional Library Resources, librarians independently judged articles selected by students from EDS to be far superior to those from Google Scholar and the rest.

Also, I think the argument above about not needing a huge index for beginners fails because of the following fallacy: that an unskilled user searching for a few articles can definitely find them if they use Academic Search Premier or an equivalent.

This misses the possibility that just because a user is unskilled does not mean he won't be searching for an obscure topic where the best chance of success is to search the broadest database. Sure, if he is searching for, say, "racial relations", pretty much every database will do; but try a more focused search like racial relations in country X, and you will quickly see the value of searching the broadest index.

In fact, an unskilled searcher who is just searching for 1 or 2 relevant articles will benefit a lot from searching a super broad web scale discovery system because he probably won't use the right terms to search, so the broadest index maximizes the chances of hitting on at least 1 or 2 results.

Similarly, while it is true that a researcher needs a way to do a controlled, precise search to be sure of comprehensive coverage, this assumes the search he is doing returns so many results that he needs these tools.

But wouldn't you say that many researchers are doing very specialised searches where covering the broadest index is important, as the problem they face is not too many results but none at all?

I have seen dedicated postgraduate students who, after years of digging across various sources, do a quick search in Summon and are stunned to find one or two utterly relevant articles surface that they had missed because they just happened not to search a source that covered them.

The Johns Hopkins Libraries study is interesting, but the authors say it best when they muse about the finding that practically all the services tested are about equal, and point out several possible explanations.

"One obvious question is whether we have a finding of no user preference, of our users, collectively, thinking all the products are about equal -- or if our findings are simply inconclusive."

They suspect larger sample sizes might not necessarily help with statistical significance; rather, users may simply not have used the systems enough (or with real enough examples) to really tell if there was a difference.

"If used over time in production, some products may very well satisfy users better than others -- but when asked to express a preference for a small handful of searches in the artificial context of the experiment, users may not have the capability to adequately judge which products may be more helpful in actual use. I think this is quite possible, especially if users were not using their own current real research questions to test."

More intriguingly, they speculate (with some support from the existing literature):

"Some users, especially beginner/undergraduate users, may simply be unconcerned with relevance of results, being satisfied by nearly any list of results. "

Or to put it another way, undergraduates satisfice: an article that is "just kinda what they want" is considered good enough, so beyond a certain point it doesn't matter.

This is followed by an interesting musing about whether, if they don't care, we librarians should care!

Conclusion

Unlike some, I am not certain that every academic library should rush out to implement a web scale discovery system regardless of finances.

But I find it hard to think it can be a bad idea. After all, library after library that has implemented Summon and other web scale discovery services has reported substantial increases in usage of electronic resources, and that is definitely what we are trying to do, reduce friction in accessing our resources, isn't it?

This blog post was partly, if not mostly, inspired by discussion among the community at the Library Society of the World (LSW).

BTW, if you want to keep up with articles, blog posts, videos, etc. on web scale discovery, do consider subscribing to the custom magazine I curate on Flipboard.


This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.