If you feel the same, here is a mix of links I came across recently on the topic that might be of interest.
My post 8 surprising things I learnt about Google Scholar raced into the top 20 all-time most-read posts on this blog in just 3 weeks, showing intense interest in this subject.
The Number of Scholarly Documents on the Public Web is a fascinating paper that attempts to estimate the number of scholarly documents on the public web using the capture/recapture method; in particular, it gives you a figure for the number of papers in Google Scholar.
This is quite an achievement, since Google refuses to give out this information.
It took me a while to wrap my head around the idea, but essentially the paper:
- It defines the number of scholarly documents on the web as the sum of the papers in Google Scholar (GS) and Microsoft Academic Search (MAS)
- It takes the stated number of papers in MAS to be a bit below 50 million.
- It calculates the amount of overlap in papers found in both GS and MAS. This overlap needs to be calculated via sampling of course.
- The overlap is calculated using papers that cite 150 selected papers.
- Using the Lincoln–Petersen method, the overlap found, and the given value of about 50 million papers in MAS, one can estimate the number of papers in Google Scholar and hence the total number of papers on the public web. (You may have to take some time to understand this last step; it took me a while for sure.)
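The Lincoln–Petersen arithmetic itself is a one-liner, and a toy sketch may help with that last step. The sample counts below are invented purely for illustration (the paper's actual sampling via citing papers is far more careful); only the roughly 50 million MAS figure comes from the discussion above.

```python
def lincoln_petersen(marked_population, sample_size, marked_in_sample):
    """Capture/recapture estimate of total population size.

    marked_population: size of the first "capture" (here, ~50M papers in MAS)
    sample_size: number of papers drawn in the second "capture" (via GS)
    marked_in_sample: how many of the sampled papers also appear in MAS
    """
    return marked_population * sample_size / marked_in_sample

# Hypothetical sample: 2,000 papers found via Google Scholar,
# of which 880 also turn up in Microsoft Academic Search.
mas_size = 50_000_000
total = lincoln_petersen(mas_size, 2000, 880)
print(f"~{total / 1e6:.0f} million scholarly documents on the public web")
```

With these made-up overlap numbers the estimate happens to land near the paper's ~114 million figure; the intuition is that the smaller the overlap between the two sources, the larger the total population must be.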
For more, see also How many academic documents are visible and freely available on the Web?, which summarises the paper and assesses the strengths and weaknesses of its methodology.
- Google Scholar has an estimated 99.3 million English-language papers, and in total there are about 114 million papers on the web (where the web is defined as Google Scholar + MAS)
- Roughly 24% of papers are free online
- PubMed - 20-30 million - the go-to source for medicine and the life sciences.
- Scopus - 53 million - mostly articles/conference proceedings, but now including some books and book chapters. This is one of the biggest traditional library A&I databases; its main competitor, Web of Science, is roughly at the same level but with more historical data and fewer titles indexed.
- BASE - 62 million - drawn from open access institutional repositories. Mostly but not 100% open access items, and may include non-article items.
- CrossRef Metadata Search - 67 million - indexed DOIs; may include books or book chapters.
Are there indexes comparable to Google Scholar's roughly 100 million? Basically, the library web-scale discovery services are the only ones at that level.
- Summon - 108 million - with the scholarly material facet on + "Add beyond library collection" + authenticated = including restricted A&I records from Scopus, Web of Science and more. (Your instance of Summon might have more or fewer, depending on the A&I databases subscribed to and the size of your catalogue and institutional repositories.)
- WorldCat - 2.1 billion holdings, of which 148 million are peer-reviewed and 203 million are articles [as of Nov 2013]
I think the fact that web-scale discovery services are producing results on the same scale (>100 million) suggests that the estimated Google Scholar figure is in the right ballpark.
Boolean versus ranked retrieval - clarified thoughts
My last blog post, Why Nested Boolean search statements may not work as well as they did, was pretty popular, but what I didn't realise was that I was implicitly saying that relevance ranking of documents retrieved using Boolean operators does not generally work well.
This was pointed out by Jonas
@f_renaville @aarontay @SaraDecoster @ULgLibrary It's two competing paradigms: boolean set retrieval vs ranked retrieval. Can we go back?
— Jonas Fransson (@jotifr) July 16, 2014
I tweeted back asking why we couldn't have good ranked retrieval on documents retrieved using Boolean operators, and he replied that he thinks it comes down to two different mindsets: one should either "trust relevance or created limited sets."
On the other hand, Dave Pattern of Huddersfield reminded me that Summon's relevance ranking is based on the open source Lucene software, with some amount of tweaking. You can find some details, but essentially Lucene is designed to combine the Boolean model with the vector space model etc., i.e. it is designed to do, or at least can do, Boolean + ranked retrieval.
After reading through some documentation and the excellent Boolean versus ranked querying for biomedical systematic reviews, I realized my thinking on this topic was somewhat unclear.
As a librarian, I had always assumed it made perfect sense to (1) pull out possibly relevant articles using Boolean operators, then (2) rank them using various techniques, from classic tf-idf factors to more modern ones like link popularity.
I knew, of course, that there were two paradigms: classic Boolean set retrieval assumes every result is "relevant" and does not bother with ranking beyond sorting by date etc. But it still seemed odd to me not to at least try to add ranking. What's the harm, right?
The flip side was: what is ranked retrieval by itself? If one entered SINGAPORE HISTORICAL BUILDINGS ARCHITECTURE, wouldn't it still be ranking all documents that had all 4 terms (maybe with stemming)? Or wasn't that really still Boolean with ranking?
The key point I was missing, which now seems obvious, is that in ranked retrieval paradigms not every search term in the query has to be matched.
I know those knowledgeable in information retrieval might think this is obvious and that I am dense for not realizing it. I guess I did know this, except that as a librarian I am so trapped in Boolean thinking that I assumed an implicit AND was the rule.
In fact, we like to talk about how Google and some web search engines do a "soft AND", and kick up a fuss when they sometimes drop one or more search terms. But in ranked retrieval that's exactly what you do: you throw in a "bag of words" (it could be a whole paragraph of words), and the ranking algorithm tries to do the best it can, even though the documents it pulls up may not have all the words in the query.
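To make the contrast concrete, here is a toy sketch in Python with made-up documents and plain tf-idf scoring (real engines like Lucene are far more sophisticated): Boolean retrieval with an implicit AND returns only documents matching every term, while bag-of-words ranking scores every document on whatever terms it does match.

```python
import math
from collections import Counter

docs = {
    "d1": "singapore historical buildings architecture guide",
    "d2": "modern singapore architecture and skyline",   # lacks 'historical', 'buildings'
    "d3": "historical buildings of europe",              # lacks 'singapore', 'architecture'
}
query = "singapore historical buildings architecture".split()
tokenized = {d: Counter(text.split()) for d, text in docs.items()}

# Boolean set retrieval: implicit AND -- only documents containing
# every query term survive, with no scoring beyond set membership.
boolean_hits = [d for d, tf in tokenized.items() if all(tf[t] for t in query)]

# Ranked "bag of words" retrieval: every document is scored by tf-idf
# over whichever query terms it happens to contain, so partial matches
# still surface (lower down the list) instead of vanishing.
n = len(docs)
df = Counter(t for tf in tokenized.values() for t in set(tf))

def score(tf):
    return sum(tf[t] * math.log((n + 1) / (df[t] + 1)) for t in query if tf[t])

ranked = sorted(tokenized, key=lambda d: -score(tokenized[d]))

print(boolean_hits)  # only d1 contains all four terms
print(ranked)        # d1 ranks first, but d2 and d3 still appear
```

The Boolean pass silently discards d2 and d3; the ranked pass keeps them, just below the full match, which is exactly the "soft AND" behaviour that surprises Boolean-trained searchers.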
Boolean versus ranked querying for biomedical systematic reviews is a particularly interesting paper, showing how different search approaches fare in terms of retrieving clinical studies for systematic reviews, ranging from straight Boolean, to ranked retrieval techniques that involve throwing in titles and abstracts, to hybrid techniques that combine Boolean with ranked retrieval.
It's an amazing paper, with different metrics and a good explanation of systematic reviews if you are unfamiliar with them. Particularly interesting is the comparison of Boolean and Lucene results, which I think gives you a hint of how Summon might fare.
Large search indexes like Google Scholar and discovery services flatten knowledge, but is that a good thing?
Library Top Trends - Personally tuned discovery layers
Ken Varnum, at the recently concluded LITA Top Technology Trends session, certainly thinks that what is missing in current library discovery services is the ability for librarians to provide personally tuned discovery layers for local use.
He certainly thinks there is value in librarians slicing the collections into customized streams of knowledge to suit local conditions. You can jump to his section on this trend here. Roger Schonfeld's section on anticipatory discovery for current awareness of new publications is interesting as well.
It sounds like a great idea, since Summon and EBSCO Discovery Service currently provide only hardcoded discipline sets, and I can imagine eventually being able to create subject sets based on collections at the database and/or journal title level (shades of the old federated search days, or of librarians creating Google custom search engines, e.g. one covering NGO sites, or Jurn for open access in the humanities).
At the even more granular level, I suppose one could also pull from reading lists etc.
Unlike Ken, though, I am not 100% convinced it would take just "a little bit of work" to make this worthwhile, or at least better than the hardcoded discipline sets.
NISO Publishes Recommended Practice on Promoting Transparency in Library Discovery Services
NISO RP-19-2014, Open Discovery Initiative: Promoting Transparency in Discovery [PDF] was just published.
Somewhat related is the older NFAIS Recommended Practices on Discovery Services [PDF].
I've gone through it, as well as EBSCO's press release "EBSCO supports recommendations of ODI", and I am still digesting the implications, but clearly there is some disagreement about the handling of A&I resources (not that shocking).
It is a duplicate of the Mendeley group "Libraries & [Web-Scale] Discovery Tools".
Ebsco Discovery Layer related news
Ebsco has launched a blog, Discovery Pulse, with many interesting posts. Some tidbits:
Summon integrates Flow research management tool
It was announced that in July, Summon will integrate with ProQuest Flow, their new cloud-based reference management tool.
I have very little information about this and about how overt the integration will be. But given that Mendeley was acquired by Elsevier and Papers by Springer, it's no wonder that ProQuest wants to get into the game as well.
It's all about trying to get into the researcher's workflow, and as "discovery happens elsewhere" increasingly, it would be smart to focus on reference management, an area the likes of Google currently seem to be ignoring (though moves like Scholar Library, where one can add citations found in Google Scholar to one's own personal library, may suggest otherwise).
Mendeley for certain has shown that reference management is a very powerful place to start to get a digital foothold.
While it's still early days, Flow currently has pretty much the standard features one sees in most modern reference managers, e.g. free storage up to 2 GB, support for Citation Style Language (CSL), collaboration capabilities, etc. I don't see any distinguishing features or unique angles yet.
Here's a comparison of storage space across major competitors such as Mendeley.
The webinar I attended (sorry, I don't have a link to the recording) suggests ProQuest has big plans for Flow beyond reference management. It will aim to support the whole research cycle, and I think this includes support as a staging ground for publication (submission to PQDT??), as well as support for pre-publication works (posting to institutional or subject repositories?).
It will be interesting to see if ProQuest will try to leverage its other assets, such as Summon, to support Flow. E.g. would ProQuest tie recommender services drawn from Summon usage into it?
Currently you can turn off Flow in Summon without much ill effect, and it seems some libraries have done so because it takes time to evaluate the tool and prepare staff to support it; but it remains to be seen whether, in the long run, Flow will have too many features and too much value to be turned off.
BTW, if you want to keep up with articles, blog posts, videos etc. on web-scale discovery, do consider subscribing to my custom magazine curated by me on Flipboard (currently over 1,200 readers) or looking at my bibliography of web-scale discovery services.