[Photograph of sparrows in a barn doorway]

It was Lorcan Dempsey who, I believe, coined the term ‘full library discovery’ in a blog post last year.  As a stage beyond ‘full collection discovery’, ‘full library discovery’ added in results drawn from LibGuides or library websites, alongside resource material from collections.  So, for example, a search for psychology might include psychology resources, as well as help materials for those psychology resources and contact details for the subject librarian who covers psychology.  Stanford and Michigan are two examples of that approach, combining lists of relevant resources with website results.

Princeton’s new All search feature offers a similar approach, discussed in detail on their FAQ.  This combines results from their Books+, Articles+, Databases, Library Website and Library Guides into a ‘bento box’ style results display.

[Screenshot: Princeton All search]

Princeton’s approach is similar to the search from North Carolina State University, who I think were about the first to come up with this style.

Although in most of these cases I suspect that the underlying systems are quite different, the approach is very similar.  It has the advantage of being a ‘loosely-coupled’ approach: your search results page is drawn together in a ‘federated’ fashion by pushing your search terms to several different systems, making use of their APIs, and then displaying the results in a dashboard-style layout.  A further advantage is that changes to any of the underlying systems can be accommodated relatively easily, while the display to your users stays consistent.
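
A minimal sketch of that loosely-coupled pattern, assuming each underlying system exposes a simple JSON search API (the endpoint URLs, parameter names and response shape below are invented for illustration, not any particular vendor’s API):

```python
from concurrent.futures import ThreadPoolExecutor
import requests

# Hypothetical endpoints, one per 'bento' box; placeholders only.
SOURCES = {
    "Books+": "https://library.example.edu/api/catalogue/search",
    "Articles+": "https://library.example.edu/api/articles/search",
    "Databases": "https://library.example.edu/api/databases/search",
    "Library Website": "https://library.example.edu/api/site/search",
    "Library Guides": "https://library.example.edu/api/guides/search",
}

def search_source(url, query, limit=3):
    """Query one underlying system; each box only needs its top few hits."""
    resp = requests.get(url, params={"q": query, "rows": limit}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("results", [])

def bento_search(query):
    """Push the same query to every source in parallel and keep the boxes
    separate, so a change to (or failure of) one system doesn't break the page."""
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        futures = {name: pool.submit(search_source, url, query)
                   for name, url in SOURCES.items()}
        boxes = {}
        for name, future in futures.items():
            try:
                boxes[name] = future.result()
            except Exception:
                boxes[name] = []  # a failing source just renders an empty box
    return boxes

# bento_search("psychology") would return one list of results per box.
```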

For me the disadvantages of this are the lack of any overarching relevancy ranking across the material, and that it perpetuates, to an extent, the ‘silo-ing’ of content (Books, Articles, Databases etc.), driven largely by the underlying silos of systems that we rely on to manage that content.  I’ve never been entirely convinced that users understand those distinctions, or what a ‘database’ might be.  But the approach is probably as good as we can get until we have truly unified resource management and more control over relevancy ranking.

Going beyond ‘full library discovery’
But ‘full library discovery’ is still very much a ‘passive’ search tool, and by that I mean that it isn’t personalised or ‘active’.  At some stage, to use those resources, a student will have to log in to the system, and that opens up an important question for me: once you know who the user is, how far should you go to provide a personalised search experience?  You know who they are, so you could provide recommendations based on what other students studying their course have looked at (or borrowed); you might even stray into ‘learning analytics’ territory and know which resources the highest-achieving students looked at.
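
As a toy sketch of the first of those ideas, recommending items that other students on the same course have borrowed; the loan records and identifiers here are invented for illustration, not drawn from any real circulation system:

```python
from collections import Counter

def recommend_for_course(loans, course, user, top_n=5):
    """loans: iterable of (user_id, course_code, item_id) borrowing records.
    Returns the items most borrowed by other students on the same course
    that this user hasn't already borrowed."""
    already_seen = {item for u, _, item in loans if u == user}
    counts = Counter(item for u, c, item in loans
                     if c == course and u != user and item not in already_seen)
    return [item for item, _ in counts.most_common(top_n)]

loans = [
    ("alice", "PSY101", "intro-to-psychology"),
    ("bob",   "PSY101", "intro-to-psychology"),
    ("bob",   "PSY101", "research-methods"),
    ("carol", "PSY101", "research-methods"),
]
print(recommend_for_course(loans, "PSY101", "alice"))  # ['research-methods']
```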

You might know what resources are on the reading list for the course that student is studying – so do you search those resources first and offer them up, as they might be the most relevant?  You might even know what stage a student has got to in their studies, what assignment they have to do, and what resources they need to be looking at.  Do you ‘push’ those to the student?
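
One way the ‘search those resources first’ idea might look in practice is a simple re-ranking step over whatever the discovery layer returns; the boost value and result fields here are assumptions for the sake of the sketch:

```python
READING_LIST_BOOST = 10.0  # arbitrary bonus for items on the module reading list

def rerank(results, reading_list_ids):
    """results: list of dicts with an 'id' and a base relevance 'score'."""
    def boosted(r):
        bonus = READING_LIST_BOOST if r["id"] in reading_list_ids else 0.0
        return r["score"] + bonus
    return sorted(results, key=boosted, reverse=True)

results = [
    {"id": "article-42", "score": 7.1},
    {"id": "textbook-3", "score": 4.8},  # on the student's reading list
]
print(rerank(results, reading_list_ids={"textbook-3"}))
# textbook-3 now comes first, despite its lower base score
```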

How far do you go in assembling a profile of what might be ‘recommended’ for a course, module or assignment, what other students in the cohort might be looking at, or looked at the last time the course ran?  Do you look at students’ previous search behaviour?  How much of this might you do to build, and then search, some form of ‘knowledge base’, with the aim of surfacing the material that is likely to be of most relevance to a student?  A search for psychology in NCSU’s Search All box gives you the top three articles out of 2,543,911 articles in Summon, and likely behaviour is not to look much beyond the first page of results.  So should we be making sure that those three are likely to be the most relevant ones?

But then there’s serendipity: finding the things that you haven’t looked for or read before, because they are new or different.  One of the issues with recommendations is the tendency for them to be circular – ‘what gets recommended gets read’, to corrupt the performance indicator mantra.  So how far do you go?  ‘Mind-reading search’, anyone?