
The end of 2015 and the start of 2016 seem to have delivered a number of interesting reports and presentations relevant to the library technology sphere.  Ken Chad’s latest paper ‘Rethinking the Library Services Platform‘ picks up on the lack of interoperability between library systems, as does the new BiblioCommons report on the public library sector, ‘Essential Digital Infrastructure for Public Libraries in England‘, which comments that “In retail, digital platforms with modular design have enabled quickly-evolving omnichannel user experiences. In libraries, however, the reliance on monolithic, locally-installed library IT has deadened innovation”.

As ‘Rethinking the Library Services Platform‘ notes, in many ways the term ‘platform’ doesn’t really match the reality of the current generation of library systems.  They aren’t a platform in the same way as an operating system such as Windows or Android: they don’t operate in a way that lets third parties build applications to run on top of them.  Yes, they offer integration with financial, student and reference management systems, but essentially they are the traditional library management system reimagined for the cloud.  Many of the changes are a consequence of what becomes possible with a cloud-based solution, so their characteristic features are shared knowledge bases and multi-tenanted applications shared by many customers, as opposed to local databases and locally installed applications.  The approach from the dwindling number of suppliers is to try to build as many products as possible to meet library needs, sometimes by developing those products in-house (e.g. Leganto) and sometimes by acquiring companies with products that can be brought into the supplier’s eco-system.  The acquisition model is exactly the same as that practised by both traditional and new technology companies as a way of building their reach.  I’m starting to view the ‘platform’ as much more in line with the approach that a company like Google takes: a broad range of products aiming to secure customer loyalty to one ecosystem rather than another.  So it may not be so surprising that technology innovation, which to my mind is largely driven by vendors delivering to what they see as library needs and shaped by what they see as an opportunity, isn’t delivering the sort of platform the term suggests.  As Ken notes, Jisc’s LMS Change work discussed back in 2012 the sort of loosely-coupled, component-based library systems approach that would give libraries the ability to integrate different elements for the best fit to their needs from a range of options.  But in my view the options have very much narrowed since 2012/13.

The BiblioCommons report I find particularly interesting as it includes an assessment of how the format silos between print and electronic lead to a poor experience for users, in this case how ebook access simply doesn’t integrate into OPACs: applications such as OverDrive sit separately from the OPAC, and the Buckinghamshire library service’s ebooks platform and its library catalogue are typical of this.  Few if any public libraries will have invested in the class of discovery systems now common in Higher Education (and essentially being proposed in this report), but even with discovery systems the integration of ebooks isn’t as seamless as we’d want, with users ending up in a variety of different platforms, each with its own interface and restrictions on what can be done with the ebook.  In some ways, though, the public library ebook offer, which does integrate with the consumer ebook world of Kindle ebooks, is better than the HE world of ebooks, even if the integration through discovery platforms in HE is better.  What did intrigue me about the BiblioCommons proposal is the plan to build some form of middleware system using ‘shared data standards and APIs’, and that leads me to wonder whether this can be part of the impetus for changing the way that library technology interoperates.  The plan includes, in section 10.3, the proposal to ‘deliver middleware, aggregation services and an initial complement of modular applications as a foundation for the ecosystem, to provide a viable pathway from the status quo towards open alternatives‘, so maybe this might start to make that sort of component-based platform and eco-system a reality.

Discovery is the challenge that Oxford’s ‘Resource Discovery @ The University of Oxford‘ report is tackling.  The report, by the consultancy Athenaeum 21, looks at discovery from the perspective of a world-leading research institution with large collections of digital content, and at connecting not just resources but researchers, using visualisation tools for research networks and advanced search technologies such as Elasticsearch.  The recommendations include activities described as ‘Mapping the landscape of things’, ‘Mapping the landscape of people’ and ‘Supporting researchers’ established practices’.  In some ways the problems being described echo the challenge faced in public libraries of finding better ways to connect users with content, but on a different scale and taking in other cultural sector institutions such as museums.

I also noticed a presentation from Keith Webster of Carnegie Mellon University, ‘Leading the library of the future: W(h)ither technical services?’.  The slide deck takes you through a great summary of where academic libraries are now and the challenges they face with open access, pressure on library budgets and changes in scholarly practice.  A wide-ranging presentation, it covers the changes that led to the demise of chains like Borders and Tower Records and sets the library in the context of changing models of media consumption.  Of particular interest to me were the later slides about areas for development that, like the other reports, had improving discovery as part of the challenge.  The slides clearly articulate the need for innovation as an essential element of work in libraries (measured, for example, as a percentage of time spent compared with routine activities) and also the value of metrics around impact, something of particular interest in our current library data project.

Four different reports, across different types of libraries and cultural institutions, but all of which seem to me to be grappling with one issue: how do libraries reinvent themselves to maintain a role in the lives of their users when their traditional role is being eroded or when other challengers are out-competing libraries – whether by improving discovery, by changing to stay relevant, or by doing something different that will be valued by users.

In the early usability tests we ran for the discovery system we implemented earlier in the year, one of the aspects we looked at was the search facets.  Included amongst the facets is a feature to let users limit their search by a date range.  That sounds reasonably straightforward: filter your results by the publication date of the resource, narrowing them down by putting in a range of dates.  But one thing that emerged during the testing is that there’s a big assumption underlying this concept.  One user tried to use the date range to restrict results to journals for the current year and was a little baffled why the search system didn’t work as they expected.  Their expectation was that by putting in 2015 it would show them journals in that subject where we had issues for the current year.  But the system didn’t know that continuing journals, which have an open-ended date range, were available for 2015, because the metadata didn’t include the current year, just a start date for the subscription period.  So the system didn’t ‘know’ that the journal was available for the current year.  That exposed for me the gulf that exists between user and library understanding, and how our metadata and systems don’t seem to match user expectations.
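To make that concrete, here is a minimal sketch (with invented field names, not how any particular discovery system stores its holdings) of how a naive year filter misses an open-ended subscription, and how treating a missing end year as ‘still current’ behaves differently:

```python
from datetime import date

# Invented holdings records: the open-ended subscription has no end year recorded.
holdings = [
    {"title": "Journal of Psychology", "start_year": 1998, "end_year": 2010},
    {"title": "Journal of Learning",   "start_year": 2005, "end_year": None},  # still subscribed
]

def naive_in_year(record, year):
    # Mirrors the behaviour the student hit: an open-ended range never matches.
    return record["end_year"] is not None and record["start_year"] <= year <= record["end_year"]

def holdings_aware_in_year(record, year):
    # Treat a missing end year as "continues to the present".
    end = record["end_year"] or date.today().year
    return record["start_year"] <= year <= end

for r in holdings:
    print(r["title"], naive_in_year(r, 2015), holdings_aware_in_year(r, 2015))
```

That usability testing session came back to mind when reading the following blog post about linked data.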

I would really like my software to tell the user if we have this specific article in a bound print volume of the Journal of Doing Things, exactly which of our location(s) that bound volume is located at, and if it’s currently checked out (from the limited collections, such as off-site storage, we allow bound journal checkout).

My software can’t answer this question, because our records are insufficient. Why? Not all of our bound volumes are recorded at all, because when we transitioned to a new ILS over a decade ago, bound volume item records somehow didn’t make it. Even for bound volumes we have — or for summary of holdings information on bib/copy records — the holdings information (what volumes/issues are contained) are entered in one big string by human catalogers. This results in output that is understandable to a human reading it (at least one who can figure out what “v.251(1984:Jan./June)-v.255:no.8(1986)”  means). But while the information is theoretically input according to cataloging standards — changes in practice over the years, varying practice between libraries, human variation and error, lack of validation from the ILS to enforce the standards, and lack of clear guidance from standards in some areas, mean that the information is not recorded in a way that software can clearly and unambiguously understand it.  From https://bibwild.wordpress.com/2015/11/23/linked-data-caution/ the Bibliographic Wilderness blog

Processes and descriptions that worked for library catalogues and librarians – in this case the holdings statement v.251(1984:Jan./June)-v.255:no.8(1986) – need translating before a non-librarian, or a computer, can understand what they mean.
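As a rough illustration of what that translation involves, here is a deliberately narrow sketch that copes only with this one pattern – real holdings statements vary far more than this, which is rather the point:

```python
import re

# Matches strings of the shape "v.251(1984:Jan./June)-v.255:no.8(1986)".
# Deliberately narrow: real-world holdings statements vary in ways this
# pattern will not cope with, which is exactly why software struggles.
HOLDINGS = re.compile(
    r"v\.(?P<start_vol>\d+)\((?P<start_year>\d{4})[^)]*\)"
    r"-v\.(?P<end_vol>\d+)(?::no\.(?P<end_issue>\d+))?\((?P<end_year>\d{4})\)"
)

def parse_holdings(statement):
    m = HOLDINGS.match(statement)
    if not m:
        return None  # human-readable, but not machine-interpretable
    return {k: int(v) for k, v in m.groupdict().items() if v is not None}

print(parse_holdings("v.251(1984:Jan./June)-v.255:no.8(1986)"))
# {'start_vol': 251, 'start_year': 1984, 'end_vol': 255, 'end_issue': 8, 'end_year': 1986}
```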

It’s a good and interesting blog post and raises some important questions about why, despite the seemingly large number of identifiers in use in the library world (or maybe because of it), it is so difficult to pull together metadata and descriptions of material and consolidate versions together.  It’s a problem that surfaces across a range of work we do: in discovery systems, where we end up trying to normalise data from different sources to reduce what look to users like duplicate entries; and in work with usage data, where consolidating the usage of a particular article or journal becomes impossible when versions of it are available from different providers, from institutional repositories, or at different URLs.
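To make the consolidation problem concrete, here is a small sketch with invented usage rows: where versions of an article share an identifier such as a DOI the usage can be rolled up; where they don’t, there is nothing reliable to merge on:

```python
from collections import defaultdict

# Invented usage rows: the "same" article appears under different providers and URLs.
usage_rows = [
    {"doi": "10.1000/xyz123", "platform": "Publisher A", "downloads": 40},
    {"doi": "10.1000/xyz123", "platform": "Aggregator B", "downloads": 15},
    {"doi": None, "platform": "Institutional repository", "downloads": 7},  # no DOI recorded
]

consolidated = defaultdict(int)
unmatched = []

for row in usage_rows:
    if row["doi"]:
        consolidated[row["doi"]] += row["downloads"]
    else:
        # Without a shared identifier there is nothing reliable to merge on.
        unmatched.append(row)

print(dict(consolidated))                       # {'10.1000/xyz123': 55}
print(len(unmatched), "rows could not be consolidated")
```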

We’ve been running Primo as our new Library Search discovery system since the end of April so it’s been ‘live’ for just over four months.  Although it’s been a quieter time of year over the summer I thought it would be interesting to start to see what the analytics are saying about how Library Search is being used.

Internal click-throughs
Some analytics are provided by the supplier in the form of click-through statistics, and there are some interesting figures in them.  The majority of searches are ‘Basic searches’, some 85%; only about 11% of searches use Advanced search.  Advanced search isn’t offered against the Library Search box embedded in the home page of the library website, but it is offered next to the search box on the results page and on any subsequent search.  Advanced search use is slightly lower than I might have expected, as it was fairly frequently mentioned as being used regularly on our previous search tool.

About 17% of searches lead to users refining their search using the facets.  Refining the search using facets is something we are encouraging users to do, so that’s a figure we might want to see going up.  Interestingly only 13% navigated to the next page in a set of search results using the forward arrow, suggesting that users overwhelmingly expect to see what they want on the first page of results. (I’ve a slight suspicion about this figure as the interface presents links to pages 2-5 as well as the arrow – which goes to pages 6 onwards –  and I wonder if pages 2-5 are taken into account in the click-through figure).

Very few searches (0.5%) led users to use the bX recommendations, despite their prominent place on the page.  The ‘Did you mean’ prompt also seemed to have been used in only 1% of searches.  The bookshelf feature ‘add to e-shelf’ is used in about 2% of searches.

Web analytics
Looking at web analytics shows that Chrome is the most popular browser, followed by Internet Explorer, Safari and Firefox.

75% of traffic comes from Windows computers, with 15% from Macintoshes.  Tablet traffic is similar to what we see on our main library website, running at about 6.6%, but mobile traffic is a bit lower at just under 4%.

Overall impressions
Devices using Library Search seem pretty much in line with traffic to other library websites.  There’s less mobile phone use, but possibly that is because Primo isn’t particularly well-optimised for mobile devices; it’s also something to test with users – whether they are all that interested in searching library discovery systems on mobile phones.

I’m not so surprised that basic search is used much more than advanced search.  It matches the expectation from the student research of a ‘Google-like’ simple search box.  The data seems to suggest that users expect to find relevant results on page one and not go much further – again something to test with users: are they getting what they want?  Perhaps I’m not too surprised that the ‘recommender’ suggestions are not being used, but it implies that having them at the top of the page might be taking up important space that could be used for something more useful to users.  So there are some interesting pointers about things to follow up in research and testing with users.


So we’re slowly emerging from our recent LMS project, with a bit of time to stop and reflect – partly at least to get the project documentation, lessons learned and suchlike written up and the project closed down.  We’ve moved from Voyager, SFX and EBSCO Discovery across to Alma and Primo.  We went from a project kick-off meeting towards the end of September 2014 to being live on Alma at the end of January 2015 and live on Primo at the end of April.

So it’s time for a few reflections on this implementation.  I worked out the other day that this is the fifth LMS procurement/implementation process I’ve been involved with, in a mix of different and similar roles each time.  For this one I started as part of the project team but ended up leading the implementation stage.

Reflection One
Tidy up your data before you start your implementation – particularly your bibliographic data, but other data too if you can.  You might not be able to do so if you are on an older system, as you might not have the tools to sort out some of the issues.  But the less rubbish you take over to your nice new system, the less sorting out you’ve got to do once you’re there.  And when you are testing your initial conversion, too much rubbish makes it harder to see the wood for the trees – in other words, to work out which problems you need to fix by changing the way the data converts and which are just a consequence of poor data.  With bibliographic data the game has changed: you are now trying to match your data against a massive bibliographic knowledge base.
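As one small example of the kind of tidy-up that pays off (a sketch of the idea, not a migration recipe), normalising match points such as ISBNs before you load makes it far more likely that records will match cleanly against that knowledge base:

```python
import re

def normalise_isbn(raw):
    """Reduce a messy ISBN field to a bare 10- or 13-digit string, or None.

    ISBN fields often carry qualifiers ("(pbk.)"), hyphens or stray text;
    the cleaner the match point, the better the record matches downstream.
    """
    if not raw:
        return None
    digits = re.sub(r"[^0-9Xx]", "", raw)
    return digits.upper() if len(digits) in (10, 13) else None

for value in ["978-0-19-852663-6 (pbk.)", "0198526636", "no ISBN"]:
    print(repr(value), "->", normalise_isbn(value))
```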

Reflection Two
It might be ideal to plan to go live with both LMS and Discovery at the same time, but it’s hard to do.  The two streams often need the same technical resources at the same time, and timescales are tight.  We decided that we needed to give users more notice of the changes to the Discovery system, and to make sure there was a changeover period by running in parallel.

Reflection Three
You can move quickly.  We took about four months from the startup meeting to being live on Alma, but that means a very compressed schedule.  Suppliers have a well-rehearsed project plan, but it’s deliberately a generic, tried-and-tested approach with only some flexibility.  You have to be prepared to be flexible yourself and push things through as quickly as possible.  There isn’t much time for lots of consultation about decisions, which leads to…

Reflection Four
As much as possible, get your decisions about changes in policies and new approaches made before you start.  Or at least make sure that the people on the project team can get decisions made quickly (or make them themselves) and can identify, from the large number of documents, guidance notes and spreadsheets to work through, what the key decisions will be.

Reflection Five
Get the experts who know about each of the elements of your LMS/Discovery systems involved with the project team.  There’s a balance between having too many and too few people on your project team, but you need people who know your policies, processes, practices and workflows; who know your metadata (and metadata in general, in enough detail to configure normalisation, FRBRization and so on); and who know your technology and how to configure authentication and CSS.  Your project team is vital to your chances of delivering.

Reflection Six
Think about your workflows and document them.  Reflect on them as you go through your training.  LMS workflows have some flexibility, but you still end up going through the workflows used by the system.  Whatever workflows you start with, you will no doubt end up changing or modifying them once you are live.

Reflection Seven
Training.  Documentation is good.  Training videos are useful and have the advantage that they can be used whenever people have time.  But you still need a blended approach: staff can’t watch hours of videos, and you need to show people how your policies and practices will be implemented in the new LMS.  So be prepared to run face-to-face sessions for staff.

Reflection Eight
Regular software updates.  Alma gets monthly software updates.  Moving from a system that was relatively static we wondered about how disruptive it would be.  Advice from other Libraries was that it wasn’t a problem.  And it doesn’t seem to be.  There are new updated user guides and help in the system and the changes happen over the weekend when we aren’t using the LMS.

Reflection Nine
It’s Software as a Service, so it’s all different.  I think we were used to Discovery being provided this way, so that’s less of a change.  The LMS was previously run by our corporate IT department, so in some senses it has just moved from one provider to another.  We have a bit less control and flexibility to do things with it, but that’s OK; on the other hand we have more powerful tools and APIs.

Reflection Ten
Analytics is good and a powerful tool, but build up your knowledge and expertise to get the best out of it.  We’ve reduced our reports and run a smaller number than we’d thought we’d need.  Scheduled reports, widgets and dashboards are really useful, and we’re still only scratching the surface of what we can do.  Access to the reports that others in the community have shared is pretty useful, especially when you are starting.
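As an illustration of the reporting side, a saved Alma Analytics report can be retrieved over the Alma REST API.  The sketch below uses placeholder values for the region host, report path and API key, so treat it as a starting point and check the Ex Libris developer documentation for the exact parameters:

```python
import requests

# Placeholders: host depends on your region, the report path is hypothetical,
# and the API key comes from your Ex Libris developer account.
API_HOST = "https://api-eu.hosted.exlibrisgroup.com"
REPORT_PATH = "/shared/Our University/Reports/Loans by month"
API_KEY = "your-analytics-api-key"

response = requests.get(
    f"{API_HOST}/almaws/v1/analytics/reports",
    params={"path": REPORT_PATH, "limit": 100, "apikey": API_KEY},
    timeout=30,
)
response.raise_for_status()
print(response.text[:500])  # the report comes back as XML to be parsed downstream
```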

Reflection Eleven
Contacts with other users are really useful.  Sessions talking to other customers, user group meetings and the mailing lists have all been really valuable.  An active user community is a vital asset for any product, not just open source ones.

and finally, Reflection Twelve
We ran a separate strand to do some user research with students into what they wanted from library search.  This was invaluable: it gave us evidence to help in the procurement stage, but in particular it helped shape the decisions made about how to set up Primo.  We’ve been able to say ‘this is what library users want, and we have the evidence for it’, and that has been really important in being able to challenge thinking based on what we librarians think users want (or what we think they should want).

So, twelve reflections about the last few months.  Interesting, enlightening, enjoyable, frustrating at times, and tiring.  But worthwhile and achievable, and something that is allowing us to move away from a set of mainly legacy systems – not well integrated and not so easy to manage – to a set of systems that are better integrated, have better tools and, perhaps as importantly, give us a better platform to build from.

There seems to have been a flurry of activity around reading list systems in recent weeks.  There’s the regular series of announcements of new customers for Talis Aspire, which clearly seems to be the market leader in this class of systems, but there have also been two particular examples of the integration of reading list systems into Moodle.

Firstly, the University of Sussex have been talking about their integration of Aspire into Moodle.  Slides from their presentation at ALRG are available from their repository, and there is also a really good video they’ve put together that shows how the integration works in practice.  The video shows how easy it seems to be to add a section from a reading list directly into a Moodle course.  It looks like a great example of integration, and one that seems mostly to have been done without using the Aspire API.  One question I’d have is whether it automatically updates if changes are made to the reading list, but it looks like a really neat development.

The other reading list development comes from EBSCO, with their Curriculum Builder LMS plugin for EBSCO Discovery.  There’s also a video for this, showing an integration with Moodle.  This development makes use of the IMS Learning Tools Interoperability (LTI) standard to achieve the integration.  The approach mainly seems to come from the perspective of the Discovery system, with features to let you find content in EBSCO Discovery and then add it to a reading list, rather than a separate reading list builder.  It’s interesting to see the tool looked at from the perspective of a course creator developing a reading list, and useful to have features such as notes for each item on a list.  What looks to be different from the Sussex approach is that when you go to the reading list from within Moodle you are taken out of Moodle, rather than seeing the list of resources in-line.

There’s a developing resource bank of information on Helibtech at http://helibtech.com/Reading_Resource+lists that is useful for keeping an eye on developments in this area.

The approach we’ve been taking uses a system called Liblink (which, incidentally, was shortlisted this year for the Times Higher Education Leadership and Management Awards’ Departmental ICT Initiative of the Year).  Liblink developed out of a system created to manage dynamic content for our main library website, for pages like http://www.open.ac.uk/library/library-resources/statistics-sources

The concept was to pull resources from a central database that was updated regularly with data from systems such as SFX and the library catalogue.  This ensured that the links were managed and that there was a single record for each resource.  It then became obvious that the system, with some development, could replace a clutch of different resource list and linking systems that had been adopted over the years and could be used as our primary tool for managing links to resources.  The tool is designed to let us push out lists of resources as RSS so they can be consumed by our Moodle VLE, but it also offers other formats such as HTML, plain text and RIS.
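On the consuming side, here is a minimal sketch (the feed URL is invented) of what picking up one of those RSS lists and turning it into an HTML block for a VLE page or website might involve:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Invented feed URL: a Liblink-style RSS list of managed resource links.
FEED_URL = "https://library.example.ac.uk/liblink/lists/statistics-sources.rss"

def rss_items_to_html(feed_url):
    """Fetch an RSS 2.0 feed and render its items as a simple HTML list."""
    with urllib.request.urlopen(feed_url, timeout=30) as resp:
        tree = ET.parse(resp)
    items = tree.findall(".//item")
    links = [
        f'<li><a href="{item.findtext("link")}">{item.findtext("title")}</a></li>'
        for item in items
    ]
    return "<ul>\n" + "\n".join(links) + "\n</ul>"

# print(rss_items_to_html(FEED_URL))  # embed the returned HTML in a VLE page or web block
```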

I picked up over the weekend via the No Shelf Required blog that EBSCO Discovery usage data is now being added into Plum Analytics.  EBSCO’s press release talks about providing “researchers with a much more comprehensive view of the overall impact of a particular article”.  Plum Analytics was fairly recently taken over by EBSCO, so it’s not so surprising that they’d be looking at how EBSCO’s data could enhance the metrics available through Plum Analytics.

It’s interesting to see the different uses that activity data in this sphere can be put to.  There are examples of it being used to drive recommendations, such as ‘hot articles’ or Automated Contextual Research Assistance, and LAMP is talking of using activity data for benchmarking purposes.  So you’re starting to see a clutch of services driven by activity data, just as the likes of Amazon use customer activity to drive so much of what appears on their sales site.
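As a toy illustration of the recommendations end of this – invented data, and nothing like the scale or sophistication of the real services – a simple ‘users who viewed this also viewed’ list can be built from co-occurrence counts in activity data:

```python
from collections import Counter, defaultdict

# Invented sessions: each is the set of articles one user viewed.
sessions = [
    {"art1", "art2", "art3"},
    {"art1", "art2"},
    {"art2", "art4"},
]

# Count how often each pair of articles is viewed in the same session.
co_views = defaultdict(Counter)
for viewed in sessions:
    for a in viewed:
        for b in viewed:
            if a != b:
                co_views[a][b] += 1

def also_viewed(article, n=3):
    # "Users who viewed this also viewed..." ranked by co-occurrence count.
    return [other for other, _ in co_views[article].most_common(n)]

print(also_viewed("art1"))  # ['art2', 'art3']
```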

For a few months now we’ve been running a project to look at student needs from library search.  The idea behind the research is that we know students find library search tools difficult compared with Google – we know it’s a pain point.  But we don’t actually know in much detail what it is about those tools that students find difficult, what features they really want to see in a library search tool, and what they don’t want.  So we’ve set about trying to understand more about their needs.  In this blog post I’m going to run through the approach we are taking.  (In a later blog post I hope to cover some of the things we are learning.)

Approach
Our overall approach is that we want to work alongside students (something we’ve done before in our personalisation research) in a model that draws a lot of inspiration from co-design.  Instead of building something and then usability testing it with students at the end, we want to involve students at a much earlier stage in the process so that, for example, they can help to draw up the functional specification.

We’re fortunate in having a pool of 350 or so students who agreed to work with us for a few months as a student panel.  That means we can invite students from the panel to take part in research or give us feedback on a number of different activities.  Students don’t have to take part in a particular activity, but being part of the panel means they are generally predisposed to working with us.  So we’re getting a really good take-up of our invitations – I think that so far we’ve had more than 30 students involved at various stages, which gives us a good breadth of opinions from students studying different subjects, at different study levels and with different skills and knowledge.

We’ve split the research into three stages: an initial stage that looked at different search scenarios and different tools; a second stage that drew some general features out of the first phase and tried them on students; and a third phase that creates a new search tool and then undertakes an iterative cycle of develop, test, develop, test and so on.  The diagram shows the sequence of the process.

The overall direction of the project is that we should have a better idea of student needs to inform the decisions we make about Discovery, about the search tools we might build and about how we might set up the tools we use.

As with any research activities with students we worked with our student ethics panel to design the testing sessions and get approval for the research to take place.

Phase One
We identified six typical scenarios: finding an article from a reference, finding a newspaper article from a reference, searching for information on a particular subject, searching for articles on a particular topic, finding an ebook from a reference, and finding the Oxford English Dictionary.  All the scenarios were drawn from activities we ask students to do, so used the actual subjects and references they are asked to find.  We identified eight different search tools to use in the testing: our existing One stop search, the mobile search interface we created during the MACON project, a beta search tool on our library website, four search tools from other universities, and Google Scholar.  The tools had a mix of tabbed search, radio buttons and bento-box-style search results, chosen to introduce students to different approaches to search.

Because we are a distance learning institution, students aren’t on campus, so we set up a series of online interviews.  We were fortunate to be able to make use of the usability labs at our Institute of Educational Technology, and used TeamViewer software for the online interviews.  In total we ran 18 separate sessions, with each one testing three scenarios in three different tools.  This gave us a good range of different students testing different scenarios on each of the tools.

Sessions were recorded and notes were taken so we were able to pick up on specific comments and feedback.  We also measured success rate and time taken to complete the task.  The features that students used were also recorded.  The research allowed us to see which tools students found easiest to use, which features they liked and used, and which tools didn’t work for certain scenarios.
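For anyone curious what analysing those measures might look like, here is a small sketch with invented session rows (not our actual results) tallying success rates and median completion times per tool:

```python
from statistics import median

# Invented session rows: one entry per scenario attempt in a testing session.
results = [
    {"tool": "One stop search", "scenario": "find article", "success": True,  "seconds": 95},
    {"tool": "One stop search", "scenario": "find ebook",   "success": False, "seconds": 240},
    {"tool": "Google Scholar",  "scenario": "find article", "success": True,  "seconds": 60},
]

for tool in sorted({r["tool"] for r in results}):
    rows = [r for r in results if r["tool"] == tool]
    success_rate = sum(r["success"] for r in rows) / len(rows)
    median_time = median(r["seconds"] for r in rows)
    print(f"{tool}: {success_rate:.0%} success, median {median_time}s to complete")
```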

Phase two
For the second phase we chose to concentrate on testing very specific elements of the search experience.  For example, we looked at radio buttons and drop-down lists, and whether they should offer Author/Title/Keyword or Article/Journal title/Library catalogue.  We also looked at the layout of results screens and the display of facets, asking students how they wanted to see date facets presented, for example.

We wanted to carry out this research with some very plain wireframes, to test individual features without the distraction of website designs confusing the picture.  We tend to use a wireframing tool called Balsamiq to create our wireframes rapidly, and we ran through another sequence of testing, this time with a total of nine students in a series of online interviews, again using TeamViewer.

By using wireframing you can quickly create several versions of a search box or results page and put them in front of users.  It’s a good way of narrowing down the features worth taking through to full-scale prototyping.  It’s much quicker than coding the feature, and once you’ve identified the features you want your developer to build, you have a ready-made wireframe to act as a guide for the layout and functionality that needs to be created.

Phase three
The last phase is our prototype-building phase and involves taking all the research and distilling it into a set of functional requirements for our project developer to build against.  In some of our projects we’ve shared the specification with students so they can agree which features they want to see, but with this project we had a good idea from the first two phases what features they wanted in a baseline search tool, so we missed out that stage.  We did, however, split the functional requirements into two parts: a baseline set of requirements for the search box and the results, and then a section to capture the iterative requirements that would arise during the prototyping stage.  We aimed for a rolling cycle of build and test, although in practice we’ve set up sessions for when students are available and then gone with the latest version each time – getting students to test and refine the features and identify new ones to build and test.  New features get added to what is essentially a product backlog (in Scrum methodology/terminology).  A weekly team meeting prioritises the tasks for the developer to work on, and we go through a rolling cycle of develop/test.

Reflections on the process
The process seems to have worked quite well.  We’ve had really good engagement from students and really good feedback that is helping us to tease out what features we need to have in any library search tool.  We’re about half way through phase three and are aiming to finish the research by the end of July.  Our aim is to get the search tool up as a beta on the library website as the next step, so a wider group of users can trial it.

I’ve been catching up this week with some of the things from last week’s UKSG conference, viewing some of the presentations that have been put up on YouTube at https://www.youtube.com/user/UKSGLIVE.  There were a few of particular interest, especially those covering the Discovery strand.

The one that really got my attention was from Simone Kortekaas of Utrecht University, talking about their decision to move away from discovery: they have shut down their own in-house developed search system and are now looking at shutting down their WebOPAC.  The presentation is embedded below.

I found it interesting to work through the process they went through: from realising that most users were starting their search somewhere other than the library (mainly Google Scholar), to deciding to focus on making it easier for users to access library content through that route rather than trying to get users to come to the library and a library search tool.  It recognises that other players (i.e. the big search engines) may do discovery better than libraries.

I think I’d agree with the principle that libraries need to be where their users are.  So providing holdings to Google Scholar so the ‘find it at your library’ feature works, and providing bookmarklet tools (e.g. http://www.open.ac.uk/library/new-tools/live-tools) to help users log in, are all important things to do.  But whilst Google and Bing now seem to be better at finding academic content, they still lack Google Scholar’s ‘Library links’ feature and the ability to upload your holdings that would let you offer the same form of ‘Find it at the…’ feature in those spaces.  And with Google Scholar you always worry about how ‘mainstream’ it is considered.

It is an interesting direction to take as a strategic decision, and it means that you need to carefully monitor (as Utrecht do) trends in user activity and, in particular, changes in the major search engines, to make sure that your resources can still be found through them.  One consequence is that users are largely being taken to publisher websites to access the content, and we know that the variations in these sites can cause users some difficulty/confusion.  But it’s an approach to think about as we see where the trend for discovery takes us.


For a little while I’ve been trying to find some ways of characterising the different generations or ages of library ‘search’ systems.  By library ‘search’ I’ve been thinking in terms of tools to find resources in libraries (search as a locating tool) as well as the more recent trend (although online databases have been with us for a while) of search as a tool to find information.

I wanted something that I could use as a comparison that picked up on some of the features of library search but compared them with some other domain that was reasonably well known.  Then I was listening to the radio the other day and there was some mention that it was the anniversary of the 45rpm single, and that made me wonder whether I could compare the generations of library search against the changes in formats in the music industry.

My attempt at mapping them across is illustrated here.  There are some connections – discovery systems and streaming music services like Spotify are both cloud-hosted, and early printed music scores sit alongside the printed library catalogue, such as the original British Museum library catalogue.  I’m not so sure about some of the stages in between, though; certainly the direction for both has been to make library/music content more accessible.  But it seemed like a worthwhile thing to think about and try out.  Maybe it works, maybe not.


It was Lorcan Dempsey who I believe coined the term ‘full library discovery’ in a blog post last year.  As a stage beyond ‘full collection discovery’, ‘full library discovery’ adds in results drawn from LibGuides or library websites, alongside resource material from collections.  So, for example, a search for psychology might include psychology resources, as well as help materials for those psychology resources and contact details for the subject librarian who covers psychology.  Stanford and Michigan are two examples of that approach, combining lists of relevant resources with website results.

Princeton’s new All search feature offers a similar approach, discussed in detail in their FAQ.  This combines results from their Books+, Articles+, Databases, Library Website and Library Guides into a ‘bento box’ style results display.  Princeton’s approach is similar to the search from North Carolina State University, who I think were about the first to come up with this style.

Although in most of these cases I suspect the underlying systems are quite different, the approach is very similar.  It has the advantage of being ‘loosely coupled’: your search results page is drawn together in a ‘federated’ search style by pushing the search terms to several different systems via their APIs and then displaying the results in a dashboard-style layout.  Changes to any of the underlying systems can be accommodated relatively easily, yet the display to your users stays consistent.
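A minimal sketch of that pattern, with hypothetical endpoints (each real backend would have its own API and response format): fire the same query at each system in parallel and hand the grouped results to the page to lay out as a bento box:

```python
import concurrent.futures
import requests

# Hypothetical search endpoints, one per "bento box" panel; the real systems
# (catalogue, article index, site search, guides) each have their own APIs.
ENDPOINTS = {
    "Books": "https://catalogue.example.ac.uk/api/search",
    "Articles": "https://articles.example.ac.uk/api/search",
    "Guides": "https://guides.example.ac.uk/api/search",
}

def search_one(panel, url, query):
    # Each panel is fetched independently, so a slow or changed backend
    # only affects its own box, not the whole results page.
    resp = requests.get(url, params={"q": query, "limit": 3}, timeout=10)
    resp.raise_for_status()
    return panel, resp.json().get("results", [])

def bento_search(query):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(search_one, p, u, query) for p, u in ENDPOINTS.items()]
        return dict(f.result() for f in concurrent.futures.as_completed(futures))

# bento_search("psychology") -> {"Books": [...], "Articles": [...], "Guides": [...]}
```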

For me the disadvantages are the lack of any overriding relevancy ranking across the material, and the fact that it perpetuates the ‘silo-ing’ of content to an extent (Books, Articles, Databases etc.), driven largely by the underlying silos of systems we rely on to manage that content.  I’ve never been entirely convinced that users understand what the distinction of a ‘database’ might be.  But the approach is probably as good as we can get until we have truly unified resource management and more control over relevancy ranking.

Going beyond ‘full library discovery’
But ‘full library discovery’ is still very much a ‘passive’ search tool – by that I mean it isn’t personalised or ‘active’.  At some stage, to use those resources, a student will log in to the system, and that opens up an important question for me.  Once you know who the user is, how far should you go to provide a personalised search experience?  You know who they are, so you could provide recommendations based on what other students studying their course have looked at (or borrowed); you might even stray into ‘learning analytics’ territory and know which resources the highest-achieving students looked at.

You might know what resources are on the reading list for the course that student is studying – so do you search those resources first and offer them up as they might be most relevant?  You might even know what stage a student has got to in their studies, what assignment they have to do, and what resources they need to be looking at.  Do you ‘push’ those to a student?
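One possible shape for that reading-list idea, sketched with invented data and an arbitrary boost weight: re-rank the results so items on the student’s reading list float towards the top, rather than filtering everything else out:

```python
# Invented data: search results with a base relevance score, plus the set of
# identifiers on the reading list for the module the student is studying.
results = [
    {"id": "doc1", "title": "General psychology text", "score": 2.0},
    {"id": "doc2", "title": "Set book for the module", "score": 1.4},
]
reading_list = {"doc2"}

READING_LIST_BOOST = 1.5  # arbitrary weight: how strongly to favour set texts

def personalised_rank(results, reading_list):
    # Boost (rather than filter), so serendipitous finds still get a look-in.
    return sorted(
        results,
        key=lambda r: r["score"] * (READING_LIST_BOOST if r["id"] in reading_list else 1.0),
        reverse=True,
    )

for r in personalised_rank(results, reading_list):
    print(r["title"])  # the set book now appears first
```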

How far do you go in assembling a profile of what might be ‘recommended’ for a course, module or assignment, what other students in the cohort are looking at, or looked at the last time the course ran?  Do you look at a student’s previous search behaviour?  How much of this might you do to build, and then search, some form of ‘knowledge base’ with the aim of surfacing the material most likely to be relevant to a student?  A search for psychology in NCSU’s Search All box gives you the top three articles out of 2,543,911 articles in Summon, and likely behaviour is not to look much beyond the first page of results.  So should we be making sure those are likely to be the most relevant ones?

But then there’s serendipity: finding the different things you haven’t looked for before, or read before, because they are new or different.  One of the issues with recommendations is the tendency for them to be circular – ‘what gets recommended gets read’, to corrupt the performance indicator mantra.  So how far do you go?  ‘Mind-reading search’, anyone?
