You are currently browsing the category archive for the ‘Discovery systems’ category.

I’ve noticed recently when searching Google on an iPad that I’m seeing a different results display to the standard desktop display. [Screenshots: Google search interface on iPad, portrait and landscape orientations] I’m now seeing the results split up into a set of boxes. So there’s a box at the top containing paid advertising, followed by a box with three results from the web, followed by a box with a single result from news, and so on.

In landscape orientation you also get a related searches box on the right of the screen.  When you turn to portrait view the related searches drop to the bottom.  At the foot of the page is a ‘Next’ button that takes you to more results, including images.  On this second screen the related searches have dropped to the bottom and their place has been taken by more advertising.

Some of the boxes have a ‘More’ link, for news and images for example.  When you go on to pages three and four you are into a fairly standard Google web list, but still placed in a box.  I’m not sure when Google started doing this or whether it is a feature that is just being tested for mobile devices.  Not everyone seems to see it on iPads, so I’d be interested to know under what circumstances you get to see this approach.

It is very reminiscent of the ‘bento box’ type approach, pulling results from different places, and that’s something that we’ve been trying.  It’s not dissimilar to NCSU’s approach in terms of showing results from different types of content, e.g. http://www.lib.ncsu.edu/search/?q=psychology [Screenshot: NCSU library search results]

I think I’m quite surprised to find Google looking at this route.  For libraries it is attractive because it is a way to bring results together from several different systems.  Those systems are often the front-ends of the systems used to manage different types of content, and we often seem to struggle to join up all the different types of content into one integrated search solution.  Google have come to this from a very different place, in that they organise their content themselves in what you would presume is a consistent way, but they still feel the need to highlight content of different types (news, videos, images) to people.

But I think the difference is in the types of things that are being pulled out here.  You can see from NCSU that a typical list of different ‘stuff’ for libraries is Articles, Databases, Books & Media, Journals, Library website.  Yet for Google it is news, videos, images, maps: essentially quite high-level format concepts.  And I’m starting to think that one of the real problems for libraries is that we have put ourselves in a position where articles and journals are somehow seen as two different and separate things, when in reality a journal is just the packaging of several articles together.

Discovery
Interesting today to read Lorcan Dempsey’s latest blogpost on ‘Full library discovery’, noting trends to include a wider range of content either in the local indexes of discovery platforms or through API-based solutions, covering content from library websites, help and support materials and even the names of subject specialist librarians, all accessed through a single search box.  It certainly looks like an interesting approach and starts to make me wonder about the future of library websites being little more than a single search box.  I remember a debate with a library colleague a number of years back, when we were putting in Plone as an intranet solution, about whether to just let people search for content rather than design an overt navigation based around the information architecture. [Screenshot: beta search]

The bento box approach used by Stanford is interesting and something that we’ve been playing around with in a beta search we’ve been testing.  Stanford’s presentation of the results side-by-side in a wider display format is better than having to stack the boxes, but we’re constrained by our frustratingly narrow template.  Nonetheless, feedback on the approach is so far quite good.

At the moment, though, we’ve ended up with distinct versions of our discovery layer for staff and students (sans catalogue for students).  We’ve added our institutional repository into the discovery local index and will ultimately probably add in metadata for our developing digital library (using OAI-PMH).  But, as seems to be the case with all discovery solutions, coverage of our collections isn’t comprehensive, so local ‘front-end’ style solutions that essentially intercept a query by checking local collections and offering them as a ‘did you mean?’ may have some value to users.  But what you lose is the single index and its relevance ranking.
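As an aside, a minimal sketch of what that OAI-PMH harvesting might look like (the repository endpoint here is invented, and a real harvest would also want error handling and incremental from/until requests; the verb and resumptionToken parameters are standard OAI-PMH):

```python
# Minimal sketch of harvesting Dublin Core records over OAI-PMH so they could
# be loaded into a local discovery index. The endpoint URL is hypothetical.
import requests
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
BASE_URL = "https://repository.example.ac.uk/oai"  # invented endpoint


def harvest(base_url, metadata_prefix="oai_dc"):
    """Yield <record> elements, following resumption tokens until exhausted."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        root = ET.fromstring(requests.get(base_url, params=params).content)
        for record in root.iter(OAI_NS + "record"):
            yield record
        token = root.find(f"{OAI_NS}ListRecords/{OAI_NS}resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        # Subsequent requests carry only the verb and the resumption token.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}


if __name__ == "__main__":
    for i, rec in enumerate(harvest(BASE_URL)):
        if i >= 5:  # just peek at the first few records
            break
        print(ET.tostring(rec, encoding="unicode")[:200])
```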

Responsive web design
I’ve become pretty convinced that responsive web design is a better direction for our mobile-orientated offerings than dedicated mobile sites.  The content mobile and tablet users are viewing on our websites is now pretty similar.  Bohyun Kim’s latest slides on ‘Improving your library’s mobile services’ did, however, give me a little pause with some of the common problems (slides 56 onwards) with responsive web design.  Some important lessons about cutting down content, ensuring that there are options to get out of the responsive web design (not a dissimilar problem to getting trapped in a mobile website with cut-down content when on a smartphone or tablet), and keeping an eye on download file sizes.  Quite a few things to consider with RWD.

[Word cloud: single search box search terms] Reading through Lown, Sierra and Boyer’s article from ACRL on ‘How Users Search the Library from a Single Search Box’, based on their work at NCSU, started me thinking about looking at some data on how people are using the single search box that we have been testing at http://www.open.ac.uk/libraryservices/beta/search/ [Screenshot: single search box prototype].

About three months or so ago we created a prototype tool that pulls together results from the discovery product we use (EBSCO Discovery), results from the resources database that feeds the Library Resources pages on the library website, and pages from the library website itself.  Each result set is shown in a box (à la ‘bento box’) and they are just listed down the screen, with Exact Title Matches and Title Matches shown at the top, followed by Databases, Library Pages, Ebooks, Ejournals and then articles from EBSCO Discovery.  It was done in a deliberately simple way, without lots of extra options to manipulate or refine the lists, so we could get some very early views about how useful it was as an approach.
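For anyone curious about the shape of it, a very rough sketch of the grouping logic is below; the search_* functions are just placeholders standing in for the real calls to EBSCO Discovery, the resources database and the website search, not actual APIs.

```python
def build_bento(query, sources):
    """Group results per source into labelled boxes, pulling title matches to the top."""
    boxes = {"Exact Title Matches": [], "Title Matches": []}
    q = query.lower()
    for label, search in sources.items():
        results = search(query)
        boxes[label] = results
        for r in results:
            title = r.get("title", "").lower()
            if title == q:
                boxes["Exact Title Matches"].append(r)
            elif q in title:
                boxes["Title Matches"].append(r)
    return boxes


# Toy placeholder sources standing in for the real back ends.
def search_databases(q):
    return [{"title": "Web of Science", "url": "#"}]

def search_ebooks(q):
    return [{"title": f"Introduction to {q.title()}", "url": "#"}]


boxes = build_bento("psychology",
                    {"Databases": search_databases, "Ebooks": search_ebooks})
for label, items in boxes.items():
    print(label, [i["title"] for i in items])
```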

Looking at the data from Google Analytics, we’ve had just over 2,000 page views over the three months.  There’s a spread of more than 800 different searches [chart: search frequency], with the majority being repeated fewer than 6 times (fewer than 10% of terms recur more often than that).  I’d suspect that most of those repeated terms are ones where people have been testing the tool.

The data also allows us to pick up when people are doing a search and then choosing to look at more results from one of the ‘bento boxes’; effectively they do this by applying a filter to the search string, e.g. (&Filter=EBOOK) takes you to all the Ebook resources that match your original search term.  So 160 of the 2,000 page views were for Ebooks (8%) and 113 for Ejournals (6%), for example. [Chart: search filters]
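Counting those filter clicks from the exported page paths is simple enough; a sketch (the example paths are invented, only the &Filter= parameter name comes from the prototype):

```python
# Tally how often each Filter value appears in the query strings of page
# paths exported from Google Analytics.
from collections import Counter
from urllib.parse import urlparse, parse_qs

page_paths = [
    "/libraryservices/beta/search/?query=nursing&Filter=EBOOK",
    "/libraryservices/beta/search/?query=nursing",
    "/libraryservices/beta/search/?query=climate+change&Filter=EJOURNAL",
]

filter_counts = Counter()
for path in page_paths:
    qs = parse_qs(urlparse(path).query)
    for value in qs.get("Filter", []):
        filter_counts[value] += 1

total_views = len(page_paths)
for name, count in filter_counts.most_common():
    print(f"{name}: {count} ({count / total_views:.0%} of page views)")
```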

When it comes to looking at the actual search terms, they are overwhelmingly ‘subject’ type searches, with very few journal articles or author names in the search string.  There are a few more journal or database names, such as Medline or Web of Science.  But otherwise there is a very wide variety of search terms being employed and it very quickly gets down to single-figure frequencies.  The Wordle word cloud at the top of the page shows the range of search terms used in the last three months.

We’ve more work to do to look in more detail at what people want to do, but being able to look at the search terms that people use and see how they filter their results is quite useful.  Next steps are to do a bit more digging into Google Analytics to see what other useful data can be gleaned about what users are doing in the prototype.

[Screenshot: Kritikos search interface] I noticed an interesting Jisc-funded project at Liverpool (blogged by Jisc today) that I hadn’t previously heard about, which describes a method of sharing resources amongst students using a crowdsourcing approach.  The service is called Kritikos and takes several quite interesting approaches.  At the heart of the system is some work that has been done with students to identify resources relevant to their subjects (in this case Engineering) and also to identify results that weren’t relevant (often because some engineering terms have different meanings elsewhere – e.g. stress). That’s an interesting approach, as one of the criticisms I’ve heard about discovery systems is that they struggle to distinguish between terms that are used across different disciplines (differentiation, for example, having separate meanings in mathematics and biology).

The search system uses a Google Custom Search Engine but then presents the results as images, which is a fascinating way of approaching this aspect.  Kritikos also makes use of the Learning Registry to store data about students’ interactions with resources and whether they found them relevant or not.  It seems to be a really novel approach to providing a search system, and one that could go some way to addressing one of the common comments that we’ve been seeing in some work we’ve been doing with students: they feel that they are being deluged with too much material and struggle to find the gold nuggets that give them everything they want.

Kritikos looks to be particularly useful for students in the later stages of their degrees, where they are more likely to be doing some research or independent study.  One of the things that we are finding from our work is that students at earlier stages are less interested in what other students are doing or what they might recommend.  But possibly if they were presented with something like Kritikos they might be more inclined to see the value of other students’ recommendations.

Everything is miscellaneous
I’ve finally got around to reading “Everything is Miscellaneous” by David Weinberger (yes, I know that is about five years after everyone else, and there’s no real reason not to have read it, other than a sense of not wanting to follow everyone else).  I’m reading it in paperback form, as we don’t seem to have it as an ebook, which gives it a slight sense of being older than it actually is, particularly with the pages of the paperback being slightly yellowed.  It is also interesting to me to pick up a library book added to stock in 2009 that has two date stamps on the date label. Two loans in four years brings home what a different world academic libraries are from public libraries.

While there’s a slight sense of things having moved on in the post-Twitter world in terms of some of the technologies, it is a really interesting read with lots of things to think about, and it is making me think about the approach we take to providing access to library materials.  I am particularly thinking about how we present material through our library website, either with search tabs for articles, books etc., or by categorising library resources into journals, databases or ebooks, or even by using different systems to manage different types of material.  As David Weinberger points out, that is just a carry-over from the old analogue and physical world that makes no real sense to users in a digital world.  And that is something that needs reinforcing regularly, as it is easy to lose sight of.

Tagging, sharing and perspective
One of the things that is starting to come out of our personalisation surveying and focus groups is that users want what is relevant to them.  Well, not a great surprise, but then that isn’t something that our systems really facilitate, is it?  Where we are at the moment is still thinking in terms of how you get something depending on what type of thing it is.  For a physical library that’s relevant, in that the leaf is only on the tree in one place, to use Weinberger’s analogy.  But in the digital world, all the stuff is website content, and all the constraints are artificially created (that doesn’t mean that they are not necessary in some cases).  So you access ebooks through the catalogue because that is where we put them, often for our administrative convenience.  But users might want them in different places at different times.  And in a world where users expect to be able to shape their view of the world by customising the ‘library channel’, as you can do with Spotify or any number of web-scale services, the single ‘take-it or leave-it’ library approach seems curiously archaic.

Discovery
So what does that mean for discovery, and especially for discovery systems?  Are discovery systems the right solution?  Discovery systems and the Google-like search box are an attempt to pull stuff together into one place.  So upload your catalogue into your discovery platform and you can lose the OPAC – maybe.  It seems to me that this makes relevancy ranking a much more important area.  But it still doesn’t really start to approach anything that is particularly ‘socially’ or ‘user-aware’.

As a user you probably want to decide what is relevant to you; you might want to tag that content and probably share it too.  And you’d probably expect to be able to see other users’ tags and use them to find material relevant to you as well.  But with library systems we take the view that we have to have special people who we trust to add accurate metadata.  I hate to say this, but I think that’s another legacy of the physical age, and not really viable for the explosion in digital content that is upon us.

So you start to have a model where users expect the system to know something about them (what course they are on, for example – does your discovery platform know that?), to filter based on their likely interests, and then to learn from what they search for (and what others search for, or tag) to find other things they might be interested in.  I start to think that this is at the heart of user dissatisfaction with library systems: there is a great disconnect between them and their experience of the rest of the web.

Is it feasible, could we experiment, what might that space look like?  Discovery is miscellaneous now…

Encouraged by some thinking about what sort of prototype resource usage tools we want to build to test with users in a forthcoming ‘New tools’ section, I’ve been starting to think about what sort of features you could offer to library users to let them take advantage of library data.

Early steps
For a few months we’ve been offering users of our mobile search interface (which just does a search of our EBSCO Discovery system) a list of their recently viewed items and their recent searches. [Screenshot: mobile search results] The idea behind testing it on a mobile device was that giving people a link to their recent searches or items viewed would make it easier for them to get back to things they had accessed on their mobile device, by just clicking single links rather than having to bookmark them or type in fiddly links. At the moment the tool just lists the resources and searches you’ve done through the mobile interface.

But our next step is to make a similar tool available through our main library website as a prototype of ‘articles I’ve viewed’. And that’s where we start to wonder whether the mobile version of the searches/results should be kept separate from the rest of your activities, or whether user expectations would be that, like a Kindle ebook that you can sync across multiple devices, your searches and activity should be consistent across all platforms.

At the moment our desktop version has all your viewed articles, regardless of the platform you used. But users might in future want to know which device they used to access the material, perhaps because some material isn’t easily accessible through a mobile device. That opens up another question, in that the mobile version and the desktop version may be at different URLs, so you might want them pulled together as one resource with automatic detection of your device when you go to access it. [Screenshot: articles I've read]
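One way to get that consistency would be a single per-user activity store that both the mobile and desktop interfaces write to, recording which device was used. A minimal sketch, with invented table and column names:

```python
# Every viewed item is recorded once, with the device noted, so any interface
# can show one merged 'articles I've viewed' list.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE viewed_items (
    user_id TEXT, item_id TEXT, title TEXT, device TEXT, viewed_at TEXT)""")

def record_view(user_id, item_id, title, device):
    conn.execute("INSERT INTO viewed_items VALUES (?, ?, ?, ?, ?)",
                 (user_id, item_id, title, device,
                  datetime.now(timezone.utc).isoformat()))

def recent_items(user_id, limit=10):
    return conn.execute(
        """SELECT title, device, viewed_at FROM viewed_items
           WHERE user_id = ? ORDER BY viewed_at DESC LIMIT ?""",
        (user_id, limit)).fetchall()

record_view("student42", "eds:12345", "Qualitative research methods", "mobile")
record_view("student42", "eds:67890", "Learning analytics in HE", "desktop")
print(recent_items("student42"))
```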

Next steps
With data about which resources and which library web pages are being accessed, it starts to open up the possibility of some more user-centred use of library activity and analytics data.

So you could conceive of spotting a spike of users accessing the Athens problems FAQ page and tying that to users trying to access Athens-authenticated resources. Being able to match activity with students being on a particular module could allow you to automatically push some more targeted help material, maybe into the VLE website for relevant modules, as well as flag an indication of a potential issue to the technical and helpdesk teams.

You could also contemplate mining reading lists and course schedules to predict when particular activities are scheduled and automatically push relevant help and support or online tutorials to students. Some of the most interesting areas seem to me to be around building skills and using activity (or lack of activity) to trigger promotion of targeted skills-building activities. So, knowing that students on module X should be doing an activity that involves looking at a particular set of resources, you could detect the students that haven’t accessed those resources and offer them some specific help material, or even contact from a librarian. Realistically those sorts of interventions simply couldn’t be managed manually and would have to rely on some form of learning analytics-type trigger system.
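As a rough illustration of that kind of trigger, the sketch below takes an invented set of resources expected for a module activity, plus a usage log, and flags the enrolled students with no recorded access to any of them:

```python
# All data here is made up for illustration; in practice the resource sets
# would come from reading lists and the log from usage/analytics data.
module_resources = {"MODX-activity-4": {"db:medline", "ejournal:bmj", "ebook:research-methods"}}

access_log = [
    ("student01", "db:medline"),
    ("student01", "ejournal:bmj"),
    ("student02", "ebook:research-methods"),
]
enrolled = {"MODX-activity-4": {"student01", "student02", "student03"}}

def students_needing_help(activity):
    expected = module_resources[activity]
    accessed = {}
    for student, resource in access_log:
        if resource in expected:
            accessed.setdefault(student, set()).add(resource)
    # Flag anyone with no recorded access to any of the expected resources.
    return {s for s in enrolled[activity] if not accessed.get(s)}

print(students_needing_help("MODX-activity-4"))  # -> {'student03'}
```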

One of the areas that would be useful to look at would be some form of student dashboard for library engagement. This could give students some data about the engagement they have had with the library, e.g. resources accessed, library skills completed, library badges gained, library visits, books/ebooks borrowed etc. Maybe set against averages for their course, and perhaps with some metrics about what high-achieving students on their course last time did. Add to that a bookmarking feature, lists of recent searches and resources used, and lists of loans/holds. Finished off with useful library contacts and some suggested activities that might help them with their course, based on what is known about the level of library skills needed in the course.

Before you can do some of the more sophisticated learning analytics-type activities, I suspect it would be necessary to have a better understanding of the impact that library activities/skills/resources have on student retention and achievement. And that seems to me to argue for some really detailed work to understand library impact at a ‘pedagogic’ level.

I’ve been reading a great blog post by Peter Morville on Semantic Studios, ‘Inspiration Architecture: the Future of Libraries‘, and it includes a description that really resonated:

There was even a big move towards the vision of “library as platform.” Noble geeks developed elaborate schemata for open source, open API, open access environments with linked data and semantic markup to unleash innovation and integration through transparency, crowdsourcing, and mashups. They waxed poetic about the potential of web analytics and cloud computing to uncover implicit relationships and emerging patterns, identify scholarly pathways and lines of inquiry, and connect and contextualize artifacts with adaptive algorithms. They promised ecosystems of participation and infrastructures for the creation and sharing of knowledge and culture.

Unfortunately, the folks controlling the purse strings had absolutely no idea what these geeks were talking about, and they certainly weren’t about to entrust the future of their libraries (and their own careers) to the same bunch of incompetent techies who had systematically failed, for more than ten years, to simply make the library’s search box work like Google.

I’ve highlighted the last bit as it really struck home.  The great search for a library equivalent of the Google search box is something that is familiar to anyone trying to build better ways of helping users get to library content.  It has pretty much been a mantra over the past few years.  (For a great summary of how library search systems differ from Google, look at Aaron Tay’s blogpost from last May and his blogpost on web scale discovery from December last year.)  So it’s easy to find examples of where libraries and other organisations have tried to put in place a Google-like search, from the Biodiversity Heritage Library and American University to others such as the National Archives and Records Administration (reported in Information Management Journal, March 2011) and Oregon State University (paper by Stefanie Buck and Jane Nichols, ‘Beyond the search box’, in Reference & User Services Quarterly, March 2012).

The current generation of discovery systems (Summon, EDS, Primo etc.) is largely built around the concept of a ‘Google-like’ search, as reported here for McGill University by OCLC for WorldCat Local. In some ways it seems to me that we’ve been concentrating too much on the simplicity of the original Google interface; as Lorcan Dempsey pointed out in his ‘Thirteen Ways of Looking at Libraries, Discovery and the Catalog’:

‘a simple search box has only been one part of the Google formula. Pagerank has been very important in providing a good user experience, and Google has progressively added functionality as it included more resources in the search results. It also pulls targeted results to the top of the page (sports results, weather or movie times, for example), and it interjects specific categories of results in the general stream (news, images).’

So although we’ve implemented a ‘Google-like’ search box, it becomes apparent that it doesn’t entirely solve the problem. It’s a bit like a false summit or false peak: you think you’ve reached the top but realise that you still have some way to go.  Relevancy ranking becomes vitally important, and with the discovery service generation you’ve essentially handed control of the relevancy algorithm over to a vendor.  You can add your local content into the system and have some control, but it is limited.  And you are constrained in what you can add into the discovery platform: your catalogue and link resolver/knowledge base generally yes, your institutional repository yes, but your other lists of resources in simple databases not so easily, unless they happen to be available as OAI-PMH or MARC.

So you look at bringing together content from different systems, probably using the bento box approach (as used by Stanford and discussed by them here), where you search across your different systems using APIs etc. and return a series of results from each of those systems.  You then get a set of results from each of the different systems, each carrying its own system’s relevancy ranking, rather than a single relevancy ranking across all of the results together.  So is that going to be any better for users?  Is it better to sort the results by system, as Stanford have done, or should we be trying to pull results together, as Google do?  That’s something we need to test.
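Mechanically, that fan-out is just a set of parallel API calls whose result lists are kept separate, each in its own system’s relevancy order. A sketch, using placeholder back-end functions rather than real vendor APIs:

```python
# Query each system in parallel and return its results as a separate,
# already-ranked list; nothing merges or re-ranks across systems.
from concurrent.futures import ThreadPoolExecutor

def search_discovery(q):   # stand-in for a discovery-layer API call
    return [f"Discovery result for '{q}' #{i}" for i in range(1, 4)]

def search_catalogue(q):   # stand-in for a catalogue API call
    return [f"Catalogue result for '{q}' #{i}" for i in range(1, 3)]

def search_repository(q):  # stand-in for a repository API call
    return [f"Repository result for '{q}' #{i}" for i in range(1, 2)]

BACKENDS = {"Articles": search_discovery,
            "Books & Media": search_catalogue,
            "Repository": search_repository}

def bento_search(query):
    with ThreadPoolExecutor() as pool:
        futures = {label: pool.submit(fn, query) for label, fn in BACKENDS.items()}
        return {label: future.result() for label, future in futures.items()}

for label, results in bento_search("psychology").items():
    print(label, results)
```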

But there’s a nagging feeling that this still all relies on users having to come to ‘the library’ rather than being where the users are.  So OK, we can put library search boxes into the Virtual Learning Environment, so we’ve an article search tool that searches our Discovery system, but if your users start their search activity with Google, then the challenge is about going beyond Google Scholar to get library discovery up to the network level.

Frictionless resource access
One of the particular aspects of working in Library Services at a distance-learning institution is that, without a physical building, ‘the library’, at the centre of a campus-based student experience, our library is a much less visible entity.  So I was intrigued to see the write-up on the Guardian’s HE blog reporting on the LISU report ‘Working together: evolving value for academic libraries’ start with the comment:

A common complaint from my librarian friends: too often users fail to appreciate that the resources they use online are only available to them because the library has purchased them. This is aggravated by confusion about what an academic library is. Researchers actively using library resources online may not think of themselves as using the library because they have not recently visited the building

It is interesting to see that, as user engagement with libraries becomes increasingly virtual and digital rather than physical, even those libraries with a strong physical presence are now having to grapple with similar issues of visibility.  It also brought to mind a blog post by Tom Scheinfeldt from earlier in the year about how digital technology makes the library invisible. Apart from being a really interesting read with some good ideas about the sorts of services libraries should be offering in the areas of collections, scholarly communications and support for data-driven research, there was one comment in the post that really struck me: ‘in most cases, the library is doing its job best when it is invisible to its patrons’.

But the visibility of the library is now really important.  ‘The library resources are good enough for my needs’ is now a measure in the Higher Education Key Information Set, so if your students don’t know that a particular service or facility was provided by the library, that might affect your score in the National Student Survey.   And that makes me start to think about the direction of a lot of what we’ve tried to do over the past few years.

Tony Hirst uses the term ‘frictionless’ (http://www.slideshare.net/psychemedia/jibs-keynote-draft) to describe an evolving role for libraries and librarians.  Alongside lots of ideas about areas that libraries should be working in, he describes many of the restrictions, such as access and authentication, as friction, in that they act as a means of slowing or regulating access.  So we do things like embedding direct links to library resources into the Virtual Learning Environment, using links constructed with EZProxy that take students directly to the resource as if they were on campus.  We handle redirections and persistence with systems to try to remove some of the friction.  But does that come at the cost of visibility?  Our approach has been not to force students to come to a specific ‘library space’ but to save their time by saying ‘click on this link and it takes you to the resource you need’.  For a frictionless student experience you don’t need to know that the VLE you are using is developed and run by one department and the resources you are using are managed by another.  But if you don’t know that the resources are provided by the library, when you have to answer the question of whether ‘the library resources are good enough for my needs’, what are you going to say?
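Those embedded links generally follow EZProxy’s standard ‘starting point URL’ convention – the proxy’s login URL with the target resource appended. A small sketch, with an invented proxy hostname:

```python
# Build an EZProxy 'starting point URL': login?url= followed by the resource.
# The proxy hostname and target URL below are illustrative only.
from urllib.parse import quote

EZPROXY_PREFIX = "https://ezproxy.example.ac.uk/login?url="  # invented host

def proxied_link(resource_url):
    """Return a link that sends the student via EZProxy to the resource."""
    return EZPROXY_PREFIX + quote(resource_url, safe=":/?&=")

print(proxied_link("https://www.example-publisher.com/article/12345"))
```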

The proposition
I’ve spent the last couple of days at a fascinating JISC/SCONUL workshop, ‘The Squeezed Middle? Exploring the future of Library Systems’.  ‘The Squeezed Middle’ refers to the concentration of attention in recent months on electronic resource management (in the SCONUL Shared Services and Knowledge Base+ activities) and discovery systems (such as Summon, EDS and Primo), which has rather taken the focus away from other library systems, notably the Library Management System.  In part, it was explained, this was deliberate: open source LMSs (such as Kuali OLE and Evergreen) and new systems such as Alma from Ex Libris, which look at unifying print, electronic and digital resource management, have been (and still are) in development and need time to mature.  But we are now starting to see these developments moving on and open source starting to be adopted (by Staffordshire University library, for example).  So the time is right to start to focus on these systems afresh.

The workshop
Punctuating the workshop were a series of deliberately provocative and challenging ‘visions’ of the future library of 2020 and a video from Lorcan Dempsey.  [Paul Walk has blogged his here.] Against this background we looked at several topics, such as collections, space, systems and expertise, around the library systems domain.  Overnight we looked at a series of sixty-odd themes and activities, and followed that up today by looking at the prioritisation and value of those activities to try to understand what might be the priority tasks.

Reflections
A few things came to mind for me during and after the workshop.  Firstly, there maybe isn’t a clear definition of the boundaries of this space, and really no common view of what aspects of print/electronic/digital processes and collections we are scoping and addressing.  It also struck me that a lot of the issues, concerns and priorities were about data rather than systems or processes.  So they included topics such as licences for ebooks, open bibliographic metadata, passing data to institutional finance systems and activity data, for example.  I do find it particularly interesting that, despite the effort that goes into the data that libraries consume, there are some really big tasks to address to flow data around our systems without duplication or unnecessary activity.  (Incidentally, there’s a concept used in customer care termed ‘unnecessary contact’, and there used to be a National Indicator, NI14, under which local government had to reduce unnecessary contact – in other words, reduce the instances where customers have to contact you for further clarification, so that you deal with the issue at the first point of contact.  I start to wonder whether there’s a similar concept that we might apply to libraries when we carry out extra processing and cataloguing instead of taking ‘shelf-ready’ books and downloaded bibliographic records – unnecessary refinements, maybe?)

I also found it interesting how the topic of reading list solutions came up as a hot issue.  It’s of particular interest to me given my involvement in the OU’s TELSTAR reference management project. The Reading-List-Solutions JISCMail list has been busy in the last week talking about the various systems (often developed in house).  And it was really fascinating to see how such a fundamental and time-consuming part of our daily work hasn’t really been solved, let alone integrated completely into the procurement and discovery workflow. Although I know that there’s some significant complexity there, I find it particularly strange that it hasn’t been a feature built into the LMS.

Final thoughts or library systems of the future

It seems to me that there are some general principles that you could think about for future library systems in this space.  And I suppose I’m thinking beyond the next generation of systems such as Alma.  These may be completely off-the-wall ideas, but there are a few things that come to mind as we move towards 2020. So what might a 2020 LMS look like?

> the systems are componentised (think Drupal CMS), so both libraries and users can choose which components they use.  And they are largely about flowing data, workflow and process rather than about storing data.

> users control their own profiles (and data) – we (institutions) give them a ‘key’ to access collections we have paid for (so authentication is at a network level or with aggregators?)

> catalogues are distributed – linked data uses the most appropriate vocabularies, most not even run by libraries – local elements are added at the time you choose to procure – there is no ‘catalogue record’ as such but a collection of descriptive elements – you choose where you get your records from, but you don’t download them to ‘your’ LMS database

> discovery interface is at the choice of the user – collections are packaged/streamed? and contributed to the aggregators

> rather than a model where libraries buy licensed content and then run systems for their users to access that content – so that all institutions largely duplicate their systems – maybe the content owners/aggregators provide the access, as they already start to do with discovery systems?

> there is a ‘rump’ of an LMS database that is your audit trail of transactions and holdings (but with network-level unique IDs that link to descriptive data held at the network level), and statistics are held in the cloud (JUSP+++)

> so we contribute our special digital and electronic collections – either to national scale repositories or to open discovery systems?

Maybe not very realistic, and somewhat fanciful, but something that is a world away from the monolithic LMS that even the open source and new-generation systems seem to be building.

All round it was a really good and enjoyable workshop and I’m glad I had the opportunity to go.  I hope the stuff we’ve done helps to inform the future thinking and directions.  Thanks to SCONUL and JISC for organising/funding it and to Ben Showers and David Kay.

[Screenshot: search box on library website] A couple of tweets today flagged up Andrew Asher’s paper on ‘Search Magic’ on his ‘An Anthropology of Algorithms’ blog (a great title for a blog).  As he explains in the paper, it is based on research he has been conducting into how students find and use information as part of the ERIAL project.

Student search behaviour is something that is of great interest to me as I work at a University that delivers courses at a distance so library search is one of the main ways that students interact with our library.  We’ve grappled with the challenge of how we present library search for a while and I’ve blogged about it before a couple of times, most recently here.

So it is really good to see Andrew’s thoughts and research into library search.  It’s interesting to read about the rise of the secretive ‘algorithmic culture’ that he describes, as it really starts to explain the trust that users invest in search engines like Google and the implications that this has for library search systems.  We’ve all recognised the impact that Google has on student expectations, and Andrew clearly identifies the simplicity of the single search box and simple keyword searching as something that libraries have been trying to mimic.  Given that library resources have rather less internal coherence (e.g. the typical federated search systems) than Google’s search index, maybe it’s not surprising that the record is mixed.

The figures Andrew reports clearly show students using library search systems as they would Google, which leads to problems with too few or too many search results appearing.  That is a problem that is all too familiar to users of the new generation of discovery systems such as EBSCO Discovery and Summon.  As Andrew points out, these systems also use relevance ranking algorithms that they can be quite proprietary about.

I suppose I’m not surprised that students largely aren’t using what librarians would consider to be the most appropriate search tool for their particular enquiry.  They use what they have had success with in the past.  At undergraduate level at least, I’m not surprised that students don’t have the knowledge of which is the most appropriate database to use.  That’s a skill that librarians have had to master, and although we all do a lot to try to get this type of domain search information across, it clearly doesn’t get through.  But perhaps the concentration of effort on ‘one-stop’ type discovery searches is obscuring that message?

Andrew also covers students’ skills in evaluating (or not evaluating) the quality of results and the self-perpetuating loop of trusting results listed on the first page.  Certainly the examples of students deciding that because their search didn’t turn up any results ‘then the information must not exist and they should give up on the topic’ are familiar.

A really fascinating and useful paper and piece of research into student search behaviour and something I look forward to hearing more about.
