There seems to have been a flurry of activity around reading list systems in recent weeks. There’s the regular series of announcements of new customers for Talis Aspire, which seems clearly to be the market leader in this class of systems, but there have also been two particular examples of reading list systems being integrated into Moodle.

Firstly, the University of Sussex have been talking about their integration of Aspire into Moodle. Slides from their presentation at ALRG are available from their repository, and there’s also a really good video they’ve put together that shows how the integration works in practice. The video shows how easy it seems to be to add a section from a reading list directly into a Moodle course. It looks like a great example of integration, and one that seems mostly to have been done without using the Aspire API. One question I’d have is whether the embedded list automatically updates when changes are made to the reading list, but it looks like a really neat development.

The other reading list development comes from EBSCO, with their Curriculum Builder LMS plugin for EBSCO Discovery. There’s also a video for this, showing an integration with Moodle. This development uses the IMS Learning Tools Interoperability (LTI) standard to achieve the integration. The approach seems mainly to be framed from the Discovery side, with features to let you find content in EBSCO Discovery and then add it to a reading list, rather than being a separate reading list builder. It’s interesting to see the tool approached from the perspective of a course creator developing a reading list, and useful to have features such as notes for each item on a list. What looks different from the Sussex approach is that when you go to the reading list from within Moodle you are taken out of Moodle, rather than seeing the list of resources in-line.
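Under the hood an LTI 1.x launch is just a form POST signed with OAuth 1.0a using a key and secret shared between the VLE and the tool. A minimal sketch of that mechanism in Python (this is not EBSCO’s actual implementation – the launch URL, consumer key and secret are invented placeholders):

```python
# Sketch of an IMS LTI 1.x launch: the VLE signs a form POST with
# OAuth 1.0a so the tool can verify where the launch came from.
# The URL, consumer key and secret below are hypothetical.
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

launch_url = "https://tool.example.com/lti/launch"  # hypothetical tool endpoint
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "moodle-course-123-reading-list",
    "user_id": "student-456",
    "roles": "Learner",
}

client = Client("consumer-key", client_secret="shared-secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    launch_url,
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(body)  # the signed form fields the browser POSTs to the tool
```

Because the signature covers all the form fields, the tool provider can trust the user id, role and course context that Moodle sends across.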

There’s a developing resource bank of information on Helibtech at http://helibtech.com/Reading_Resource+lists that is a useful way to keep an eye on developments in this area.

[Screenshot: Liblink admin screen]

The approach we’ve been taking uses a system called Liblink (which, incidentally, was shortlisted this year for the Times Higher Education Leadership and Management awards for Departmental ICT Initiative of the Year). Liblink developed out of a system created to manage dynamic content for our main library website, for pages like http://www.open.ac.uk/library/library-resources/statistics-sources.

The concept was to pull resources from a central database that was being updated regularly with data from systems such as SFX and the library catalogue. This ensured that the links were managed and that there was a single record for each resource. It then became obvious that, with some development, the system could replace a clutch of different resource list and linking systems that had been adopted over the years, and could become our primary tool for managing links to resources. The tool is designed to let us push out lists of resources as RSS so they can be consumed by our Moodle VLE, but it also offers other formats such as HTML, plain text and RIS.
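As a rough illustration of that pattern (this is not the actual Liblink code, and the feed URL is invented), consuming one of those RSS lists and re-serialising it as RIS might look something like this:

```python
# Sketch: read an RSS feed of managed resource links and emit RIS.
# The feed URL is a hypothetical Liblink-style list.
import feedparser

feed = feedparser.parse("https://library.example.ac.uk/liblink/list/123.rss")

for entry in feed.entries:
    # Map the basic RSS fields onto RIS tags; a real export would carry
    # richer metadata (authors, dates, identifiers) from the central database.
    print("TY  - ELEC")              # generic electronic resource
    print(f"TI  - {entry.title}")
    print(f"UR  - {entry.link}")
    print("ER  - ")
```

The same single record in the central database can then feed the VLE, the website and a citation manager without the link being maintained in several places.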

I picked up over the weekend, via the No Shelf Required blog, that EBSCO Discovery usage data is now being added into Plum Analytics. EBSCO’s press release talks about providing “researchers with a much more comprehensive view of the overall impact of a particular article”. Plum Analytics was fairly recently taken over by EBSCO, so it’s not surprising that they’d be looking at how EBSCO’s data could enhance the metrics available through Plum Analytics.

It’s interesting to see the different uses that activity data in this sphere can be put to. There are examples of it being used to drive recommendations, such as hot articles, or Automated Contextual Research Assistance, and LAMP is talking about using activity data for benchmarking. So you’re starting to see a clutch of services driven by activity data, just as the likes of Amazon drive so much of what appears on their sales site from customer activity data.
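At its simplest, the recommendation end of this is co-occurrence counting over usage events. A toy sketch (with invented data) of the ‘people who viewed this also viewed’ idea:

```python
# Toy "also viewed" recommender: count how often pairs of articles
# are viewed by the same user. The event data is invented.
from collections import Counter
from itertools import combinations

events = [  # (user, article) view events
    ("u1", "a1"), ("u1", "a2"), ("u2", "a1"),
    ("u2", "a2"), ("u2", "a3"), ("u3", "a2"), ("u3", "a3"),
]

items_by_user = {}
for user, item in events:
    items_by_user.setdefault(user, set()).add(item)

co_counts = Counter()
for items in items_by_user.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, n=2):
    """Articles most often viewed by the same users as `item`."""
    scored = {b: c for (a, b), c in co_counts.items() if a == item}
    return sorted(scored, key=scored.get, reverse=True)[:n]

print(recommend("a1"))  # -> ['a2', 'a3']
```

Production services layer normalisation, recency weighting and privacy controls on top, but the underlying signal is the same.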

For a few months now we’ve been running a project to look at student needs from library search. The idea behind the research is that we know students find library search tools difficult compared with Google; we know it’s a pain point. But we don’t actually know in much detail what it is about those tools that students find difficult, what features they really want to see in a library search tool, and what they don’t want. So we’ve set about trying to understand more about their needs. In this blog post I’m going to run through the approach we are taking. (In a later blog post I hope to cover some of the things we are learning.)

Approach
Our overall approach is to work alongside students (something we’ve done before in our personalisation research) in a model that draws a lot of inspiration from co-design. Instead of building something and then usability testing it with students at the end, we want to involve students at a much earlier stage in the process, so that, for example, they can help to draw up the functional specification.

We’re fortunate in having a pool of 350 or so students who agreed to work with us for a few months on a student panel. That means we can invite students from the panel to take part in research or give us feedback on a small number of different activities. Students don’t have to take part in a particular activity, but being part of the panel means they are generally predisposed to working with us. So we’re getting a really good take-up of our invitations – I think that so far we’ve had more than 30 students involved at various stages, which gives us a good breadth of opinion from students studying different subjects, at different study levels and with different skills and knowledge.

We’ve split the research into three stages: an initial stage that looked at different search scenarios and different tools; a second stage that drew some general features out of the first phase and tried them on students; and a third phase that creates a new search tool and then undertakes an iterative cycle of develop, test, develop, test and so on. The diagram below shows the sequence of the process.

[Diagram: Discovery research stages]

The overall aim of the project is to give us a better idea of student needs to inform the decisions we make about Discovery – about the search tools we might build, or how we might set up the tools we use.

As with any research activity involving students, we worked with our student ethics panel to design the testing sessions and get approval for the research to take place.

Phase One
We identified six typical scenarios: finding an article from a reference, finding a newspaper article from a reference, searching for information on a particular subject, searching for articles on a particular topic, finding an ebook from a reference, and finding the Oxford English Dictionary. All the scenarios were drawn from activities we ask students to do, so they used the actual subjects and references that students are asked to find. We identified eight different search tools to use in the testing: our existing One stop search, the mobile search interface we created during the MACON project, a beta search tool on our library website, four search tools from other universities, and Google Scholar. The tools offered a mix of tabbed search, radio buttons and bento-box-style search results, chosen to introduce students to different approaches to search.

Because we are a distance learning institution, students aren’t on campus, so we set up a series of online interviews. We were fortunate to be able to make use of the usability labs at our Institute of Educational Technology, and used TeamViewer software for the online interviews. In total we ran 18 separate sessions, with each one testing three scenarios in three different tools. This gave us a good range of different students testing different scenarios on each of the tools.

Sessions were recorded and notes were taken, so we were able to pick up on specific comments and feedback. We also measured the success rate and the time taken to complete each task, and recorded which features students used. The research allowed us to see which tools students found easiest to use, which features they liked and used, and which tools didn’t work for certain scenarios.
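The analysis side of this can be quite lightweight. A sketch (with invented data, not our actual results) of summarising success rate and average completion time per tool:

```python
# Summarise usability sessions: success rate and mean time per tool.
# The session records below are invented examples.
from statistics import mean

# (tool, scenario, succeeded, seconds taken or time before giving up)
sessions = [
    ("One stop search", "find article", True, 95),
    ("One stop search", "find ebook", False, 240),
    ("Google Scholar", "find article", True, 60),
    ("Google Scholar", "find ebook", True, 150),
]

by_tool = {}
for tool, scenario, ok, secs in sessions:
    by_tool.setdefault(tool, []).append((ok, secs))

for tool, results in by_tool.items():
    rate = sum(ok for ok, _ in results) / len(results)
    avg = mean(secs for _, secs in results)
    print(f"{tool}: {rate:.0%} success, {avg:.0f}s average")
```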

Phase two
For the second phase we chose to concentrate on testing very specific elements of the search experience. So, for example, we looked at radio buttons and drop-down lists, and whether they should offer Author/Title/Keyword or Article/Journal title/Library catalogue. We also looked at the layout of results screens and the display of facets, asking students, for example, how they wanted date facets presented.

[Image: Discovery search mockup]

We wanted to carry out this research with some very plain wireframes, to test individual features without the distraction of website designs confusing the picture. We tend to use a wireframing tool called Balsamiq to create wireframes rapidly, and we ran through another sequence of testing, this time with a total of nine students in a series of online interviews, again using TeamViewer.

By wireframing you can quickly create several versions of a search box or results page and put them in front of users. It’s a good way of narrowing down the features that are worth taking through to full-scale prototyping. It’s much quicker than coding the feature, and once you’ve identified the features you want your developer to build, you have a ready-made wireframe to act as a guide for the layout and features that need to be created.

Phase three
The last phase is our prototype-building phase, and involves taking all the research and distilling it into a set of functional requirements for our project developer to build. In some of our projects we’ve shared the specification with students so they could agree which features they wanted to see, but in this project the first two phases had already given us a good idea of what they wanted in a baseline search tool, so we missed out that step. We did, however, split the functional requirements into two stages: a baseline set of requirements for the search box and the results, and then a section to capture the iterative requirements that would arise during prototyping. We aimed for a rolling cycle of build and test, although in practice we’ve set up sessions for when students are available and then gone with the latest version each time – getting students to test and refine the features and identify new features to build and test. New features get identified and added to what is essentially a product backlog (in Scrum methodology/terminology). A weekly team meeting prioritises the tasks for the developer to work on, and we go through a rolling cycle of develop/test.
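The backlog bookkeeping itself needs very little machinery. A minimal sketch (the fields and items are illustrative, not our actual tooling):

```python
# Minimal product-backlog model: the weekly meeting sets priorities,
# the developer always picks up the highest-priority open item.
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    priority: int          # 1 = highest, set at the weekly team meeting
    done: bool = False

backlog = [
    BacklogItem("Baseline search box and results page", 1),
    BacklogItem("Date facet on results", 2),
    BacklogItem("Export citations from results", 3),
]

def next_task(items):
    """Return the highest-priority item that is not yet done."""
    todo = [i for i in items if not i.done]
    return min(todo, key=lambda i: i.priority) if todo else None

print(next_task(backlog).title)  # -> "Baseline search box and results page"
```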

Reflections on the process
The process seems to have worked quite well. We’ve had really good engagement from students, and really good feedback that is helping us tease out what features we need to have in any library search tool. We’re about halfway through phase three and are aiming to finish the research by the end of July. The next step is to get the search tool up as a beta on the library website so a wider group of users can trial it.

I’ve been catching up this week with some of the things from last week’s UKSG conference, viewing some of the presentations that have been put up on YouTube at https://www.youtube.com/user/UKSGLIVE. There were a few of particular interest, especially those covering the Discovery strand.

The one that really got my attention was from Simone Kortekaas of Utrecht University, talking about their decision to move away from library discovery: they have shut down their own in-house-developed search system and are now looking at shutting down their WebOPAC. The presentation is embedded below.

I found it interesting to work through the process they went through: realising that most users were starting their search somewhere other than the library (mainly Google Scholar), and so deciding to focus on making it easier for users to access library content through that route, rather than trying to get users to come to the library and a library search tool. It recognises that other players (i.e. the big search engines) may do discovery better than libraries.

I think I’d agree with the principle that libraries need to be where their users are. So providing holdings to Google Scholar so that the ‘find it at your library’ feature works, and providing bookmarklet tools (e.g. http://www.open.ac.uk/library/new-tools/live-tools) to help users log in, are all important things to do. But whilst Google and Bing now seem to be better at finding academic content, they still lack Google Scholar’s ‘Library links’ feature and the ability to upload your holdings, which would let you offer the same form of ‘Find it at the…’ feature in those spaces. And with Google Scholar you always worry about how ‘mainstream’ Google considers it to be.
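The plumbing behind a ‘Find it at the…’ link is typically an OpenURL (Z39.88) passed to the institution’s link resolver. A rough sketch of constructing one (the resolver base URL and the citation details are invented):

```python
# Build an OpenURL 1.0 (Z39.88-2004) query for a journal article,
# of the kind Google Scholar's 'Library links' feature generates.
# The resolver URL and citation details are hypothetical.
from urllib.parse import urlencode

resolver = "https://resolver.library.example.ac.uk/"

params = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.atitle": "An example article title",
    "rft.jtitle": "Journal of Examples",
    "rft.volume": "12",
    "rft.spage": "34",
    "rft.issn": "1234-5678",
}

print(resolver + "?" + urlencode(params))
```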

It is an interesting strategic direction to take, and it means you need to carefully monitor (as Utrecht do) trends in user activity, and in particular changes in the major search engines, to make sure your resources can still be found through them. One consequence is that users are largely taken to publisher websites to access the content, and we know that the variations between these sites can cause users some difficulty and confusion. But it’s an approach to think about as we see where the trend for discovery takes us.

For a little while I’ve been trying to find ways of characterising the different generations, or ages, of library ‘search’ systems. By library ‘search’ I mean tools for finding resources in libraries (search as a locating tool) as well as the more recent trend (although online databases have been with us for a while) of search as a tool for finding information.

[Image: Library search ages]

I wanted a comparison that picked up on some of the features of library search but set them against another domain that is reasonably well known. Then I was listening to the radio the other day, there was some mention of the anniversary of the 45rpm single, and that made me wonder whether I could map the generations of library search onto the changes in formats in the music industry.

My attempt at mapping them across is illustrated here. There are some connections: discovery systems and streaming music services like Spotify are both cloud-hosted, and early printed music scores parallel the printed library catalogue, such as the original British Museum library catalogue. I’m not so sure about some of the stages in between, though; certainly the direction for both has been to make library/music content more accessible. But it seemed like a worthwhile thing to think about and try out. Maybe it works, maybe not.

We’ve been using Trello (http://trello.com) as a tool to help us manage the lists of tasks in the digital library/digital archive project we’ve been running. After looking at some of our existing tools (such as Mantis Bug Tracker), the team decided that they didn’t really want detailed tracking features, and didn’t feel that our standard project management tools (MS Project and the One Page Project Manager, or Outlook tasks) were quite what we needed to keep track of what is essentially a ‘product backlog’: a list of requirements that need to be developed for the digital archive system.

[Screenshot: Trello desktop]

Trello’s simplicity makes it easy to add and organise a list of tasks and break them down into categories, with colour-coding and the ability to drag tasks from one stream to another. Being able to share the board across the team and assign members to tasks is good. You can also set due dates and attach files, which we’ve found useful for attaching design and wireframe illustrations. You can set up as many boards as you need, so you can break down your tasks however you want, and the boards scroll left and right so you can have as many columns as you need.

We’ve been using it to group priority tasks into a list so the team know which tasks to concentrate on, and when a task is done the team member can update it so it can be checked and cleared off the list.

[Screenshot: Trello iPad app]

We’re mainly using Trello on the desktop, straight from the website, although there is also an iPad app that seems to work well. For a fairly small team with a single developer, Trello works well: it’s simple and easy to use, and doesn’t take a lot of effort to keep up to date. On a larger project you might want more sophisticated tools that can track progress and effort and produce burndown charts, for example, but as a simple way of tracking a list of tasks to be worked on, it’s a practical and useful project tool.
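Trello also has a straightforward REST API, so a board like ours can be read (or updated) programmatically. A minimal sketch – the key, token and board id are placeholders you would get from your own account:

```python
# List the columns and open cards on a Trello board via the REST API.
# YOUR_API_KEY, YOUR_TOKEN and BOARD_ID are placeholders.
import requests

auth = {"key": "YOUR_API_KEY", "token": "YOUR_TOKEN"}
board_id = "BOARD_ID"

lists = requests.get(
    f"https://api.trello.com/1/boards/{board_id}/lists",
    params={**auth, "cards": "open"},   # include each list's open cards
).json()

for lst in lists:
    print(lst["name"])
    for card in lst.get("cards", []):
        print("  -", card["name"])
```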

I’m always looking to find out about the tools and techniques people are using to improve their websites, and particularly how they go about testing the user experience (UX) to make sure they can make steady improvements to their sites.

So I’m a regular follower of some of the work going on in academic libraries in the US (e.g. Scott Young talking about A/B testing and experiments at Montana, and Matt Reidsma talking about holistic UX). It was particularly useful to find out about the process that led to the three headings on the home page of the Montana State University library website, and the stages they went through before settling on Find, Request and Services.

[Screenshot: Montana State University Library website homepage]

A step-by-step description showing the tools and techniques is a really valuable demonstration of how they went about the process and how they made their decision. It’s interesting to me how frequently libraries pick words to describe their services that don’t make sense to their users. But it’s really good to see an approach that largely lets users decide by testing what works, rather than asking users what they prefer.

Something else I came across the other week was the term ‘guerilla testing’ applied to testing the usability of websites (I think that probably came from the session on ‘guerilla research’ that Martin Weller and Tony Hirst ran the other week, which I caught up with via their blog posts/recording). That led on to ‘Guerilla testing’ in the Government Service Design Manual (there’s a slight irony, for me, in guerilla testing being documented – in some detail – in a design manual), but the way it talks through the process and its strengths and weaknesses is really useful, and it made me think about the contrast between that approach and the fairly careful and deliberate approach we’ve been taking with our work over the last couple of months. Some things to think about.

Reflections on our approach
It’s good to get an illustration from Montana of the value of the A/B testing approach. It’s a well-evidenced, standard approach to web usability, but it’s something we’ve found difficult to use in a live environment, as it makes our helpdesk people anxious that they can’t be sure which version of the website customers are seeing. So we’ve tended to use step-by-step iterations rather than straightforward A/B testing. But it’s something to revisit, I think.
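One thing that might soften the helpdesk concern is that A/B assignment is normally deterministic: hashing a stable user identifier means the same visitor always sees the same variant, so support staff can work out which variant any given user is in. A minimal sketch:

```python
# Deterministic A/B bucketing: the same user id always maps to the
# same variant, so a helpdesk can reproduce what a caller is seeing.
import hashlib

def variant(user_id: str, experiment: str = "homepage-headings") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(variant("student-42"))  # stable across visits for this user
```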

The piece of work we’re concentrating on at the moment looks at student needs from library search. We know it’s a pain point for students: it’s not ‘like Google’ and isn’t as intuitive as they feel it should be. So we’re trying to gain a better understanding of what we could do to make it a better experience (and what we shouldn’t do), working with a panel of students who want to help the library create better services.

The first round tested half a dozen typical library resource search scenarios against eight different library search tools (some ours and some from elsewhere) with around twenty different users. We did all our testing as remote 1:1 sessions using TeamViewer software (although you could probably use Skype or a number of alternatives) and were able to record the sessions and have observers/note-takers. We’ve assessed the success rate for each scenario against each tool, and measured the average time it took to complete each task with each tool (or the time before people gave up). These are giving us a good idea of what works and what doesn’t.

For the second round a series of wireframes was created using Balsamiq to test different variants of search boxes and results pages, and we ran a further set of tests, again with the panel and again remotely. We’ve now got a pretty good idea of some of the things that look like they will work, so we’ve started to prototype a real search tool. We’re now going to be doing a series of iterative development cycles, testing tools with students and then refining them. That should greatly improve our understanding of what students want from library search and allow us to experiment with how we can build the features they want.

Great though touch-screens on tablets and smartphones are, one of their drawbacks is that typing on them isn’t a particularly nice experience. It’s all too easy to type the wrong character, and one of the things that is always frustrating about typing notes on an iPad is how much time you have to spend correcting what you’ve typed. So I was really interested to see a tweet today about a technology, around for a little while now, that makes raised buttons appear from the surface of a touch screen when needed. I checked out the article from Business Insider and browsed around for other information about the technology, including a TechCrunch blog post and the website of Tactus Technology, the company developing the idea, and it looks like a really interesting idea that could make typing on a tablet a much nicer experience and avoid having to cart around peripherals such as add-on keyboards.

Essentially the technology seems to consist of a fluid layer that can generate raised buttons as and when needed. It’s quite intriguing to see buttons suddenly morph out of a flat screen. What you get is a small raised button that looks like it will be easier to touch and will reduce the chance of mistaken keystrokes. I’d be curious to find out what the buttons actually ‘feel’ like, but they look like being a really useful feature.

Ideally this technology would be integrated into the design of the smartphone or tablet and driven by the software, although I see they’ve also worked on an interim approach using a case. It will be really interesting to see how they get on with getting the technology integrated into mainstream devices, and when we might see the first production examples. It also makes me wonder whether the fine definition of the technology would let you develop a tablet that could display braille.

iBeacon is an interesting piece of location-based Apple technology, and I started wondering how useful it might be in a library context. Essentially (as this article from the Guardian describes) it is being sold as a micro-broadcast technology where transmitters can communicate with nearby smartphones. So applications have been proposed to let shops send you messages about special offers as you walk past, for example – a sort of advertising sandwich-board, I suppose.

But that technology might be interesting in a library context. You could see it directing you to an available public PC, or telling you when you enter a library that something you have reserved is ready for collection (or reminding you of things due for return). You could envisage it flagging up new resources as you walk round different sections of a library, or telling you about library events related to that section. Browsing the fiction, might you be interested in knowing about the ebooks that are available, or about the book group that meets?

I also wonder how it might relate to the RFID tags now in many libraries: could you combine the technologies so your phone directs you round the library to the book you want, and perhaps lets you borrow it without ever going near the self-service machines or a library checkout desk?

Most of the time my interest is in making sure that users of websites can get access to an appropriate version of the site, or that the site works on a variety of different devices. But as websites become more personalised, my version of a website might look different from yours.

But one of the other projects I’m involved with is looking at web archiving of University websites, mainly internal ones that aren’t being captured by the Internet Archive or the UK Web Archive. Personalisation, and the different forms that websites can take, is one of the really big challenges for capturing websites. So I was interested to read a recent article in D-Lib Magazine: ‘A method for identifying personalised representations in web archives’ by Kelly, Brunelle, Weigle and Nelson, D-Lib Magazine, November/December 2013, Vol. 19, No. 11/12, doi:10.1045/november2013-kelly, http://www.dlib.org/dlib/november13/kelly/11kelly.html

The article describes how the user-agent string in mobile browsers is used to serve different versions of webpages, with some good examples from CNN of the completely different representations you might see on iPhones, desktops and Android devices. The paper goes on to talk through some possible solutions for identifying different versions, and suggests a modification of the Wayback Machine engine to let the user choose which user-agent’s version of a page to view from an archive. Combined with the Memento approach, which offers time-based versions of a website, it’s interesting to see work that starts to look at ways of capturing the increasingly fragmented and personalised nature of the web.
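You can see the phenomenon the paper describes for yourself by requesting the same page with different User-Agent strings; a quick sketch (the URL and abbreviated UA strings are just examples):

```python
# Fetch the same URL as 'desktop' and 'iphone' clients and compare
# the size of the HTML returned - many sites serve different
# representations based on the User-Agent header.
import requests

url = "https://edition.cnn.com/"
agents = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "iphone": "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)",
}

for name, ua in agents.items():
    resp = requests.get(url, headers={"User-Agent": ua})
    print(name, len(resp.text), "bytes of HTML")
```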
