
For a little while I’ve been trying to find ways of characterising the different generations or ages of library ‘search’ systems. By library ‘search’ I mean tools to find resources in libraries (search as a locating tool) as well as the more recent trend (although online databases have been with us for a while) of search as a tool to find information.

I wanted a comparison that picked up on some of the features of library search but set them against another domain that was reasonably well known. Then I was listening to the radio the other day and there was some mention that it was the anniversary of the 45rpm single, and that made me wonder whether I could map the generations of library search against the changes in formats in the music industry.

My attempt at mapping them across is illustrated here. There are some connections: discovery systems and streaming music services like Spotify are both cloud-hosted, and early printed music scores line up with the printed library catalogue, such as the original British Museum library catalogue. I’m not so sure about some of the stages in between, though certainly the direction for both has been to make library/music content more accessible. But it seemed like a worthwhile comparison to think about and try out. Maybe it works, maybe not.

We’ve been using Trello (http://trello.com) as a tool to help us manage the lists of tasks in the digital library/digital archive project that we’ve been running. After looking at some of our existing tools (such as Mantis Bug Tracker), the team decided that they didn’t really want the detailed tracking features, and didn’t feel that our standard project management tools (MS Project and the One Page Project Manager, or Outlook tasks) were quite what we needed to keep track of what is essentially a ‘product backlog‘: a list of requirements that need to be developed for the digital archive system.

Trello’s simplicity makes it easy to add and organise a list of tasks and break them down into categories, with colour-coding and the ability to drag tasks from one stream to another. Being able to share the board across the team and assign members to tasks is good. You can also set due dates and attach files, which we’ve found useful for attaching design and wireframe illustrations. You can set up as many different boards as you need, so you can break down your tasks however you want. The boards scroll left and right, so you can have as many columns as you need.

We’ve been using it to group priority tasks into a list so the team knows which tasks to concentrate on; when a task is done, the team member updates the task message so it can be checked and cleared off the list.
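As an aside, Trello also exposes a simple REST API, so the same cards can be created or updated from a script as well as through the web interface. Here’s a minimal sketch in Python (the key, token and list ID are placeholders you’d generate from your own Trello account, and the helper name is mine, not Trello’s):

```python
import requests

API_KEY = "your-api-key"      # generated from your Trello account (placeholder)
API_TOKEN = "your-api-token"  # generated alongside the key (placeholder)
LIST_ID = "your-list-id"      # the ID of the board list to add cards to (placeholder)

def add_card(name, desc=""):
    """Create a card in the given list via Trello's REST API."""
    resp = requests.post(
        "https://api.trello.com/1/cards",
        params={
            "key": API_KEY,
            "token": API_TOKEN,
            "idList": LIST_ID,
            "name": name,
            "desc": desc,
        },
    )
    resp.raise_for_status()
    return resp.json()  # the created card, including its "id" and "url"

card = add_card("Index new collection", "Backlog item for the digital archive system")
print(card["url"])
```

We haven’t needed this ourselves for a small board, but it’s handy to know the option is there if you ever want to bulk-load a backlog or pull tasks out for reporting.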

We’re mainly using Trello on the desktop straight from the website, although there is also an iPad app that seems to work well. For a fairly small team with just a single developer, Trello works well: it’s simple and easy to use, and doesn’t take a lot of effort to keep up to date. If you had a larger project you might want more sophisticated tools that can track progress and effort and produce burndown charts, for example, but as a simple way of tracking a list of tasks to be worked on, it’s a practical and useful tool.

I’m always looking to find out about the tools and techniques that people are using to improve their websites, and particularly how they go about testing the user experience (UX) to make sure that they can make steady improvements to their sites.

So I’m a regular follower of some of the work going on in academic libraries in the US (e.g. Scott Young talking about A/B testing and experiments at Montana, and Matt Reidsma talking about Holistic UX). It was particularly useful to find out about the process that led to the three headings on the home page of Montana State University’s library website, and the stages they went through before settling on Find, Request and Services. A step-by-step description of the tools and techniques is a really valuable demonstration of how they went about the process and reached their decision. It’s interesting to me how frequently libraries pick words to describe their services that don’t make sense to their users. But it’s really good to see an approach that largely lets testing decide what works, rather than asking users what they prefer.

Something else I came across the other week was the term ‘guerilla testing’ applied to testing the usability of websites (I think that probably came from the session on ‘guerilla research’ that Martin Weller and Tony Hirst ran the other week, which I caught up with via their blog posts/recording). That led on to ‘Guerilla testing‘ in the Government Service Design Manual (there’s a slight irony for me in guerilla testing being documented, in some detail, in a design manual), but the way it talks through the process and its strengths and weaknesses is really useful, and it made me think about the contrast between that approach and the fairly careful and deliberate approach we’ve been taking with our work over the last couple of months. Some things to think about.

Reflections on our approach
It’s good to get an illustration from Montana of the value of the A/B testing approach. It’s a well-evidenced and standard approach to web usability, but it’s something we’ve found difficult to use in a live environment, as it makes our helpdesk people anxious that they aren’t clear which version of the website customers might be seeing. So we’ve tended to use step-by-step iterations rather than straightforward A/B testing. But it’s something to revisit, I think.
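For context, the mechanics behind an A/B test are straightforward: each visitor is assigned to one variant or the other, usually deterministically so the same visitor always sees the same version, and you compare how the variants perform. A minimal sketch of deterministic assignment (the function name and parameters are my own illustration, not from any particular tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, percent_b: int = 50) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing user_id together with the experiment name means the same
    user always sees the same variant for a given experiment, but is
    bucketed independently across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number in 0..99
    return "B" if bucket < percent_b else "A"

# Example: route a visitor to the old or new homepage layout
variant = assign_variant(user_id="student-1234", experiment="homepage-headings")
```

One useful property of deterministic bucketing is that support staff can look up which variant a given user will see, which goes some way towards easing the helpdesk concern above.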

The piece of work we’re concentrating on at the moment is looking at student needs from library search. We know it’s a pain point for students: it’s not ‘like Google’ and isn’t as intuitive as they feel it should be. So we’re trying to gain a better understanding of what we could do to make it a better experience (and what we shouldn’t do), and we’re working with a panel of students who want to work with the library to help create better services.

The first round tested half a dozen typical library resource search scenarios against eight different library search tools (some from us and some from elsewhere) with around twenty different users. We did all our testing as remote 1:1 sessions using TeamViewer software (although you could probably use Skype or a number of alternatives) and were able to record the sessions and have observers/note-takers. We’ve assessed the success rate for each scenario against each tool, and also measured the average time it took to complete each task with each tool (or the time before people gave up). These are giving us a good idea of what works and what doesn’t.
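To make that analysis concrete, here’s a minimal sketch of how this sort of session data can be scored in Python (the records and field names are invented for illustration, not our actual results):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical session records: (tool, scenario, completed?, seconds spent).
# A failed attempt records the time before the user gave up.
sessions = [
    ("ToolA", "find known journal article", True, 95),
    ("ToolA", "find known journal article", False, 240),
    ("ToolB", "find known journal article", True, 60),
]

# Group results by (tool, scenario) so each combination can be scored
by_cell = defaultdict(list)
for tool, scenario, completed, seconds in sessions:
    by_cell[(tool, scenario)].append((completed, seconds))

for (tool, scenario), results in sorted(by_cell.items()):
    success_rate = mean(1 if done else 0 for done, _ in results)
    avg_time = mean(secs for _, secs in results)  # includes time before giving up
    print(f"{tool} / {scenario}: {success_rate:.0%} success, {avg_time:.0f}s average")
```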

For the second round we created a series of wireframes in Balsamiq to test different variants of search boxes and results pages, and ran a further set of tests with the panel, again remotely. We’ve now got a pretty good idea of some of the things that look like they will work, so we’ve started to prototype a real search tool. We’re now going to run a series of iterative development cycles, testing tools with students and then refining them. That should greatly improve our understanding of what students want from library search and allow us to experiment with how we can build the features they want.
