
I think it was the quiet concentration that made the first impression on me when I went into a room where a group of library staff were busy carrying out a cognitive mapping exercise.  Everyone was engrossed in the activity, heads down, trying out a technique that was new to most of them.

This was part of a different type of staff meeting – with @mirya and @d_r_jenkins running a great session on UX and ethnography and introducing people to three different UX techniques: cognitive mapping, directed storytelling and love-letters/break-up letters.  With only an hour to run the introduction and try out the techniques, time was short, but it was enough to give people a flavour of the power of these approaches.

It’s been a bit of a journey to get to this point.  About eighteen months ago we identified ethnographic techniques as potentially immensely valuable and something we needed to know more about, experiment with and use as part of our UX practice.  The UXLibs conferences and the presentations and blogs about the topic got us up to speed enough to see the potential and to start to talk to people here about it.  Primarily we’ve been looking at the approaches from the perspective of how they can be used in our digital service development work around websites, but the wider potential is clear.  The Futurelib initiative at Cambridge has been really useful in demonstrating the potential of the techniques.  So when the chance came to send some people to a UX day organised by a neighbouring institution with Andy Priestner (@andytraining), it was a great opportunity to spread knowledge about the techniques across the library.

We’re already using these techniques in online sessions with students looking at the future direction of our library websites as part of our digital services work.  Our Research Support team are using them with research students in face-to-face sessions.  And in the session with library staff people quickly started to see other areas where the techniques could be used, in work with tutors perhaps.

It was great to see such engagement and enthusiasm with the approach, and really interesting to see the different maps that people drew in the cognitive mapping exercise.  Given that we are a group of staff using a standard set of equipment (PCs and iPads, for example) and tools, it was remarkable how much variation there was in the maps.  That gives a lot of food for thought for the digital capabilities project that is just getting started.


I’m not sure how many people will be familiar with the work of Oliver Postgate, and specifically his stop-motion animation series, The Clangers.  One of the characters in the series is Major Clanger, who is an inventor.

Image from Kieran Lamb via https://flic.kr/p/dqthAU

The character always comes to mind when I think about development approaches, as an example of a typical approach to user engagement.  The scene opens with a problem presenting itself.  Major Clanger sees the problem and thinks he has an idea to solve it, so he disappears off and shuts himself away in his cave.  Cue lots of banging and noises as Major Clanger is busy inventing a device to solve ‘the problem’.  Then comes the great unveiling of the invention, often accompanied by some bemusement from the other Clangers about what the new invention actually is, how it works and what it is supposed to do.  Often the invention turns out to be not quite what was wanted, or has unforeseen consequences.  And that approach seems to me to characterise how we often ‘do’ development.  We see a problem, we may ask users in a focus group or workshop to define their requirements, but then all too often we go and, like Major Clanger, build the product in complete isolation and then unveil it to users in what we describe as usability testing.  And all too often they say ‘yeah, that’s not quite what we had in mind’ or ‘well that would have been good when we were doing X, but now we want something else’.

So how do we break that circle and solve our users’ problems in a better development style, one that builds products that users can and will use?  That’s where I think a more co-operative model of user engagement comes in.  It starts with a different model of user engagement, where users are involved throughout the requirements, development and testing stages.  That’s an approach we’ve started to call ‘co-design’, and have piloted during our discovery research.

It starts with a Student Panel of students who agree to work with us on activities to improve library services.  We recruit cohorts of a few dozen students with a commitment to carry out several activities with us during a defined period.  We outline the activity we are going to undertake and the approach we will take, and make sure we have the necessary research/ethical approvals for the work.

For the discovery research we went through three stages:

  1. Requirements gathering – in this case testing a range of library search tools with a series of exercises based on typical user search activities.  This helped to identify the features users typically wanted to see, or did not want to see.  For example, at this stage we were able to rule out using the ‘bento box’ results approach that has been popular at some other libraries.
  2. Feature definition – a stage that allows you to investigate some specific features in detail.  In our case we used wireframes of search box options and layouts and tested them with a number of Student Panel members, ruling out tabbed search approaches and directing us much more towards a very simple search box without tabs or drop-downs.  This stage lets you test a range of different features without the expense of code development, essentially letting you refine your requirements in more detail.
  3. Development cycles – this step took the form of a sequence of build and test cycles, creating a search interface from scratch using the requirements identified in stages one and two, and then refining it, testing specific new features and discarding or retaining them depending on user reactions.  This involved working with a developer to build the site and then working through a series of development and test ‘sprints’, testing features identified either in the early research or arising from each of the cycles.

These steps took us to a viable search interface and built up a pool of evidence that we used to set up and customise Primo Library Search.  That work led to further engagement with users as we went through a fourth stage of usability testing the interface and making further tweaks and adjustments in the light of user reactions.  Importantly, it’s an ongoing process with a regular cycle of testing with users to continually improve the search tool.  The latest testing is mainly around changes to introduce new corporate branding, but includes other updates that can be made to the setup or the CSS of the site in advance of the new branding being applied.

The ‘co-design’ model also fits with a more evolutionary or incremental approach to website development, a model that usability experts such as Nielsen Norman Group often recommend because users generally want a familiar design rather than a radical redesign.  Continuous improvement systems typically expect incremental improvements as the preferred approach.  Yet the ‘co-design’ model could equally be deployed for a complete site redesign: starting from scratch with more radical design and structural changes, and then using the incremental approach to refine them into a design that meets user needs and overcomes the likely resistance from users familiar with the old site, by delivering an improved user experience that users can quickly get comfortable with.

In the early usability tests we ran for the discovery system we implemented earlier in the year, one of the aspects we looked at was the search facets.  Included amongst the facets is a feature to let users limit their search by a date range.  That sounds reasonably straightforward: filter your results by the publication date of the resource, narrowing your results down by putting in a range of dates.  But one thing that emerged during the testing is that there’s a big assumption underlying this concept.  During the testing a user tried to use the date range to restrict results to journals for the current year and was a little baffled why the search system didn’t work as they expected.  Their expectation was that by putting in 2015 it would show them journals in that subject where we had issues for the current year.  But the system didn’t know that journals that were continuing, and therefore had an open-ended date range, were available for 2015, because the metadata didn’t include the current year, just a start date for the subscription period.  So the system didn’t ‘know’ that the journal was available for the current year.  And that exposed for me the gulf that exists between user and library understanding, and how our metadata and systems don’t seem to match user expectations.  That usability testing session came to mind when reading the following blog post about linked data.

I would really like my software to tell the user if we have this specific article in a bound print volume of the Journal of Doing Things, exactly which of our location(s) that bound volume is located at, and if it’s currently checked out (from the limited collections, such as off-site storage, we allow bound journal checkout).

My software can’t answer this question, because our records are insufficient. Why? Not all of our bound volumes are recorded at all, because when we transitioned to a new ILS over a decade ago, bound volume item records somehow didn’t make it. Even for bound volumes we have — or for summary of holdings information on bib/copy records — the holdings information (what volumes/issues are contained) are entered in one big string by human catalogers. This results in output that is understandable to a human reading it (at least one who can figure out what “v.251(1984:Jan./June)-v.255:no.8(1986)”  means). But while the information is theoretically input according to cataloging standards — changes in practice over the years, varying practice between libraries, human variation and error, lack of validation from the ILS to enforce the standards, and lack of clear guidance from standards in some areas, mean that the information is not recorded in a way that software can clearly and unambiguously understand it.  From https://bibwild.wordpress.com/2015/11/23/linked-data-caution/ the Bibliographic Wilderness blog

Processes that worked for library catalogues and librarians, in this case the description v.251(1984:Jan./June)-v.255:no.8(1986), need translating before a non-librarian or a computer can understand what they mean.
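To make the date-range problem above concrete, here is a minimal sketch in Python (the field names and data are invented for illustration, not our actual Primo configuration) of the kind of coverage check a search system needs to make.  If a continuing journal is only recorded with a start year, the system has to treat the holdings as open-ended; otherwise a filter for the current year will wrongly exclude a live subscription.

```python
from datetime import date

def covers_year(start_year, end_year, wanted_year):
    """Return True if a holdings range covers the requested year.

    end_year=None represents a subscription that is still running
    (an open-ended range), so it is treated as extending to the
    current year.
    """
    if end_year is None:
        end_year = date.today().year
    return start_year <= wanted_year <= end_year

# A continuing subscription recorded only with a start year is matched
# for the current year when the range is treated as open-ended...
print(covers_year(1984, None, 2015))   # True
# ...whereas a genuinely closed range is correctly excluded.
print(covers_year(1984, 1986, 2015))   # False
```

Without that explicit open-ended interpretation, a missing end date just reads as “unknown” and the journal silently drops out of the filtered results, which is exactly what the user hit in the test.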

It’s a good and interesting blog post and raises some important questions about why, despite the seemingly large number of identifiers in use in the library world (or maybe because of it), it is so difficult to pull together metadata and descriptions of material and consolidate versions together.  It’s a problem that affects a range of work we try to do: from discovery systems, where we end up trying to normalise data from different systems to reduce what appear to users to be duplicate entries, to usage data, where trying to consolidate usage of a particular article or journal becomes impossible when versions of that article are available from different providers, from institutional repositories or from different URLs.

I’m always looking to find out about the tools and techniques that people are using to improve their websites, and particularly how they go about testing the user experience (or UX) to make sure that they can make steady improvements in their site.

So I’m a regular follower of some of the work that is going on in academic libraries in the US (e.g. Scott Young talking about A/B testing and experiments at Montana, and Matt Reidsma talking about Holistic UX).  It was particularly useful to find out about the process that led to the three headings on the home page of the Montana State University library website, and the stages that they went through before they settled on Find, Request and Services.  A step-by-step description showing the tools and techniques is a really valuable demonstration of how they went about the process and how they made their decision.  It is interesting to me how frequently libraries seem not to pick words that make sense to their users when describing their services.  But it’s really good to see an approach that largely gets users to decide what works by testing it, rather than asking users what they prefer.

Something else that I came across the other week was the term ‘guerilla testing’ applied to testing the usability of websites (I think that probably came from the session on ‘guerilla research’ that Martin Weller and Tony Hirst ran the other week, which I caught up with via their blog posts/recording).  That led on to ‘Guerilla testing‘ in the Government Service Design Manual (there’s a slight irony for me in guerilla testing being documented, in some detail, in a design manual), but the way it talks through the process, its strengths and weaknesses is really useful, and it made me think about the contrast between that approach and the fairly careful and deliberate approach that we’ve been taking with our work over the last couple of months.  Some things to think about.

Reflections on our approach
It’s good to get an illustration from Montana of the value of the A/B testing approach.  It’s a well-evidenced and standard approach to web usability, but it is something that we’ve found difficult to use in a live environment because it makes our helpdesk people anxious that they can’t be sure which version of the website a customer is seeing.  So we’ve tended to use step-by-step iterations rather than straightforward A/B testing.  But it’s something to revisit, I think.
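One way round the helpdesk concern would be deterministic variant assignment.  This is a minimal sketch of the general idea, not anything we have implemented: hashing a user identifier means the same user always gets the same variant, so support staff could recompute which version a caller is looking at.

```python
import hashlib

def ab_variant(user_id: str, experiment: str = "search-box-layout") -> str:
    """Deterministically assign a user to variant A or B of an experiment.

    Because the assignment is a pure function of the user id and the
    experiment name, it is stable between visits and can be recomputed
    by anyone (for example, helpdesk staff) who knows the user id.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(ab_variant("student-12345"))  # always the same answer for this user
```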

The piece of work that we’re concentrating on at the moment is to look at student needs from library search.  We know it’s a pain point for students; we know it’s not ‘like Google’ and isn’t as intuitive as they feel it should be.  So we’re trying to gain a better understanding of what we could do to make it a better experience (and what we shouldn’t do), and we’re working with a panel of students who want to work with the library to help create better services.

The first round tested half a dozen typical library resource search scenarios against eight different library search tools (some from us and some from elsewhere) with around twenty different users.  We did all our testing as remote 1:1 sessions using TeamViewer software (although you could probably use Skype or a number of alternatives) and were able to record the sessions and have observers/note-takers.  We’ve assessed the success rate for each scenario against each tool and also measured the average time it took to complete each task with each tool (or the time before people gave up).  These are giving us a good idea of what works and what doesn’t.
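The summary measures themselves are simple.  As a rough sketch (with invented session records, not our actual data), the per-tool, per-scenario figures can be worked out like this:

```python
from statistics import mean

# Invented example data: one record per test session, as
# (tool, scenario, task completed?, seconds taken or time before giving up).
sessions = [
    ("Tool A", "find a known journal article", True, 95),
    ("Tool A", "find a known journal article", False, 240),
    ("Tool B", "find a known journal article", True, 70),
    ("Tool B", "find a known journal article", True, 110),
]

def summarise(records):
    """Print the success rate and mean task time for each (tool, scenario) pair."""
    groups = {}
    for tool, scenario, completed, seconds in records:
        groups.setdefault((tool, scenario), []).append((completed, seconds))
    for (tool, scenario), results in sorted(groups.items()):
        success_rate = sum(1 for done, _ in results if done) / len(results)
        avg_time = mean(seconds for _, seconds in results)
        print(f"{tool} / {scenario}: {success_rate:.0%} success, {avg_time:.0f}s on average")

summarise(sessions)
```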

For the second round a series of wireframes were created using Balsamiq to test different variants of search boxes and results pages.  So we ran a further set of tests again with the panel and again remotely.  We’ve now got a pretty good idea of some of the things that look like they will work so have started to prototype a real search tool.  We’re now going to be doing a series of iterative development cycles, testing tools with students and then refining them.  That should greatly improve our understanding of what students want from library search and allow us to experiment with how we can build the features they want.

It was great to see this week that the latest opportunity on the Jisc Elevator website is one for students to pitch ideas about new technology.  It’s really nice to see something that involves students in coming up with ideas, backed up with a small amount of money to kickstart things.

Using students as co-designers for library services, particularly in relation to websites and technology, is something that I’m finding more and more compelling.  A lot of the credit for that goes to Matthew Reidsma from Grand Valley State University in the US, whose blog ‘Good for whom?‘ is pretty much essential reading if you’re interested in usability and improving the user experience.  I’m starting to see that getting students involved in co-designing services is the next logical step on from usability testing.  So instead of a process where you design a system and then test it on users, you involve them from the start: asking them what they need, maybe then getting them to give feedback on solution designs and specifications, and then going through the design process of prototyping, testing and iterating, by getting them to look at every stage.  That’s something an agile development methodology particularly lends itself to.  Examples where people have started to employ students on the staff to help with getting that student ‘voice’ are also starting to appear.

There are some examples of fairly recent projects where universities have been getting students (and others outside the institution) involved in designing services: for example, the Collaborate project at Exeter that looked at using students and employers to design ’employability focussed assessments’.  There is also Leeds Metropolitan with their PC3 project on the personalised curriculum, and Manchester Metropolitan’s ‘Supporting Responsive Curricula’ project.  And you can add to that list of examples the Kritikos project at Liverpool that I blogged about recently.

For us, with our focus on websites and improving the user experience, we’ve been working with a group of students to help us design some tools for a more personalised library experience.  I blogged a bit about it earlier in the year.  We’re now well into that programme of work and have put together a guest blog post for Jisc’s LMS Change project blog, ‘Personalisation at the Open University’.  Thanks to Ben Showers from Jisc and Helen Harrop from the LMS Change project for getting that published.  Credit for the work on this (and the text for the blog post) should go to my colleagues Anne Gambles, Kirsty Baker and Keren Mills.  Having identified some key features to build, we are well into getting the specification for the work finalised and will start building the first few features soon.  It’s been an interesting first foray into working with students as co-designers and one I think has major potential for how we do things in the future.

It has long intrigued me why libraries (or maybe librarians) like to use different words from the ones our users would commonly use.  The issue/discharge, check-in/checkout, return/borrow terminology has always seemed to me to be at odds with how users think of the processes.  In most cases in my experience library users (borrowers, readers, patrons…) would say ‘I want to take this out’ or ‘I want to bring this back’, but I’ve never yet seen any library that uses those words to describe the processes.

And we’ve carried on this process into the web sphere, as this recent report by John Kupersmith from Berkeley, ‘Library Terms that Users Understand’, clearly identifies.  Looking at 51 usability studies he has picked out several terms that users simply don’t understand (shown in the image on the right).  Terms like database, periodical, serial, and resource are included in the list, and they are all familiar from usability tests we’ve done ourselves.  Database is one that I always find particularly interesting.  To most people a database is something like Microsoft Access, and few people outside libraries would ever consider it to be a collection of library stuff.

It’s good to see recommendations about the use of natural language such as ‘Find’ in the report.  That certainly matches what we have found from our own work, and we’ve ended up going with ‘Find’ for our search feature on the home page of our website.  Journals, articles and ebooks may not be quite the best terms to pair with Find, though.

I am slightly surprised to find that users aren’t that sure about the term ‘Library catalog’, but maybe I shouldn’t be, as I think that libraries themselves are somewhat confused about what is in a library catalogue these days, in the age of knowledge bases and discovery systems.  Is your catalogue just a list of printed materials, or a list of everything owned or licensed by the library?  I wonder whether users were any clearer about what a library catalogue was in the past.

We’ve used a couple of different usability tools at various stages through the library website project.  We’re fortunate in having access to a high level of expertise and advice/guidance from colleagues at the University’s Institute of Educational Technology.  This means that we have access to some advanced usability tools such as eye-tracking software.

We’ve used two different tools.  Firstly, Morae usability software, which we have on a laptop and use to track and record mouse movements and audio commentaries.  This is quite portable and allows us to do some small-scale testing.  We most recently used it for some of the search evaluation work.  Its limitation is that where people move the mouse may be very different from where they are looking on screen.

At a workshop I was at the other day, people talked about users scanning web pages in an ‘F’ pattern: they scan across in two horizontal lines followed by one vertical line down the left.  This implies that they will pick up items in the left-hand column and across the top quite easily.  This was something reported by Jakob Nielsen here back in 2006, with some sample heatmap screenshots in his article.

For the latest testing, we’ve been able to use the Tobii eye-tracking system in the IET Usability labs, which, as the name suggests, tracks the user’s eye movements around the screen and gives a much richer indication of how they are interacting with the website.  You can show where users are looking as a heatmap, as in Jakob Nielsen’s examples, or alternatively you can show gaze opacity.  This shows where users are looking in white: the more opaque the white, the more time their gaze is concentrated on that location, so places that aren’t being viewed show up as black.
Website gaze opacity image

The example shown is from the latest library website testing, and you can quite clearly see the same sort of ‘F’-shaped scanning behaviour on one of the sub-pages of the website.  Looking through some of the other pages, it isn’t always quite as clear cut.
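Conceptually, a gaze opacity view is just fixation time accumulated per region of the page and scaled so the most-viewed areas come out brightest.  Here is a small sketch of that idea with invented fixation data (this illustrates the principle only, not how the Tobii software itself processes its recordings):

```python
# Accumulate fixation durations into a coarse grid over the page, then
# scale to 0-255 so heavily viewed cells come out near-white and
# unviewed cells stay at 0 (black) - the gaze opacity idea.
GRID_W, GRID_H = 8, 6            # grid cells (invented resolution)
PAGE_W, PAGE_H = 1280, 960       # page size in pixels (invented)

# Invented fixations: (x, y, duration in milliseconds)
fixations = [(120, 80, 400), (600, 90, 250), (130, 300, 350), (140, 500, 300)]

grid = [[0.0] * GRID_W for _ in range(GRID_H)]
for x, y, ms in fixations:
    col = min(int(x / PAGE_W * GRID_W), GRID_W - 1)
    row = min(int(y / PAGE_H * GRID_H), GRID_H - 1)
    grid[row][col] += ms

peak = max(max(row) for row in grid) or 1.0
for row in grid:
    print(" ".join(f"{int(cell / peak * 255):3d}" for cell in row))
```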

Keren, my colleague on the project team who has been running the usability testing stage, is currently going through the images and the transcripts/notes and will pull out some recommendations for modifying the website to address any areas that users found difficult to use.  These recommendations then get reviewed by our Website Editorial Group and prioritised: either they are high priority and need to be fixed before the site can go live, or they are of lesser priority and can be resolved over a longer period.

The value of this sort of testing is high, as it isn’t until users actually engage with your website and try to use it in practice that you really find out how well it works.  It is time-consuming, in that there’s some organising to do to find people to take part in the testing (in our case going through a research panel to get approval and then emailing students).  It also takes time to write and fine-tune the scripts that will be used for the testing, then to carry out the testing and analyse the outputs, but that time is well spent if you want to understand how easy users will find your site to use.

In the last few weeks I’ve taken part in a couple of activities that involved the use of ‘personas’.  If you’ve not come across personas before, a persona is a made-up person, with a name and a personal history, representing one of your key client groups.  [There’s some good information about personas here on the usability.gov website.]  Personas are a really useful service design and usability tool as they allow you to visualise your service through the eyes of one of your users.

Typically your persona would have a name, a photograph (because it makes them easier to relate to), and a back-story: educational background, employment status, personal circumstances, and aspirations and motivations, for example.  It’s also good to record things such as whether they use social networking and what sorts of things they are interested in.  Generally you’d also want to try to categorise them with a short phrase that makes it easier to discuss them.  Often you’d create a set of sheets or cards with the details of each persona.
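If you wanted to keep personas somewhere more structured than a set of cards, the record itself is simple.  Here is a sketch in Python (the field names and the example persona are purely illustrative, not ones we use):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Persona:
    """One made-up but representative user, as described above."""
    name: str
    tagline: str                  # the short phrase used to discuss them
    photo: str                    # path or URL of the picture on the card
    education: str
    employment: str
    circumstances: str
    motivations: str
    uses_social_media: bool
    interests: List[str] = field(default_factory=list)

jo = Persona(
    name="Jo",
    tagline="Time-poor part-time postgraduate",
    photo="personas/jo.jpg",
    education="Studying part-time for a masters degree",
    employment="Works full-time as a nurse",
    circumstances="Studies in the evenings, mostly on a tablet",
    motivations="Wants to find course readings quickly between shifts",
    uses_social_media=True,
    interests=["podcasts", "running"],
)
print(jo.tagline)
```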

In the two exercises I’ve recently taken part in, personas were used at two very different stages of the website design process.  The first was a demonstration of their value in website design and usability, looking quite specifically at a website to see which elements of a particular page were going to be of use to different personas (and which elements were not going to be relevant).  To use the personas you have to put yourself in their place and look at your website through their eyes: what are they looking for, and what is their level of experience or knowledge, for example.  It does throw up some really useful insights into how your website is viewed by users.

In the second exercise, the personas were used at a much earlier stage of the design process, to help look at priorities for the future direction of the Virtual Learning Environment by thinking about what the attitudes of different personas would be to a set of statements about developments.  That was quite a useful exercise, as it lets you think about how your users and potential users will react to or view things you might develop.  Hopefully it would stop you from developing services or features that users wouldn’t want.

Personas have been used for a while to look at both websites and the VLE at the University.  To an extent they have been created around market segments and with students/potential students as the primary focus, but there are plans to develop others and to make personas much more widely used as a design tool.  So although the ones that exist are of use when planning and developing a library website, there are a few missing for our purposes as we’ve also got staff and researchers to consider.

Although it’s now a bit late in our website design process to use personas for the design stages I’m certainly thinking about using them as a tool to check and review the site, and will see whether we can use them much more in future.  A useful tool for website design.

We’re in the middle of a set of usability tests as part of our work on the new library website, and my colleague who is running this work suddenly came out with the comment ‘the user is not broken’.  It wasn’t a phrase I’d come across, but it seemed to sum up perfectly the right attitude to why we do usability testing.

I’m told that the phrase dates back to a meme in 2006 from Karen G. Schneider (there is a copy of her presentation here and a blog post here):

The user is not broken.

Your system is broken until proven otherwise.

That vendor who just sold you the million-dollar system because “librarians need to help people” doesn’t have a clue what he’s talking about, and his system is broken, too.

It seems to me exactly the right attitude to bear in mind when you are testing your website.  You have to build your website so it can be used by your users, and you shouldn’t have to provide them with training to use it.  So if usability testing identifies features that users cannot easily use, then those features are broken.  And that is a tough thing to accept.  The standard library approach (and I’m not sure whether it is peculiar to libraries and librarians) is that we will provide helpsheets, guidance, training sessions and signs to help users, i.e. we try to ‘fix the user’ as if they were broken.  But if you look at the commercial web world (Apple with their iOS 5 upgrade, for example), they launch their website or software and provide some information about the features, but don’t ever really offer lots of training on how to use it.  Maybe that is a product of extensive testing and confidence in their product, but I’m not sure that’s the whole story.

In part, at least, I think there is a matter of scale at play.  If you run a physical library and your users visit your library building then you do have day-to-day contact with your users, but even so, you don’t talk to every single person who comes into your library, or provide them with individual guidance.  They might see a helpsheet or leaflet, but they are more likely to use your services by trial and error or by following what other people do.  With a virtual library you actually talk to a tiny fraction of your user community and can only realistically provide training to a handful of users.  So your systems, websites and so on have to work, with a minimum of on-screen guidance.  Have you ever seen a user guide to a cash machine?  No?  Thought not.
