
For a few months now we've been running a project to look at student needs from library search.  The idea behind the research is that we know students find library search tools difficult compared with Google; we know it's a pain point.  But we don't actually know in much detail what it is about those tools that students find difficult, what features they really want to see in a library search tool, and what they don't want.  So we've set about trying to understand more about their needs.  In this blog post I'm going to run through the approach that we are taking.  (In a later blog post I hope to cover some of the things that we are learning.)

Approach
Our overall approach is to work alongside students (something that we've done before in our personalisation research) in a model that draws a lot of inspiration from co-design. Instead of building something and then usability testing it with students at the end, we want to involve students at a much earlier stage in the process, so that, for example, they can help to draw up the functional specification.

We’re fortunate in having a pool of 350 or so students who agreed to work with us for a few months on a student panel.  That means that we can invite students from the panel to take part in research or give us feedback on a small number of different activities.  Students don’t have to take part in a particular activity but being part of the panel means that they are generally pre-disposed to working with us.  So we’re getting a really good take-up of our invitations – I think that so far we had more than 30 students involved at various stages, so it gives us a good breadth of opinions from students studying  different subjects, at different study levels and with different skills and knowledge.

We’ve split the research into three different stagesDiscovery research stages: an initial stage that looked at different search scenarios and different tools; a second stage that drew out of the first phase some general features and tried them on students, then a third phase that creates a new search tool and then undertakes an iterative cycle of develop, test, develop, test and so on.  The diagram shows the sequence of the process.

The overall aim of the project is to give us a better idea of student needs, to inform the decisions we make about Discovery, about the search tools we might build, and about how we might set up the tools we use.

As with any research activity involving students, we worked with our student ethics panel to design the testing sessions and get approval for the research to take place.

Phase One
We identified six typical scenarios: finding an article from a reference, finding a newspaper article from a reference, searching for information on a particular subject, searching for articles on a particular topic, finding an ebook from a reference, and finding the Oxford English Dictionary.  All the scenarios were drawn from activities that we ask students to do, so they used the actual subjects and references that students are asked to find.  We identified eight different search tools to use in the testing: our existing One stop search, the mobile search interface that we created during the MACON project, a beta search tool that we have on our library website, four different search tools from other universities, and Google Scholar.  The tools offered a mix of tabbed search, radio buttons and bento-box-style search results, chosen to introduce students to different approaches to search.

Because we are a distance learning institution, students aren't on campus, so we set up a series of online interviews.  We were fortunate to be able to make use of the usability labs at our Institute of Educational Technology, and we used TeamViewer software for the online interviews.  In total we ran 18 separate sessions, with each one testing three scenarios in three different tools.  This gave us a good range of different students testing different scenarios on each of the tools.
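To give a feel for how scenarios and tools can be spread across sessions like this, here's a minimal sketch in Python.  It's purely illustrative: the scenario and tool names are paraphrased from the post, and the simple round-robin rotation is an assumption, not the allocation we actually used.

    # Illustrative only: round-robin allocation of 6 scenarios and 8 tools
    # across 18 sessions, each session testing 3 scenarios in 3 tools.
    from itertools import cycle, islice

    scenarios = ["article from a reference", "newspaper article from a reference",
                 "information on a subject", "articles on a topic",
                 "ebook from a reference", "find the Oxford English Dictionary"]
    tools = ["One stop search", "MACON mobile search", "library beta search",
             "university tool A", "university tool B", "university tool C",
             "university tool D", "Google Scholar"]

    scenario_cycle, tool_cycle = cycle(scenarios), cycle(tools)
    for session in range(1, 19):
        session_scenarios = list(islice(scenario_cycle, 3))
        session_tools = list(islice(tool_cycle, 3))
        print(f"Session {session}: {session_scenarios} using {session_tools}")

Rotating both lists means each tool ends up being exercised against a spread of scenarios rather than the same three every time.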

Sessions were recorded and notes were taken, so we were able to pick up on specific comments and feedback.  We also measured success rates and the time taken to complete each task, and recorded which features students used.  The research allowed us to see which tools students found easiest to use, which features they liked and used, and which tools didn't work for certain scenarios.
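As a concrete illustration of the kind of summary this gives you, the sketch below tallies success rate and median completion time per tool.  The observation records are invented for the example, not real results from our sessions.

    # Hypothetical records, not real results: (tool, scenario, completed?, seconds)
    from collections import defaultdict
    from statistics import median

    observations = [
        ("One stop search", "article from a reference", True, 95),
        ("One stop search", "find the Oxford English Dictionary", False, 240),
        ("Google Scholar", "article from a reference", True, 60),
        ("Google Scholar", "ebook from a reference", True, 150),
    ]

    by_tool = defaultdict(list)
    for tool, _scenario, completed, seconds in observations:
        by_tool[tool].append((completed, seconds))

    for tool, results in by_tool.items():
        success_rate = sum(completed for completed, _ in results) / len(results)
        median_time = median(seconds for _, seconds in results)
        print(f"{tool}: {success_rate:.0%} success, median {median_time}s per task")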

Phase two
For the second phase we chose to concentrate on testing very specific elements of the search experience.  So, for example, we looked at radio buttons and drop-down lists, and whether they should offer Author/Title/Keyword or Article/Journal title/Library catalogue options.  We also looked at the layout of results screens and the display of facets, asking students, for example, how they wanted to see date facets presented.

We wanted to carry out this research with some very plain wireframes, to test individual features without the distraction of website designs confusing the picture.  We use a wireframing tool called Balsamiq to create our wireframes rapidly, and we ran through another sequence of testing, this time with a total of nine students in a series of online interviews, again using TeamViewer.

By using wireframing you can quickly create several versions of a search box or results page and put them in front of users.  It's a good way to narrow down the features worth taking through to full-scale prototyping.  It's much quicker than coding the feature, and once you've identified the features you want your developer to build, you have a ready-made wireframe to act as a guide for the layout and features that need to be created.

Phase three
The last phase is our prototype-building phase: taking all the research and distilling it into a set of functional requirements for our project developer to build.  In some of our projects we've shared the specification with students so they can agree which features they want to see, but with this project we had a good idea from the first two phases of what features they wanted in a baseline search tool, so we skipped that stage.  We did, however, split the functional requirements into two parts: a baseline set of requirements for the search box and the results, and then a section to capture the iterative requirements that arise during the prototyping stage.  We aimed for a rolling cycle of build and test, although in practice we've set up sessions for when students are available and then gone with the latest version each time, getting students to test and refine the features and identify new features to build and test.  New features get identified and added to what is essentially a product backlog (in Scrum terminology).  A weekly team meeting prioritises the tasks for the developer to work on, and we go through a rolling cycle of develop and test.
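The backlog itself needn't be anything elaborate.  As a sketch of the idea (an assumed structure, not the tracker we actually use), a simple priority queue captures the weekly re-prioritisation neatly:

    # Minimal sketch of a prioritised product backlog; the feature names are
    # invented examples, and priority 1 means most urgent.
    import heapq

    backlog = []

    def add_feature(priority, feature):
        heapq.heappush(backlog, (priority, feature))

    add_feature(2, "author/title/keyword drop-down")
    add_feature(1, "date facet on the results page")
    add_feature(3, "spelling suggestions")

    while backlog:
        priority, feature = heapq.heappop(backlog)
        print(f"Next to build and test (priority {priority}): {feature}")

Each testing session can push new items into the queue, and the weekly meeting simply adjusts priorities before the developer picks up the next item.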

Reflections on the process
The process seems to have worked quite well.  We've had really good engagement from students and really good feedback that is helping us to tease out which features we need in any library search tool.  We're about halfway through phase three and are aiming to finish the research by the end of July.  The next step is to put the search tool up as a beta on the library website so that a wider group of users can trial it.

We’ve used a couple of different usability tools at various stages through the library website project.  We’re fortunate in having access to a high level of expertise and advice/guidance from colleagues at the University’s Institute of Educational Technology .  This means that we have access to some advanced usability tools such as eye-tracking software.

We’ve used two different tools.  Firstly, Morae usability software, which we have on a laptop, and is used to track and record mouse movements and audio commentaries.  This is quite portable and allows us to do some small-scale testing.  We most recently used it for some of the search evaluation work.  Its limitation is that although it tracks what people do with the mouse it may be very different to where they are looking on screen.  

At a workshop I attended the other day, people talked about users scanning web pages in an 'F' pattern: two horizontal sweeps across the page followed by one vertical scan down the left-hand side.  This implies that users will pick up items in the left-hand column and across the top quite easily.  This was reported by Jakob Nielsen back in 2006, with some sample heatmap screenshots.

For the latest testing, we've been able to use the Tobii eye-tracking system in the IET usability labs, which, as the name suggests, tracks the user's eye movements around the screen and gives a much richer indication of how they are interacting with the website.  You can show where users are looking as a heatmap, as in Jakob Nielsen's examples, or alternatively you can show gaze opacity.  This shows where users are looking in white: the more opaque the white, the more time their gaze is concentrated on that location, so places that aren't being viewed show up as black.

[Image: website gaze opacity view]

The example shown is from the latest library website testing, and you can quite clearly see the same sort of 'F'-shaped scanning behaviour on one of the sub-pages of the website.  Looking through some of the other pages, it isn't always quite as clear cut.
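For anyone curious about what gaze opacity means computationally, here is a hedged sketch of the general idea (not Tobii's actual algorithm): accumulate fixation durations into a grid, normalise, and use the result as the brightness of a white overlay so that unviewed regions stay black.  The fixation data here is invented.

    # Sketch of building a gaze-opacity mask; fixations are hypothetical (x, y, ms).
    import numpy as np

    height, width = 600, 800
    opacity = np.zeros((height, width))
    fixations = [(120, 80, 400), (400, 90, 300), (130, 300, 500)]

    yy, xx = np.mgrid[0:height, 0:width]
    for x, y, duration in fixations:
        # spread each fixation over a Gaussian "foveal" footprint (~40 px)
        opacity += duration * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * 40 ** 2))

    opacity /= opacity.max()  # 1 = heavily viewed (white), 0 = unviewed (black)
    # Multiplying a screenshot by opacity[..., None] would leave unviewed
    # areas black, which is the effect described above.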

Keren, my colleague on the project team who has been running the usability testing stage, is currently going through the images and the transcripts and notes, and will pull out recommendations to modify the website to address any areas that users found difficult to use.  These recommendations then get reviewed by our Website Editorial Group and prioritised: either high priority, needing to be fixed before the site can go live, or of lesser priority and resolvable over a longer time period.

The value of this sort of testing is quite high, as it isn't really until users actually engage with your website and try to use it in practice that you find out how well it works.  It is time-consuming: there's organising to do to find people to take part in the testing (in our case, going through a research panel to get approval and then emailing students), time to write and fine-tune the scripts that will be used for the testing, and then time to carry out the testing and analyse the outputs.  But that time is well spent if you want to understand how easy users will find your site to use.
