
We’re in the early stages of our work with library data and I thought I’d write up some reflections on what we’ve learnt so far. To date we’ve mostly confined ourselves to trying to understand the library data we have and suitable methods to access and manipulate it. We’re interested in aggregations of data, e.g. by week, by month, by resource, or in comparison with total student numbers.

Ezproxy data
One of our main sources of data is ezproxy, which we use for both on- and off-campus access to online library resources. Around 85-90% of our authenticated resource access goes through this system. One of the first things we learnt when we started investigating this data source is that there are two levels of logfile: the full log of all resource requests, and the SPU (Starting Point URL) logfile. The latter records only the first request to a domain in a session.
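As an aside, it may help to show what pulling fields out of an SPU entry can look like. This is only a sketch: EZproxy log layouts depend on your LogFormat/LogSPU directives, so the Apache-style field order, the sample user and the URL below are assumptions rather than our actual configuration.

```python
import re
from datetime import datetime
from urllib.parse import urlparse

# Assumed Apache-style layout: host, ident, user, [timestamp], "request", status, size.
# Adjust the pattern to match your own LogSPU/LogFormat directive.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+)[^"]*" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

def parse_spu_line(line):
    """Extract the user, timestamp and resource domain from one SPU log entry."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None  # line didn't match the assumed layout
    return {
        "user": m.group("user"),
        "timestamp": datetime.strptime(m.group("time"), "%d/%b/%Y:%H:%M:%S %z"),
        "domain": urlparse(m.group("url")).netloc,  # e.g. search.proquest.com
    }

# Hypothetical sample line
sample = ('137.108.0.1 - jsmith1 [14/Dec/2015:10:30:45 +0000] '
          '"GET http://search.proquest.com/docview/12345 HTTP/1.1" 200 4321')
print(parse_spu_line(sample))
```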

We looked at approaches that others had taken to help shape how we analysed the data. Wollongong, for example, decided to analyse the timestamps as follows:

  • The day is divided into 144 10-minute sessions
  • If a student has an entry in the log during a 10-minute period, then 1/6 is added to the sum of that student’s access for that session (or week, in the case of the Marketing Cube).
  • Any further log entries during that student’s 10-minute period are not counted.

Using this logic, UWL measures how long students spent using its electronic resources with a reasonable degree of accuracy due to small time periods (10 minutes) being measured.

Cox, B. and Jantti, M. (2012), ‘Discovering the Impact of Library Use and Student Performance’, EDUCAUSE Review, http://er.educause.edu/articles/2012/7/discovering-the-impact-of-library-use-and-student-performance
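As a rough sketch (not their code, just the rule as described): assuming the log entries have already been parsed into (user, timestamp) pairs, the 10-minute bucketing works out like this, with repeat hits inside the same slot ignored.

```python
from collections import defaultdict

def usage_hours(events):
    """Estimate hours of e-resource use per user: each distinct 10-minute
    slot containing any activity for that user counts as 1/6 of an hour."""
    slots = defaultdict(set)  # user -> set of 10-minute slot identifiers
    for user, ts in events:
        slot = (ts.date(), ts.hour, ts.minute // 10)  # 144 slots per day
        slots[user].add(slot)
    return {user: len(s) / 6.0 for user, s in slots.items()}
```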

To adopt this approach we’d need to work from the full log files to pick up each of the 10-minute sessions. Unfortunately, the size of our version of the full logs meant this wasn’t going to be feasible; we’d have to use the SPU version and take a different approach.

Athens data
A small proportion of our resource authentication goes through OpenAthens. Each month we get a logfile of resource accesses that have been authenticated via this route. Unlike the ezproxy data there is no date/timestamp; all we know is that those resources were accessed at some point during the month. Against each resource/user combination you get a count of the number of times that combination occurred during the month.

Looking into the data, one of the interesting things we’ve identified is that OpenAthens authentication is also used for resources other than library content. For example, we use it for library tools such as RefWorks and Library Search, but it’s straightforward to take those out if they aren’t wanted in your analysis.

So one of the things we’ve been looking at is how easy it is to add the Athens and ezproxy data together. There are similarities between the datasets but some processing is needed to join them up: the ezproxy data has to be aggregated to a monthly level, and the few resources that we can access via both routes need their resource names normalised.
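As an illustration of the processing involved, here’s a sketch in pandas. The column names and the name-mapping table are hypothetical; the point is just to collapse per-access ezproxy rows down to the monthly user/resource counts that the OpenAthens report uses, normalising resource names on the way.

```python
import pandas as pd

# Hypothetical mapping from the names each route uses to one canonical name
NAME_MAP = {"proquest.com": "ProQuest", "ProQuest (Athens)": "ProQuest"}

def monthly_ezproxy_counts(spu_df):
    """Collapse per-access SPU rows (columns: user, resource, timestamp)
    to one row per user/resource/month, matching the OpenAthens shape."""
    df = spu_df.copy()
    df["month"] = df["timestamp"].dt.to_period("M")   # timestamp must be datetime64
    df["resource"] = df["resource"].replace(NAME_MAP)
    return (df.groupby(["user", "resource", "month"])
              .size()
              .reset_index(name="count"))
```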

The biggest difference between the two datasets is that whereas the ezproxy dataset has a logfile entry for each SPU access, the OpenAthens data gives a monthly total for each user/resource combination. One approach we’ve tried is simply to duplicate the rows: where the count says a resource/user combination appeared twice in the month, we copy the line. That makes the two sets of data comparable so they can be analysed together; for example, a headcount of users who’ve accessed one or more library resources in a month can then include data from both ezproxy- and OpenAthens-authenticated resources.
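That row-duplication step is simple to sketch, again in pandas with hypothetical column names: the OpenAthens count column is expanded into repeated rows, after which the two sources can just be stacked.

```python
import pandas as pd

def combine_usage(athens_df, ezproxy_df):
    """Expand OpenAthens monthly counts into one row per access, then
    stack them with the (already per-access) ezproxy rows."""
    repeated = athens_df.loc[athens_df.index.repeat(athens_df["count"])]
    athens_rows = repeated.drop(columns="count").assign(source="openathens")
    ezproxy_rows = ezproxy_df.assign(source="ezproxy")
    return pd.concat([athens_rows, ezproxy_rows], ignore_index=True)
```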

Numbers and counts
One thing we’ve found is that users of the data want several different counts of users and data from the library e-resources usage data.  The sorts of questions we’ve had to think about so far include:

  • What percentage of students have accessed a library resource in 2014-15? – (count of students who’ve accessed 1 or more library resources)
  • What percentage of students have accessed library resources for modules starting in 2014? – a different question to the first one as students can be studying more than one module at a time
  • How much use of library resources is made by the different Faculties?
  • How many resources have students accessed – what’s the average per student, per module, per level?

Those have raised a few interesting questions, including which student number you take when calculating means: the number enrolled at the start of the year, at the end, or part-way through?
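To make the denominator question concrete, here’s a small sketch: `usage_df` is assumed to hold one row per access (the combined dataset above) and `enrolled` a list of student identifiers. Swapping a start-of-year list for an end-of-year one changes both the percentage and the mean.

```python
def usage_summary(usage_df, enrolled):
    """Headcount, percentage and mean accesses for a chosen student denominator."""
    active = usage_df["user"].nunique()  # students with at least one access
    return {
        "active_students": active,
        "pct_active": 100.0 * active / len(enrolled),
        "mean_accesses_per_student": len(usage_df) / len(enrolled),
    }
```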

Next steps
In the New Year we have more investigation and more data to tackle, and should be able to start joining library data with data that lets us explore correlations between library use, retention and student success.

I’m not sure how many people will be familar with the work of Oliver Postgate, and specifically of his stop-motion animation series, The Clangers.  One of the characters in the series is Major Clanger, and he’s an inventor.

Image: Kieran Lamb, https://flic.kr/p/dqthAU

The character always comes to mind when I’m thinking about development approaches, as an example of a typical approach to user engagement. So the scene opens with a problem presenting itself. Major Clanger sees the problem and thinks he has an idea to solve it, so he disappears off and shuts himself away in his cave. Cue lots of banging and noises as Major Clanger is busy inventing a device to solve ‘the problem’. Then comes the great unveiling of the invention, often accompanied by some bemusement from the other Clangers about what the new invention actually is, how it works and what it is supposed to do. Often the invention turns out not to be quite what was wanted, or has unforeseen consequences. And that approach seems to me to characterise how we often ‘do’ development. We see a problem, we may ask users in a focus group or workshop to define their requirements, but then all too often we go and, like Major Clanger, build the product in complete isolation and then unveil it to users in what we describe as usability testing. And all too often they say ‘yeah, that’s not quite what we had in mind’ or ‘well, that would have been good when we were doing X, but now we want something else’.

So how do we break that circle and solve our users’ problems with a development style that builds products users can and will use? That’s where I think a more co-operative model of user engagement comes in: one where users are involved throughout the requirements, development and testing stages. It’s an approach we’ve started to call ‘co-design’, and we piloted it during our discovery research.

It starts with a Student Panel: students who agree to work with us on activities to improve library services. We recruit cohorts of a few dozen students with a commitment to carry out several activities with us during a defined period. We outline the activity we are going to undertake and the approach we will take, and make sure we have the necessary research/ethics approvals for the work.

For the discovery research we went through three stages:

  1. Requirements gathering – in this case, testing a range of library search tools with a series of exercises based on typical user search activities. This helped to identify the features users wanted to see, or did not want to see. For example, at this stage we were able to rule out the ‘bento box’ results approach that has been popular at some other libraries.
  2. Feature definition – a stage that allows you to investigate specific features in detail. In our case we used wireframes of search box options and layouts and tested them with a number of Student Panel members – ruling out tabbed search approaches and directing us towards a very simple search box without tabs or drop-downs. This stage lets you test a range of different features without the expense of code development, essentially letting you refine your requirements in more detail.
  3. Development cycles – this step took the form of a sequence of build-and-test cycles: creating a search interface from scratch using the requirements identified in stages one and two, then refining it, testing specific new features and discarding or retaining them depending on user reactions. This involved working with a developer to build the site and then work through a series of development and test ‘sprints’, testing features identified either in the early research or arising from each of the cycles.

These steps took us to a viable search interface and built up a pool of evidence that we used to set up and customise Primo Library Search. That work led to further engagement with users as we went through a fourth stage of usability testing the interface and making further tweaks and adjustments in the light of user reactions. Importantly, it’s an ongoing process, with a regular cycle of user testing to continually improve the search tool. The latest testing is mainly around changes to introduce new corporate branding, but includes other updates that can be made to the setup or CSS of the site in advance of the new branding being applied.

The ‘co-design’ model also fits with a more evolutionary or incremental approach to website development, and is a model that usability experts such as Nielsen Norman Group often recommend, as users generally want a familiar design rather than a radical redesign. Continuous improvement systems likewise typically favour incremental change. Yet the ‘co-design’ model could equally be deployed for a complete site redesign: starting from scratch with a more radical design and structural changes, then using the incremental approach to refine them into a design that meets user needs and overcomes the likely resistance from users familiar with the old site, by delivering an improved user experience that they can quickly get comfortable with.
