data.path Ryoji.Ikeda – 3 by r2hox
https://flic.kr/p/gdMrKi

One of the pieces of work we’re starting in the team this year is some in-depth work on library data.  In the past we’ve looked at activity data and how it can be used for personalised services (e.g. to build recommendations in the RISE project or, more recently, to support the OpenTree system), but over the last year we’ve been turning our attention to what the data can start to tell us about library use.

There have been a couple of activities we’ve undertaken so far.  We’ve provided some data to an institutional Learning Analytics project on the breakdown of library use of online resources for a dozen or so target modules.  We’ve been able to take data from the EZproxy logfiles and show the breakdown by student ID, by week and by resource over the nine-month life of the different modules.  That has put library data alongside other data, such as use of the Virtual Learning Environment, and allowed module teams to look at how library use might relate to the other data.
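
As a rough illustration of the sort of aggregation involved, here is a minimal Python sketch that counts requests per student, per ISO week and per resource.  It assumes the EZproxy logs have already been parsed into a CSV with student_id, timestamp and resource columns; the field names and file layout are illustrative assumptions rather than the actual feed we use.

```python
# Sketch: aggregate parsed EZproxy usage into counts per student, per ISO week,
# per resource.  Assumes a CSV with student_id, timestamp (ISO 8601) and resource
# columns -- an illustrative layout, not the real log format.
import csv
from collections import Counter
from datetime import datetime

def weekly_usage(path):
    """Count e-resource requests per (student, ISO week, resource)."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            week = ts.isocalendar()[1]  # ISO week number within the year
            counts[(row["student_id"], week, row["resource"])] += 1
    return counts

if __name__ == "__main__":
    for (student, week, resource), n in sorted(weekly_usage("ezproxy_usage.csv").items()):
        print(f"{student}\tweek {week}\t{resource}\t{n} requests")
```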

Pattern of week-by-week library use of e-resources – first level science course

A colleague has also been able to make use of data combining library use and satisfaction survey results for a small number of modules, to shed a little light on whether satisfied students were making more use of the library than unsatisfied ones (obviously not a causal relationship, but the initial indication is that for some modules there does seem to be a pattern).
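
For anyone curious about the shape of that comparison, here is a minimal sketch that groups e-resource usage counts by module and by a satisfied/unsatisfied flag and compares the group means.  The record structure is an assumption for illustration only, and the sketch says nothing about statistical significance.

```python
# Sketch: for each module, compare mean e-resource usage of students who reported
# being satisfied against those who did not.  The (module, student, usage count,
# satisfied flag) record layout is an illustrative assumption.
from collections import defaultdict
from statistics import mean

def usage_by_satisfaction(records):
    """records: iterable of (module, student_id, usage_count, satisfied_bool)."""
    groups = defaultdict(lambda: {True: [], False: []})
    for module, _student, usage, satisfied in records:
        groups[module][satisfied].append(usage)
    return {
        module: {
            "satisfied_mean": mean(g[True]) if g[True] else None,
            "unsatisfied_mean": mean(g[False]) if g[False] else None,
        }
        for module, g in groups.items()
    }

sample = [
    ("MOD101", "s1", 42, True), ("MOD101", "s2", 7, False),
    ("MOD101", "s3", 35, True), ("MOD101", "s4", 12, False),
]
print(usage_by_satisfaction(sample))  # satisfied mean 38.5 vs unsatisfied mean 9.5
```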

Library Analytics roadmap
But these have been really early exploratory steps, so during last year we started to plan out a Library Analytics Roadmap to scope the range of work we need to do.  This covers not just data analysis, but also some infrastructural developments to improve access to data and some effort to build skills in the library.  It is backed up with engagement with our institutional learning analytics projects and some work to articulate a strategy around library analytics.  The idea is that the roadmap activities will help us change how we approach data, so that we have the necessary skills and processes to provide evidence of how library use relates to vital aspects such as student retention and achievement.

Library data project
We’re working on a definition of Library analytics as being about:

Using data about student engagement with library services and content to help institutions and students understand and improve library services to learners

Part of the roadmap activity this year is to start to carry out a more systematic investigation into library data, to match it against student achievement and retention data.  The aim is to build an evidence base of case studies, based on quantitative data and some qualitative work we hope to do.  Ideally we’d like to be able to follow the paths mapped out by the likes of Minnesota, Wollongong and Huddersfield in their various projects and demonstrate that there is a correlation between library use, student success and retention.
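
A very crude first pass at that kind of evidence might look something like the sketch below, which compares completion rates for students who did and did not touch library e-resources on a module.  The record layout, the usage threshold and the simple rate comparison are illustrative assumptions; a real study would need proper statistical treatment of the kind the projects above used.

```python
# Sketch: a first-pass look at whether students who used library e-resources on a
# module completed it at a higher rate than those who did not.  The record layout
# (usage count, completed flag) is assumed purely for illustration.
def retention_by_usage(records, threshold=1):
    """records: iterable of (student_id, usage_count, completed_bool).
    Compare completion rates at/above the usage threshold vs below it."""
    users, non_users = [], []
    for _student, usage, completed in records:
        (users if usage >= threshold else non_users).append(completed)

    def rate(flags):
        return sum(flags) / len(flags) if flags else None

    return {"library_users": rate(users), "non_users": rate(non_users)}

sample = [("s1", 30, True), ("s2", 0, False), ("s3", 5, True), ("s4", 0, True)]
print(retention_by_usage(sample))  # {'library_users': 1.0, 'non_users': 0.5}
```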

Challenges to address
We know that we’re going to need more data analysis skills, and some expertise from a statistician.  We also have some challenges because of the nature of our institution.  We won’t have library management system book loans or details of visits to the library, so we will mainly have to concentrate on use of online resources.  In some ways that simplifies things.

But our model of study also throws up some challenges.  At a traditional campus institution, students study a degree over three or four years.  There is a cohort of students that follows through years 1, 2, 3 and so on, and at the end of that period they take their exams and get their degree classification.  So it is relatively straightforward to see retention as being about students who return in year 2 and year 3, or who don’t drop out during the year, and to see success measured as their final degree classification.  But with part-time distance learning, although students sign up to a qualification, they follow a pattern of modules, many will take longer than six years to complete, and they often take one or more ‘breaks’ in study, so following a cohort across modules might be difficult.  So we might have to concentrate on analysis at the ‘module’ level… but then that raises another question for us.  Our students could be studying more than one module at a time, so how do you easily know whether their library use relates to module A or module B?  Lots of things to think about as we get into the detail.
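
To make that attribution question concrete, here is a sketch of one naive option: splitting each e-resource access evenly across the modules a student is registered on at the time.  The registration data structure and the even-split rule are purely illustrative assumptions; we haven’t settled on an approach.

```python
# Sketch: one naive way to handle concurrent study -- split each e-resource access
# evenly across the modules a student is registered on.  The registration data
# and the even-split rule are assumptions for illustration only.
from collections import defaultdict

def apportion_usage(accesses, registrations):
    """accesses: iterable of (student_id, resource) events.
    registrations: dict mapping student_id -> list of concurrent module codes."""
    usage = defaultdict(float)
    for student, _resource in accesses:
        modules = registrations.get(student, [])
        if not modules:
            continue  # no registration data: attribution impossible
        share = 1.0 / len(modules)
        for module in modules:
            usage[module] += share  # each module gets an equal fraction
    return dict(usage)

registrations = {"s1": ["MOD_A", "MOD_B"], "s2": ["MOD_A"]}
accesses = [("s1", "jstor"), ("s1", "scopus"), ("s2", "jstor")]
print(apportion_usage(accesses, registrations))  # {'MOD_A': 2.0, 'MOD_B': 1.0}
```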
