
Analytics seems to be a major theme at a lot of conferences at the moment.  I've been following a couple of library sector conferences this week on Twitter (Talis Insight http://go.talis.com/talis-insight-europe-2016-live #talisinsight and the 17th Distance Library Services Conference http://libguides.cmich.edu/dls2016 #dls16) and analytics has come up again and again at both.

A colleague at the DLS conference tweeted a picture about the impact of a particular piece of practice, and that set us off thinking: did we have that data? Did we have examples of where we'd done something similar?  The good thing now is that rather than thinking 'it would be good if we could do something like that', we've a bit more confidence – if we get the examples and the data, we know we can do the analyses, but we also know we 'should' be doing the analyses as a matter of course.

It was also good to see that other colleagues (@DrBartRienties) at the university were presenting some of the University's learning analytics work at Talis Insight. Being at a university that is undertaking a lot of academic work on learning analytics is really helpful when you're trying to look at library analytics, and it also provides a valuable source of advice and guidance in some of our explorations.

[As an aside, and having spent much of my library career in public libraries, I’m not sure how much academic librarians realise the value of being able to talk to academics in universities, to hear their talks, discuss their research or get their advice.  In a lot of cases you’re able to talk with world-class researchers doing ground-breaking work and shaping the world around us.]

 


We're in the early stages of our work with library data and I thought I'd write up some reflections on those early stages.  So far we've mostly confined ourselves to trying to understand the library data we have and suitable methods to access and manipulate it.  We're interested in aggregations of data, e.g. by week, by month, by resource, in comparison with total student numbers etc.

Ezproxy data
One of our main sources of data is ezproxy, which we use for both on- and off-campus access to online library resources.  Around 85-90% of our authenticated resource access goes through this system.  One of the first things we learnt when we started investigating this data source is that there are two levels of logfile – the full log of all resource requests and the SPU (Starting Point URL) logfile.  The latter records only the first request to a domain in a session.
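As a rough illustration of getting that SPU data into a usable shape – and bearing in mind that the exact field layout depends on how EZproxy logging is configured locally, so the format assumed below is mine rather than a standard one – a log line might be parsed something like this:

```python
import re
from datetime import datetime

# Assumed field layout: IP, username, session, [timestamp] "GET url ..." – adjust the
# pattern to whatever your local EZproxy LogFormat/LogSPU directive actually produces.
LOG_PATTERN = re.compile(
    r'^(?P<ip>\S+) (?P<user>\S+) (?P<session>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?:GET|POST) (?P<url>\S+)'
)

def parse_spu_line(line):
    """Return (user, timestamp, domain) for one SPU log line, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None
    ts = datetime.strptime(m.group("ts").split()[0], "%d/%b/%Y:%H:%M:%S")
    url = m.group("url")
    domain = url.split("/")[2] if "://" in url else url
    return m.group("user"), ts, domain

example = '137.108.1.1 user123 abc123 [22/Apr/2016:15:35:01 +0100] "GET http://www.jstor.org/stable/123 HTTP/1.1" 200 5123'
print(parse_spu_line(example))
```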

We looked at approaches that others had taken to help shape how we approached analysing the data.  Wollongong, for example, decided to analyse the time stamps as follows:

  • The day  is divided into 144 10-minute sessions
  • If a student has an entry in the log during a 10-minute period, then 1/6 is added to the sum of that student’s access for that session (or week, in the case of the Marketing Cube).
  • Any further log entries during that student’s 10-minute period are not counted.

Using this logic, UWL measures how long students spent using its electronic resources with a reasonable degree of accuracy due to small time periods (10 minutes) being measured.

Discovering the Impact of Library Use and Student Performance, Cox and Jantti 2012 http://er.educause.edu/articles/2012/7/discovering-the-impact-of-library-use-and-student-performance
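That scoring logic is straightforward to express in code. Here's a minimal sketch, assuming you already have (student, timestamp) pairs parsed out of the full request log (the function and variable names are mine, not Wollongong's):

```python
from collections import defaultdict
from datetime import datetime

def estimate_hours(events):
    """events: iterable of (student_id, timestamp) entries from the full request log.
    Each distinct 10-minute bucket a student appears in counts as 1/6 of an hour;
    repeat entries within the same bucket are ignored."""
    buckets_seen = defaultdict(set)
    for student, ts in events:
        # 144 ten-minute buckets per day: (date, hour, tens-of-minutes)
        buckets_seen[student].add((ts.date(), ts.hour, ts.minute // 10))
    return {student: len(buckets) / 6 for student, buckets in buckets_seen.items()}

# Two requests in the same 10-minute period count once; one in a later period adds 1/6.
sample = [
    ("s1", datetime(2015, 11, 2, 10, 3)),
    ("s1", datetime(2015, 11, 2, 10, 8)),
    ("s1", datetime(2015, 11, 2, 10, 21)),
]
print(estimate_hours(sample))  # {'s1': 0.333...} i.e. two buckets = 20 minutes
```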

Adopting this approach would mean looking at the full log files to pick up each of the 10-minute sessions.  Unfortunately, owing to the size of our version of the full logs, that wasn't going to be feasible; we'd have to work from the SPU version and take a different approach.

Athens data
A small proportion of our resource authentication goes through OpenAthens.  Each month we get a logfile of resource accesses that have been authenticated via this route.  Unlike the ezproxy data there is no date/timestamp; all we know is that those resources were accessed at some point during the month.  Against each resource/user combination you get a count of the number of times that combination occurred during the month.

Looking into the data, one of the interesting things we've identified is that OpenAthens authentication also gets used for things other than library resources – for example we're using it for some library tools such as RefWorks and Library Search – but it's straightforward to take those out if they aren't wanted in your analysis.

So one of the things we've been looking at is how easy it is to add the Athens and ezproxy data together.  There are similarities between the datasets, but some processing is needed to join them up.  The ezproxy data can be aggregated at a monthly level, and there are a few resources that we have access to via both routes, so those resource names need to be normalised.

The biggest difference between the two datasets is that whereas you get a logfile entry for each SPU access in the ezproxy dataset, you get a total per month for each user/resource combination in the OpenAthens data.  One approach we've tried is simply to duplicate the rows, so where the count says a resource/user combination appeared twice in the month, the line is copied.  In that way the two sets of data are comparable and can be analysed together; so if you wanted to do a headcount of users who've accessed 1 or more library resources in a month, you could include data from both ezproxy- and OpenAthens-authenticated resources.
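A rough pandas sketch of that row-duplication step, with made-up frames and column names standing in for our real data (the ezproxy side is shown as one row per SPU access with a month column):

```python
import pandas as pd

# ezproxy SPU data: one row per starting-point access (hypothetical column names).
ezproxy = pd.DataFrame({
    "user": ["u1", "u1", "u2"],
    "resource": ["JSTOR", "Scopus", "JSTOR"],
    "month": ["2015-11", "2015-11", "2015-11"],
})

# OpenAthens data: one row per user/resource combination with a monthly count.
athens = pd.DataFrame({
    "user": ["u2", "u3"],
    "resource": ["RefWorks", "JSTOR"],
    "month": ["2015-11", "2015-11"],
    "count": [2, 3],
})

# Drop non-library tools such as RefWorks if they aren't wanted in the analysis.
athens = athens[~athens["resource"].isin(["RefWorks", "Library Search"])]

# Repeat each OpenAthens row 'count' times so it matches the one-row-per-access shape.
athens_expanded = athens.loc[athens.index.repeat(athens["count"])].drop(columns="count")

# Normalise resource names here if needed, then combine the two sources.
combined = pd.concat([ezproxy, athens_expanded], ignore_index=True)
print(combined.groupby("user")["resource"].count())
```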

Numbers and counts
One thing we’ve found is that users of the data want several different counts of users and data from the library e-resources usage data.  The sorts of questions we’ve had to think about so far include:

  • What percentage of students have accessed a library resource in 2014-15? – (count of students who’ve accessed 1 or more library resources)
  • What percentage of students have accessed library resources for modules starting in 2014? – a different question to the first one as students can be studying more than one module at a time
  • How much use of library resources is made by the different Faculties?
  • How many resources have students accessed – what's the average per student, per module, per level?

Those have raised a few interesting questions, including which student number you take when calculating means – the number at the start of the year, at the end, or part-way through?
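For the first of those questions, a minimal sketch (hypothetical frames and column names, and taking a single student-number snapshot as the denominator – which is exactly the choice the question about means raises):

```python
import pandas as pd

# usage: one row per authenticated resource access in 2014-15 (hypothetical columns).
usage = pd.DataFrame({"student_id": ["s1", "s1", "s2", "s4"],
                      "resource": ["JSTOR", "Scopus", "JSTOR", "Web of Science"]})

# students: the registered-student snapshot chosen as the denominator.
students = pd.DataFrame({"student_id": ["s1", "s2", "s3", "s4", "s5"]})

active = usage["student_id"].nunique()                 # students with 1+ resource accesses
pct = 100 * active / students["student_id"].nunique()  # headcount percentage
print(f"{pct:.1f}% of students accessed at least one library resource")  # 60.0%
```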

Next steps
In the New Year we’ve more investigation and more data to tackle and should be able to start to join library data up with data that lets us explore correlations between library use, retention and student success.

 

 

We’ve been running Primo as our new Library Search discovery system since the end of April so it’s been ‘live’ for just over four months.  Although it’s been a quieter time of year over the summer I thought it would be interesting to start to see what the analytics are saying about how Library Search is being used.

Internal click-throughs
Some analytics are provided by the supplier in the form of click-through statistics and there are some interesting figures that come out of those.  The majority of searches, some 85%, are 'Basic' searches.  Only about 11% of searches use Advanced search.  Advanced search isn't offered against the Library Search box embedded in the home page of the library website, but is offered next to the search box on the results page and on any subsequent search.  That's slightly less use than I might have expected, as advanced search seemed to be fairly frequently mentioned as being used regularly on our previous search tool.

About 17% of searches lead to users refining their search using the facets.  Refining the search using facets is something we are encouraging users to do, so that’s a figure we might want to see going up.  Interestingly only 13% navigated to the next page in a set of search results using the forward arrow, suggesting that users overwhelmingly expect to see what they want on the first page of results. (I’ve a slight suspicion about this figure as the interface presents links to pages 2-5 as well as the arrow – which goes to pages 6 onwards –  and I wonder if pages 2-5 are taken into account in the click-through figure).

Very few searches (0.5%) led users to use the bX recommendations, despite these being in a prominent place on the page.  The 'Did you mean' prompt also seemed to have been used in 1% of searches.  The bookshelf feature 'add to e-shelf' is used in about 2% of searches.

Web analytics
Looking at web analytics shows that Chrome is the most popular browser, followed by Internet Explorer, Safari, and Firefox.

75% of traffic comes from Windows computers with 15% from Macintoshes.  Tablet traffic is similar to what we see on our main library website, running at about 6.6%, but mobile traffic is a bit lower at just under 4%.

Overall impressions
The devices using Library Search seem pretty much in line with traffic to our other library websites.  There's less mobile phone use, but possibly that is because Primo isn't particularly well-optimised for mobile devices; it's also something to test with users – are they all that interested in searching library discovery systems through mobile phones?

I’m not so surprised that basic search is used much more than advanced search.  It matches the expectations from the student research of a ‘google-like’ simple search box.  The data seems to suggest that users expect to find results that are relevant on page one and not go much further, something again to test with users ‘Are they getting what they want’.  Perhaps I’m not too surprised that the ‘recommender’ suggestions are not being used but it implies that having them at the top of the page might be taking up important space that could be used for something more useful to users.  Some interesting pointers about things to follow up in research and testing with users.

 

At the end of November I was at a different sort of conference to the ones I normally get to attend.  This one, Design4learning, was held at the OU in Milton Keynes, but was a more general education conference.  Described as aiming "to advance the understanding and application of blended learning, design4learning and learning analytics", it covered topics such as MOOCs, e-learning, learning design and learning analytics.

There was a useful series of presentations at the conference and several of them are available from the conference website.  We'd put together a poster for the conference about the work we've started to do in the library on 'library analytics' – entitled 'Learning Analytics – exploring the value of library data' – and it was good to talk to a few non-library people about the wealth of data that libraries capture and how that can contribute to the institutional picture of learning analytics.

Our poster covered some of the exploration that we’ve been doing, mainly with online resource usage from our EZProxy logfiles.  In some cases we’ve been able to join that data with demographic and other data from surveys to start to look in a very small way at patterns of online library use.

Design4learning conference poster v3

The poster also highlighted the range of data that libraries capture and the sorts of questions that could be asked and potentially answered.  It also flagged up the leading-edge work by projects such as Huddersfield's Library Impact Data Project and the Jisc LAMP project.

An interesting conference and an opportunity to talk with a different group of people about the potential of library data.

Photograph of office buildings at Holborn Circus

Holborn Circus – I was struck by the different angles of the buildings

Themes

For me, two big themes came to mind after this year's Future of Technology in Education Conference (FOTE): firstly, creativity, innovation and co-creation; and secondly, how fundamental data and analytics are becoming.

Creativity, innovation and co-creation

Several of the speakers talked about innovation and creativity.  Dave Coplin talked of the value of Minecraft and Project Spark and the need to create space for creativity, while Bethany Koby showed us examples of some of the maker kits ‘Technology Will Save Us’ are creating.

Others talked of 'flipping the classroom' and learning from students, as well as co-creation, and it was interesting that in the tech start-up pitchfest a lot of the ideas were student-created tools, some working in the area of collaborative learning.

Data and analytics

The second big trend for me was analytics and data.  I was particularly interested to see how many of the tools and apps being pitched at the conference had an underlying layer of analytics.  Evaloop, working in the area of student feedback; Knodium, a space for student collaboration; Reframed.tv, offering interaction and sharing tools for video content; Unitu, an issues-tracking tool; and MyCQs, a learning tool – all seemed to make extensive use of data and analytics, while Fluency included teaching analytics skills.  It is interesting to see how many app developers have learnt the lesson of Amazon and Google about the value of the underlying data.

Final thoughts and what didn’t come up at the conference

I didn’t hear the acronymn MOOC at all – slightly surprising as it was certainly a big theme of last year’s conference.  Has the MOOC bubble passed? or is it just embedded into the mainstream of education?  Similarly Learning Analytics (as a specific theme).  Certainly analytics and data was mentioned (as I’ve noted above) but of Learning Analytics – not a mention, maybe it’s embedded into HE practice now?

Final thoughts on FOTE.  A different focus to previous years but still with some really good sessions and the usual parallel social media back-channels full of interesting conversations. Given that most people arrived with at least one mobile device, power sockets to recharge them were in rather short supply.

To Birmingham today for the second meeting of the Jisc LAMP (library analytics and metrics project) community advisory and planning group. This is a short Jisc-managed project that is working to build a prototype dashboard tool that should allow benchmarking and statistical significance tests on a range of library analytics data.

The LAMP project blog at http://jisclamp.mimas.ac.uk is a good place to start to get up to speed with the work that LAMP is doing and I’m sure that there will be an update on the blog soon to cover some of the things that we discussed during the day.

One of the things that I always find useful about these types of activity, beyond the specific discussions and knowledge sharing about the project and the opportunity to talk to other people working in the sector, is that there is invariably some tool or technique that gets used in the project or programme meetings that you can take away and use more widely. I think I’ve blogged before about the Harvard Elevator pitch from a previous Jisc programme meeting.

This time we were taken through an approach of carrying out a review of the project a couple of years hence, where you had to imagine that the project had failed totally. It hadn’t delivered anything that was useful, so no product, tool or learning came out of the project. It was a complete failure.

We were then asked to try to think about reasons why the project had failed to deliver. So we spent half an hour or so individually writing reasons onto post-it notes. At the end of that time we went round the room reading out the ideas and matching them with similar post-it notes, with Ben and Andy sticking them to a wall and arranging them in groups based on similarity.

It quickly shifted away from going round formally to more of a collective sharing of ideas, but that was good, and the technique seemed pretty effective at capturing challenges.  So we had challenges grouped around technology and data, political and community aspects, and legal aspects, for example.

We then spent a bit of time reviewing and recategorising the post-it notes into categories that people were reasonably happy with. Then came the challenge of going through each of the groups of ideas and working out what, if anything, the project could or should do to minimise the risk of that outcome happening. That was a really interesting exercise, identifying actions the project could take, such as engagement work to encourage more take-up.

A really interesting demonstration of quite a powerful technique that’s going to be pretty useful for many project settings. It seemed to be a really good way of trying to think about potential hurdles for a project and went beyond what you might normally try to do when thinking about risks, issues and engagement.

It’s interesting to me how so many of the good project management techniques work on the basis of working backwards. Whether that is about writing tasks for a One Page Project Plan based on describing the task as if has been completed, e.g. Site launch completed, or whether it is about working backwards from an end state to plan out the steps and the timescale you will have to go through. These both envisage what a successful project looks like, while the pre-mortem thinks about what might go wrong. Useful technique.

Encouraged by some thinking about what sort of prototype resource-usage tools we want to build to test with users in a forthcoming 'New tools' section, I've been starting to think about what sort of features you could offer library users to let them take advantage of library data.

Early steps
For a few months we've been offering users of our mobile search interface (which just does a search of our EBSCO discovery system) a list of their recently viewed items and their recent searches. The idea behind testing it on a mobile device was that giving people a link to their recent searches or viewed items would make it easier to get back to things they had accessed on their mobile device by clicking a single link, rather than having to bookmark them or type in fiddly links. At the moment the tool just lists the resources and searches you've used through the mobile interface.

But our next step is to make a similar tool available through our main library website as a prototype of the ‘articles I’ve viewed’. And that’s where we start to wonder about whether the mobile version of the searches/results should be kept separate from the rest of your activities, or whether user expectations would be that, like a Kindle ebook that you can sync across multiple devices, your searches and activity should be consistent across all platforms?

At the moment our desktop version has all your viewed articles, regardless of the platform you used. But might users want to know in future which device they used to access the material? Perhaps, because some material isn't easily accessible through a mobile device. That opens up another question, in that the mobile version and the desktop version may be at different URLs, so you might want them to be pulled together as one resource with automatic detection of your device when you go to access it.

Next steps
With the data about what resources are being accessed and what library web pages are being accessed it starts to open up the possibility of some more user-centred use of library activity and analytics data.

So you could conceive of spotting a spike of users accessing the Athens problems FAQ page and being able to tie that to users trying to access Athens-authenticated resources. Being able to match activity with students being on a particular module could allow you to automatically push some more targeted help material, maybe into the VLE website for relevant modules, as well as flag up an indication of a potential issue to the technical and helpdesk teams.

You could also contemplate mining reading lists and course schedules to predict when particular activities are due, and automatically push relevant help and support or online tutorials to students. Some of the most interesting areas seem to me to be around building skills and using activity (or lack of activity) to trigger promotion of targeted skills-building activities. So, knowing that students on module X should be doing an activity that involves looking at a particular set of resources, you could detect the students who haven't accessed those resources and offer them some specific help material, or even contact from a librarian. Realistically those sorts of interventions simply couldn't be managed manually and would have to rely on some form of learning analytics-type trigger system.
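As a very rough sketch of what such a trigger might look like – with entirely hypothetical inputs, namely a set of resources an activity expects students to use and a record of what each student has actually accessed:

```python
def students_needing_help(module_students, expected_resources, accesses):
    """Flag students on a module who haven't touched any of the resources an
    activity expects them to use (all names here are hypothetical).

    module_students: iterable of student ids on the module
    expected_resources: set of resource names the activity involves
    accesses: dict mapping student id -> set of resources they've accessed
    """
    flagged = []
    for student in module_students:
        if not expected_resources & accesses.get(student, set()):
            flagged.append(student)  # candidate for targeted help or librarian contact
    return flagged

print(students_needing_help(
    ["s1", "s2"],
    {"JSTOR", "Scopus"},
    {"s1": {"JSTOR"}, "s2": {"Web of Science"}},
))  # ['s2']
```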

One of the areas that would be useful to look at would be some form of student dashboard for library engagement. This could give students some data about the engagement they have had with the library, e.g. resources accessed, library skills activities completed, library badges gained, library visits, books/ebooks borrowed etc. Maybe set that against averages for their course, and perhaps against some metrics about what high-achieving students did on previous runs of the course. Add to that a bookmarking feature, lists of recent searches and resources used, and lists of loans/holds. Finish off with useful library contacts and some suggested activities that might help them with their course, based on what is known about the level of library skills the course needs.
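Purely as a thinking aid, one row of that sort of dashboard might carry fields like these (all names hypothetical – this isn't a description of any existing system):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LibraryEngagement:
    """One student's library-engagement summary for a hypothetical dashboard."""
    student_id: str
    resources_accessed: int
    skills_activities_completed: int
    badges_gained: int
    library_visits: int
    items_borrowed: int
    course_average_resources: float               # average for the student's course
    recent_searches: List[str] = field(default_factory=list)
    suggested_activities: List[str] = field(default_factory=list)
```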

Before you can do some of the more sophisticated learning analytics-type activities, I suspect it would be necessary to have a better understanding of the impact that library activities/skills/resources have on student retention and achievement. And that seems to me to argue for some really detailed work to understand library impact at a 'pedagogic' level.

I’d been thinking early this morning about writing up a blog post around some thoughts about ‘Library Analytics’ and thinking that it was interesting how ‘Library Analytics’ had been used by Harvard for their ‘Library analytics toolkit’ and by others as a way of talking about web analytics, but that neither really seemed to me to quite be analagous to the way that the Learning Analytics community, such as Solar,  view analytics.  There are several definitions about Learning Analytics.  This one from Educause’s 7 things you should know about first-generation learning analytics:

Learning analytics (LA) applies the model of analytics to the specific goal of improving learning outcomes. LA collects and analyzes the “digital breadcrumbs” that students leave as they interact with various computer systems to look for correlations between those activities and learning outcomes. The type of data gathered varies by institution and by application, but in general it includes information about the frequency with which students access online materials or the results of assessments from student exercises and activities conducted online. Learning analytics tools can track far more data than an instructor can alone, and at their best, LA applications can identify factors that are unexpectedly associated with student learning and course completion.

Much of the library interest in analytics seems to me to have been mainly about using activity data to understand user behaviour and make service improvements, but I'm increasingly of the view that whilst that is important, it is only half the story.  One of the areas that interests me about both learning analytics and activity data is the empowering potential of that data as a tool for the user, rather than the lecturer or librarian, to find out interesting things about their own behaviour, get suggested actions or activities, and essentially be able to make better choices.  And that seems to be the key – just as reviews and ratings on sites like TripAdvisor are helping people to be informed consumers, we should be building library systems that help our users to be informed library consumers.

So it was great to see the announcement of the Jisc LAMP project this morning http://infteam.jiscinvolve.org/wp/2013/02/01/jisc-lamp-shedding-light-on-library-data-and-metrics/ – the Library Analytics and Metrics project – talking about delivering a prototype shared library analytics service for UK academic libraries.  I was particularly interested to see that the plan is to develop some use cases for the data, and it was good that Ben Showers shared some of the vision behind the idea.  It's a great first step towards putting the data on a solid, consistent and sustainable basis, and should build a good platform for exploiting that vast reservoir of library data.

"When did someone from Amazon last come round to your door and say sorry, we've changed the interface, would you like some training?" (Dave Coplin from Bing, at FOTE 2012).

I’ve blogged before about the idea that you shouldn’t have to give your users training for them to be able to use your website, so it was quite interesting to hear someone from a large IT company like Bing say pretty much the same thing at FOTE the other week.  And Dave Coplin’s presentation is worth catching up with on the FOTE mediasite (link at the bottom of this blog post).

It was my second time at FOTE, and last time one of my reflections was on the amount of effort they had put into producing Android and iOS apps for the conference.  There was a similar set of apps this year, in green rather than yellow, and it was certainly good to have everything together in a nice neat app.  One thing I did notice, though, was that the attendance list in the app was rather sparse on names.  Not quite sure why, but presumably people had to opt in to have their names included.  In some ways that was a shame as it made it difficult to find out who was there – I only realised that someone who works in the same building as me was at the conference when they asked a question from the audience.  Although a lot of the networking at conferences these days takes place on social networks, mainly Twitter and Google Plus, while the conference is taking place, it's still good to have access to a list of delegates.

Learning Analytics
The first presentation, by Cailean Hargrave from IBM, talked largely about their work in the area of Learning Analytics, using an example from FE. It was really interesting to see a fully worked-through example of the power and reach of learning analytics.  Seeing the tool being used to drive a portal for staff, students and employers throughout the student journey was fascinating.  Seeing examples of how it could be used to make suggestions to students on what they might do to improve their grades was really eye-opening, and touched on some of the potentially scary elements of Learning Analytics.  It goes a long way beyond recommendations into areas where you are trying to shape particular behaviours, and touches on some of the ethical issues that have been raised about learning analytics.

Research Data
I was also really interested to hear about Figshare, a cloud-based repository for researchers' data, which plays into the whole open research data agenda – the talk mentioned the recent Royal Society 'Science as an open enterprise' paper and the push by funders towards open access to research data.  The system seems to be supported through a tie-up with an academic publisher, and it will be really interesting to see whether this is a sustainable model.  It's certainly another option for researchers, and at a time when many institutions are still gearing themselves up to deliver research data management systems it is an interesting alternative solution.

FOTE
For a short one-day conference FOTE packed in a wide range of content, from iPads in learning, through game-based learning, to ebooks and a debate on the hot topic of MOOCs (Massive Open Online Courses).  Some good things to take away from the day.

Presentations from FOTE are all available from:

http://events.mediasite.com/Mediasite/Catalog/Full/9186666a02e542ea9d840c37bfaa19e321
