A quick trip to Manchester yesterday to take part in a Symposium at ALT-C on ‘Big Data and Learning Analytics’ with colleagues from the OU (Simon Buckingham Shum, Rebecca Ferguson, Naomi Jeffrey and Kevin Mayles) and Sheila MacNeill from JISC CETIS (who has blogged about the session here).
It was the first time I’d been to ALT-C and it was just a flying visit on the last morning of the conference so I didn’t get the full ALT-C experience. But I got the impression of a really big conference, well-organised and with lots of different types of sessions going on. There were 10 sessions taking place at the time we were on, including talks from invited speakers. So lots of choice of what to see.
But we had a good attendance at the session, and there seemed to be a good mix of people, with a good debate and questions during the symposium. Trying both to summarise an area like Learning Analytics and to give people an idea of the range of activities going on is tricky in a one-hour symposium, but hopefully it gave enough of a flavour of some of the work taking place and some of the issues and concerns that there are.
Cross-over with other areas
Sheila had a slide pointing out the overlaps between the Customer Relationship Management systems world, Business Intelligence and Learning Analytics, and it struck me that there's also another group in the Activity Data world that crosses over. Much of the work I mentioned (RISE and Huddersfield's fantastic work on Library impact) came out of JISC's Activity Data funding stream, and some of the synthesis project work has been 'synthesised' into a website, 'Exploiting activity data in the academic environment' http://www.activitydata.org/. Many of the lessons learnt that are listed there, particularly around what you can do with the data, are equally relevant to Learning Analytics. JISC are also producing an Activity Data report in the near future.
A lot of the questions in the session were as much about the ethics as the practicality. Particularly interesting was the idea that Learning Analytics risks encouraging a view that so much can be boiled down to a set of statistics, which sounded a bit like norms to me. The sense-making element seems to be really key, as with so much data and statistics work.
I'd talked a bit about also being able to use the data to make recommendations, something we had experimented with in the RISE project. It was interesting to hear views about the danger that recommendations reduce rather than expand choice: people are encouraged to select from a list of recommendations, which reinforces those same recommendations and creates a feedback loop. If you are making recommendations based on what people on a course looked at then I'd agree that this is a risk, especially as I think there's a high probability that people are often looking at resources they have to look at for their course anyway.
When it comes to other types of recommendations (such as 'people looking at this article also viewed this other article', or 'people searching for this term looked at these items') there is still some chance of recommendations reinforcing a narrow range of content, but I'd suggest there is also a chance of serendipitous discovery of material that you might not ordinarily have seen. I'm aware that we've very much scratched the surface with recommendations and used simple algorithms designed around the idea that the more people who viewed that pattern, the better the recommendation. But it may be that more complex algorithms that throw in some 'randomness' might be useful.
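To make the idea concrete, here is a minimal sketch of that kind of co-viewing recommender with a dash of randomness added. This is not the actual RISE algorithm; the data shape and function are invented for illustration. It scores candidate articles by how many users viewed them alongside the target article, then mixes in a randomly chosen article that was never co-viewed with it, to leave room for serendipity:

```python
import random
from collections import Counter

def recommend(target, sessions, top_n=3, explore=1):
    """Recommend articles co-viewed with `target`, plus a random pick.

    `sessions` is a list of per-user sets of viewed article IDs.
    The score is simple co-occurrence: the more users who viewed
    both items, the stronger the recommendation.
    """
    counts = Counter()
    seen_anywhere = set()
    for viewed in sessions:
        seen_anywhere.update(viewed)
        if target in viewed:
            counts.update(viewed - {target})
    popular = [item for item, _ in counts.most_common(top_n)]
    # Inject some 'randomness': occasionally surface an article never
    # co-viewed with the target, to give serendipity a chance.
    unexplored = list(seen_anywhere - set(popular) - {target})
    surprises = random.sample(unexplored, min(explore, len(unexplored)))
    return popular + surprises

sessions = [
    {"a1", "a2", "a3"},
    {"a1", "a2"},
    {"a1", "a4"},
    {"a5", "a6"},
]
print(recommend("a1", sessions, top_n=2))
```

The `explore` parameter controls how much randomness is mixed in; a real system would also weight by recency and filter out items the user has already seen.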
One of the things I think is useful about the concept of recommendations is that people largely accept them (and perhaps expect them), as they are ubiquitous on sites like Amazon. And I wonder if you could almost consider them a personalisation feature that signals your service is modern, up-to-date and engaging with users. For the many library systems that still look old-fashioned and 'librarian'-orientated, perhaps it is equally important to be seen to have these types of features as standard.
Update: Slides from the introductory presentation are here
JISC Activity Data programme and Learning Analytics
A couple of things this week about the activity data projects that JISC funded last year as part of their Information Environment programme. I noticed that Huddersfield are going to be doing some more work on LIDP (the Library Impact Data Project) over the next few months. This phase two includes work on more data sources and a possible data shared service. The screenshot on the left lists the work they are planning to do. More details on their blog. It will be interesting to see how this goes.
On Tuesday this week we did a short lunchtime session for library and other OU staff on the work we did last year on the RISE activity data project. I gave a short presentation on what we did in the project, and Liz (@LizMallett) covered the user evaluation and feedback. We also had a presentation by Will Woods (@willwoods) from IET on the University's work around Learning Analytics. Learning Analytics has now become an important project for the university and it will be interesting to see how this moves forward in the next few months. There is a short blog post on the event on the RISE blog here that includes embedded links to the presentations on RISE.
Moving forward with Activity Data
Since RISE finished we’ve been looking at ways of embedding some of the recommendation ideas into our mainstream services. We’ve still been routinely adding EZProxy data into the RISE database. At the moment we are moving the RISE prototype search interface and the Google gadget across to a new web server as we are closing down the old library website. That should keep the search prototype running for a bit more time. It’s also a chance to tweak the code and sort out any bits that have degraded.
Our website developer (@beersoft) has been building some new features based on the ideas around using activity data. The live library website already displays dynamic lists of resources at a title level in the library resources section on the website http://www8.open.ac.uk/library/library-resources.
One of the prototypes takes the standard resource lists (which are at a title level) and shows the most recently viewed articles from those journals, using the data from the RISE database. The screenshot shows one of the current prototypes. So users would not only see the relevant journal title (with a link at the title level), but would also see the most recently viewed articles from that journal. For users that are logged in it would also be feasible to show the articles viewed by people on their course, or even their own recently viewed articles.
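A rough sketch of how that lookup might work is below. The event structure is invented for illustration (the real RISE database schema will differ): given a stream of article-view events, we pull the most recently viewed articles for a given journal title, which is what the prototype displays alongside the title-level link:

```python
from datetime import datetime

# Each view event: (journal_title, article_title, timestamp).
# This flat structure is an assumption for the sketch, not RISE's schema.
views = [
    ("Journal of Documentation", "Article A", datetime(2012, 5, 1)),
    ("Journal of Documentation", "Article B", datetime(2012, 5, 3)),
    ("Ariadne", "Article C", datetime(2012, 5, 2)),
    ("Journal of Documentation", "Article D", datetime(2012, 5, 2)),
]

def recent_articles(journal, views, limit=2):
    """Return the most recently viewed article titles for a journal."""
    matching = [(ts, art) for (j, art, ts) in views if j == journal]
    matching.sort(reverse=True)  # newest first
    return [art for _, art in matching[:limit]]

print(recent_articles("Journal of Documentation", views))
# → ['Article B', 'Article D']
```

Filtering the same events by course code or by user ID would give the course-level and personal 'recently viewed' variants mentioned above.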
We've been starting to think about how best to present these new ideas on the website, as we want to gauge user reactions to them. Thinking at the moment is that we want to keep them separate from the 'production' spec services, so we would have them in a separate 'innovation' or 'beta' space. I quite like the Cornell CULLABS or the Harvard Library Innovation Lab as a model to follow.
I thought I'd cover two quite different things in this blog post, but thinking about them there is actually a connection: both are elements of how users value academic libraries and how that value can be seen and measured. One was a library seminar presented by Carol Tenopir from the University of Tennessee on measuring library outcomes and value. The other was Huddersfield's new Lemon Tree library learning game, where users get points for carrying out library activities.
This is a new game just launched by Huddersfield at https://library.hud.ac.uk/lemontree/. Created by developers Running in the Halls, the library game gives users points for carrying out activities in the library and using library resources. In their words:
There's also a project blog here with some useful technical details about how the system has been set up. The game lets users log in with a Facebook login, something that will be really familiar to students, and the site itself has a modern, informal look that is a world away from the usual formal, stuffy academic library sites.
It will be interesting to see how the game takes off with users. Huddersfield’s LIDP work seems to have established a connection between library use and student achievement, so it will be really fascinating to see how Lemon Tree might encourage more student use of the library and how it may affect student behaviour.
I do wonder whether something like this would work in every academic environment. It might appeal more to students who are particularly engaged with social networking. And with students becoming ever more focussed on getting the best return on their investment, they might want to know what they get in return for playing the game. I'm looking forward to hearing more about Lemon Tree.
Carol Tenopir library seminar on the value of academic libraries
It’s always useful to hear about ways of measuring the value of libraries in ways that go beyond usage statistics. So it was really good to hear Carol Tenopir talking about some of the work to come out of her recent projects and particularly from the Lib-Value project.
Carol Tenopir is a Chancellor’s Professor at the School of Information Sciences at the University of Tennessee, Knoxville and the Director of Research for the College of Communication and Information, and Director of the Center for Information and Communication Studies.
Ranging from Fritz Machlup's definitions of value in terms of purchase or exchange value (what you are willing to pay) and use value (described as 'favourable consequences derived from reading and using information'), through the Economic, Social and Environmental values used by Bruce Kingma, and on to Implicit values (usage), Explicit values (outcomes) and Derived values (Return on Investment), we had a thorough introduction to some of the work going on in this area.
What was particularly useful was to hear about the Critical Incident technique used in reading and scholarship surveys. In this case academics are asked in detail about the last article they read (the ‘Critical Incident’). These surveys have shown that the majority of the articles are being supplied by the library, but not read in the library. Over half of the academics surveyed said that the outcome of reading the article was ‘New thinking’.
Carol also talked about Return on Investment and particularly contingent valuation, an economic model that tries to calculate how much it would cost to do the things the library does if you had to do them yourself. So instead of the library buying a subscription to that electronic journal, how much would it cost you on a pay-per-view basis? It was particularly useful to find out about the National Network of Libraries of Medicine value calculator (and here).
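The contingent-valuation arithmetic is simple enough to show in a few lines. All the figures below are made up purely for illustration, not real subscription or pay-per-view prices:

```python
# Contingent valuation in miniature: compare what the library pays for a
# journal subscription with what users would pay per article without it.
subscription_cost = 2000.0   # assumed annual subscription price
pay_per_view_price = 30.0    # assumed per-article pay-per-view charge
annual_downloads = 450       # assumed article views in the year

# What it would cost users to obtain the same articles themselves.
replacement_cost = annual_downloads * pay_per_view_price

# Value returned per pound the library spends.
roi = replacement_cost / subscription_cost

print(f"Replacement cost: £{replacement_cost:,.2f}")   # £13,500.00
print(f"Return per £1 spent: £{roi:.2f}")              # £6.75
```

This is the same shape of calculation the value calculators mentioned above perform, though they typically add staff time and other costs on top of the per-item price.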
All in all, a really good hour with lots of useful techniques and information about different ways of thinking about the value of academic libraries. (A video of this seminar is now available here.) For me, what is interesting about these two items is that both treat value as something expressed directly by what users say (the critical incident) or what they do (the Lemon Tree library activities).