You are currently browsing the tag archive for the ‘University of Huddersfield’ tag.
I thought I’d cover two quite different things in this blog post, but on reflection there is a connection between them: both are about how users value academic libraries and how that value can be seen and measured. The first was a library seminar presented by Carol Tenopir from the University of Tennessee on measuring library outcomes and value. The second is Huddersfield’s new Lemon Tree library learning game, where users earn points for carrying out library activities.
This is a new game just launched by Huddersfield at https://library.hud.ac.uk/lemontree/. Created by developers Running in the Halls, the game gives users points for carrying out activities in the library and using library resources. In their words:
There’s also a project blog here with some useful technical details about how the system has been set up. The game lets users log in with their Facebook login, something that will be really familiar to students, and the site itself has a modern, informal look that is a world away from the usual formal, stuffy academic library sites.
It will be interesting to see how the game takes off with users. Huddersfield’s LIDP work seems to have established a connection between library use and student achievement, so it will be really fascinating to see how Lemon Tree might encourage more student use of the library and how it may affect student behaviour.
I do wonder whether something like this would work in every academic environment; it might appeal most to students who are particularly engaged with social networking. And with students becoming ever more focussed on getting the best out of their investment, they may well want to know what they get in return for playing the game. I’m looking forward to hearing more about Lemon Tree.
Carol Tenopir library seminar on the value of academic libraries
It’s always useful to hear about ways of measuring the value of libraries in ways that go beyond usage statistics. So it was really good to hear Carol Tenopir talking about some of the work to come out of her recent projects and particularly from the Lib-Value project.
Carol Tenopir is a Chancellor’s Professor at the School of Information Sciences at the University of Tennessee, Knoxville, Director of Research for the College of Communication and Information, and Director of the Center for Information and Communication Studies.
We had a thorough introduction to some of the work going on in this area, ranging from Fritz Machlup’s definitions of value in terms of purchase or exchange value (what you are willing to pay) and use value (described as ‘favourable consequences derived from reading and using information’), through the Economic, Social and Environmental values used by Bruce Kingma, to Implicit value (usage), Explicit value (outcomes) and Derived value (Return on Investment).
What was particularly useful was to hear about the Critical Incident technique used in reading and scholarship surveys. In this case academics are asked in detail about the last article they read (the ‘Critical Incident’). These surveys have shown that the majority of the articles are being supplied by the library, but not read in the library. Over half of the academics surveyed said that the outcome of reading the article was ‘New thinking’.
Carol also talked about Return on Investment and particularly contingent valuation, an economic model that tries to calculate how much it would cost to do the things the library does if you had to do them yourself. So instead of the library buying a subscription to an electronic journal, how much would it cost you to get the same articles on a pay-per-view basis? It was particularly useful to find out about the National Network of Libraries of Medicine value calculator (and here).
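As a toy illustration of the contingent-valuation idea, you could value a journal subscription at what its year of downloads would have cost on a pay-per-view basis. This is only a sketch: the function name and all the figures below are invented for illustration, not real pricing data or anything from the Lib-Value project.

```python
def contingent_value(downloads, pay_per_view_price, subscription_cost):
    """Value a subscription at what users would otherwise have paid.

    replacement_cost is what the year's downloads would have cost
    if each article had been bought pay-per-view instead.
    Returns (net benefit, return-on-investment ratio).
    """
    replacement_cost = downloads * pay_per_view_price
    net_benefit = replacement_cost - subscription_cost
    roi = replacement_cost / subscription_cost
    return net_benefit, roi

# e.g. 1,200 downloads a year at £25 per article vs a £6,000 subscription
net, ratio = contingent_value(1200, 25.0, 6000.0)
# net = 24000.0 (the library 'saves' £24k), ratio = 5.0 (£5 of value per £1 spent)
```

The NLM value calculator mentioned above works along broadly similar lines, multiplying usage counts by estimated replacement costs.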
All in all, a really good hour with lots of useful techniques and information about different ways of thinking about the value of academic libraries. (A video of the seminar is now available here.) For me, what is interesting about these two items is that both treat value as something expressed directly by what users say (the critical incident) or by what they do (the Lemon Tree library activities).
What do you want a user to do as a result of your recommendation?
If you are offering recommendations to users then you may have some specific outcomes you want to achieve. On Amazon, the ‘people who bought this also bought that’ recommendations seem firmly aimed at increasing sales. What I wonder about Amazon is whether they also broaden or narrow the range of titles sold. Do they encourage customers to buy items they wouldn’t normally have considered? I’m sure that is true to some extent, but are they reinforcing ‘bestseller’ lists by encouraging customers to buy the same items other people have bought, or are they encouraging purchases from backlists, exploiting the ‘long-tail’ of books that are available?
There’s evidence from Huddersfield, reported in Dave Pattern’s blog here, that adding recommendations to the catalogue increased the number of titles being borrowed. His slides include an interesting chart (reproduced on the left) showing how the range of titles borrowed increased, so recommendations are clearly impacting the long-tail of stock within a library. The SALT project with John Rylands and MIMAS is specifically looking at how recommendations might encourage humanities researchers to exploit underused materials in catalogues. SALT, like RISE, is funded as part of the JISC Activity Data strand.
With the RISE project we are working with a narrower set of data, in that the recommendations database only contains entries for articles that have already been accessed. So there is less direct opportunity to exploit the long-tail of articles by showing them as recommendations. But our interface uses the discovery service search, so users see EDS results from that service alongside our recommendations, and that should gradually broaden the range of recommendations in the database.
One other aspect of recommendations that has come up is the extent to which they may be time-dependent for HE libraries. I was talking through some RISE ideas with Tony Hirst (his blog is at ouseful.info) the other week, and he challenged us to think about when recommendations will be useful to a student.
We build and run our courses in a linear fashion: students go step by step through their studies, doing assignment x on subject y and looking at resources z, then moving on to the next piece of coursework. So with recommendations reflecting what happened in the past, there’s a danger that the articles students on my course have been looking at all relate to last week’s assignment and not this week’s.
So that introduces a time element. A student may be more interested in what students looked at the last time the assignment was set, which may have been a year ago (for the Open University, where some modules run twice a year and some run yearly, the interval can even differ from course to course). That implies you might want to build a time element into your recommendation algorithm: take the current date, work out how far the student is from the course start date, and map that offset onto the previous presentation of the course. We discussed that you would need to factor in a window either side to cope with the spread of time over which students might be working on an assignment. At the moment it’s a moot point for us, as our data only goes back to the end of 2010, so we can’t make those sorts of recommendations anyway. But it’s certainly something that needs to be considered.
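A minimal sketch of how that time-windowed lookup might work, assuming a hypothetical store of (article, access date) pairs from the previous presentation of the course. All names, dates and the data layout here are invented for illustration; this is not the RISE implementation.

```python
from datetime import date, timedelta

def timely_recommendations(accesses, today, course_start,
                           previous_start, window_days=14):
    """Recommend articles accessed at the same point in the previous
    presentation of the course.

    Works out how many days the student is into the current
    presentation, maps that offset onto the previous presentation,
    and returns articles accessed within +/- window_days of that
    point (the window covers the spread of time students spend on
    an assignment).

    accesses: list of (article_id, access_date) pairs from the
    previous presentation (hypothetical data layout).
    """
    offset = (today - course_start).days
    target = previous_start + timedelta(days=offset)
    lo = target - timedelta(days=window_days)
    hi = target + timedelta(days=window_days)
    return sorted({art for art, d in accesses if lo <= d <= hi})

# Example: a module that runs yearly; what were last year's students
# reading at this point in the course?
accesses = [
    ("article-A", date(2010, 3, 10)),  # near this point last year
    ("article-B", date(2010, 5, 2)),   # a later assignment
]
recs = timely_recommendations(
    accesses,
    today=date(2011, 3, 12),
    course_start=date(2011, 2, 1),
    previous_start=date(2010, 2, 1),
)
# recs = ['article-A']
```

For modules that run twice a year the `previous_start` would simply point at the most recent earlier presentation, which is where the course-to-course variation mentioned above comes in.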
(The blog post title owes a lot to Alan Sillitoe’s story and film of the same name, ‘The Loneliness of the Long-Distance Runner’.)
“Every day I wake up and ask, ‘how can I flow data better, manage data better, analyse data better?’”
Rollin Ford, the CIO of Wal-Mart
Quoted in ‘A special report on managing information: Data, data everywhere’, The Economist (London), 27 February 2010, p. 71
Libraries and their attitude to user activity data.
In the commercial world there are countless examples of the private sector using data about its customers, from Wal-Mart’s CIO (quoted above) through supermarkets’ use of loyalty cards to the recommendations that are commonplace on websites such as Amazon. But examples of libraries using this type of data are still quite rare, and libraries have been very slow to take advantage of the vast pool of data they hold about the behaviour of their users. Libraries have long used their systems to count how many items have been borrowed or bought, but have been strangely reluctant to look in detail at what people are borrowing and to use that data to help users make better-informed choices.
Some work has been done through the TILE and MOSAIC projects; the latter included anonymised circulation data made available by Huddersfield University, which was used to run a competition encouraging ideas around the use of that data. JISC also ran an event earlier in the year on this area, ‘Gaining Business Intelligence from User Activity Data’, which has been written up here and in the ALT newsletter. Dave Pattern at Huddersfield is probably furthest along in working in this area, and his blog is a good source of ideas about what can be achieved with user activity data.
Following on from the event in the summer, JISC have clearly been thinking about how to increase the pool of examples of how user activity data can be used, and have included it as one of the strands in their recently announced Funding Call 15/10. With £500k available for 7-10 six-month projects to take place in the early part of 2011, there’s an opportunity for libraries to get involved in developing new ideas about how to use user activity data.
User activity data is a particularly interesting area for me because a good deal of the work done so far has been around loan data. Working in a library where students don’t borrow books from us, or even visit the library, we have to look at other sources of data. Most of our users engage with us through our e-resources, and we are looking at how we can collect, analyse and use that data to improve services and offer recommendations that help users get more out of their e-resource usage.