You are currently browsing the monthly archive for November 2011.
I thought I’d cover two quite different things in this blog post, but on reflection there is actually a connection between them: both are about how users value academic libraries and how that value can be seen and measured. One was a library seminar presented by Carol Tenopir from the University of Tennessee on measuring library outcomes and value. The other was Huddersfield’s new Lemon Tree library learning game, where users earn points for carrying out library activities.
This is a new game just launched by Huddersfield at https://library.hud.ac.uk/lemontree/. Created by the developers Running in the Halls, the game gives users points for carrying out activities in the library and using library resources. In their words:
There’s also a project blog here with some useful technical details about how the system has been set up. The game lets users log in with a Facebook login, something that will be really familiar to students, and the site itself has a modern, informal look that is a world away from the usual formal, stuffy academic library sites.
It will be interesting to see how the game takes off with users. Huddersfield’s LIDP work seems to have established a connection between library use and student achievement, so it will be really fascinating to see how Lemon Tree might encourage more student use of the library and how it may affect student behaviour.
I do wonder whether something like this would work in every academic environment. It might appeal more to students who are particularly engaged with social networking. And with students becoming ever more focussed on getting the best out of their investment, they might want to know what they get in return for playing the game. I’m looking forward to hearing more about Lemon Tree.
Carol Tenopir library seminar on the value of academic libraries
It’s always useful to hear about ways of measuring the value of libraries in ways that go beyond usage statistics. So it was really good to hear Carol Tenopir talking about some of the work to come out of her recent projects and particularly from the Lib-Value project.
Carol Tenopir is a Chancellor’s Professor at the School of Information Sciences at the University of Tennessee, Knoxville and the Director of Research for the College of Communication and Information, and Director of the Center for Information and Communication Studies.
Ranging from Fritz Machlup’s definitions of value in terms of purchase or exchange value (what you are willing to pay) and use value (described as ‘favourable consequences derived from reading and using information’), through the economic, social and environmental values used by Bruce Kingma, and on to implicit values (usage), explicit values (outcomes) and derived values (return on investment), we had a thorough introduction to some of the work that is going on in this area.
What was particularly useful was to hear about the Critical Incident technique used in reading and scholarship surveys. In this case academics are asked in detail about the last article they read (the ‘Critical Incident’). These surveys have shown that the majority of the articles are being supplied by the library, but not read in the library. Over half of the academics surveyed said that the outcome of reading the article was ‘New thinking’.
Carol also talked about Return on Investment and particularly contingent valuation, an economic model that tries to calculate how much it would cost to do the things the library does if you had to do them yourself. So instead of the library buying a subscription to that electronic journal, how much would it cost you on a pay-per-view basis? It was particularly useful to find out about the National Network of Libraries of Medicine value calculator (and here).
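The contingent valuation idea boils down to simple arithmetic; here is a minimal sketch in Python, with entirely hypothetical figures chosen for illustration (no real subscription or pay-per-view prices are implied):

```python
# Back-of-envelope contingent valuation: what would library-supplied
# articles cost if each reader paid per view instead?
# All figures below are hypothetical, for illustration only.

def contingent_value(articles_read, pay_per_view_price, subscription_cost):
    """Return the implied saving from the subscription over pay-per-view."""
    do_it_yourself_cost = articles_read * pay_per_view_price
    return do_it_yourself_cost - subscription_cost

# e.g. 1,200 article downloads a year at $30 per view vs a $5,000 subscription
saving = contingent_value(1200, 30.0, 5000.0)
print(saving)  # 31000.0
```

If the saving comes out negative, pay-per-view would actually have been cheaper, which is exactly the kind of comparison the model is designed to surface.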
All-in-all a really good hour with lots of useful techniques and information about different ways of thinking about the value of academic libraries. (A video of this seminar is now available here). For me, what is interesting about these two items is that both covered value as being expressed directly by what users say (the critical incident) or what they do (the Lemon Tree library activities).
Copper and fibre-optic cables
There have been a few news reports recently about increasing numbers of thefts of copper wire (see this report from the BBC for example). The trigger seems to be that the price of copper has increased substantially over recent years, although this chart from Metalprices suggests prices dipped in 2007/08 before climbing back up substantially. But what struck me about it was that much of our domestic voice and data network is presumably based around miles and miles of copper cable. So if there’s that much value in copper cabling that thieves are ripping up chunks of it for the scrap metal value, then wouldn’t rolling out fibre optic cables to the home and taking up the copper cables pay for itself? Just a thought. I’m sure there are all sorts of reasons why not, ranging from the cost of the fibre optic cables, to the cost of doing the new cabling, and the new infrastructure that would be needed. But how high would the cost of copper have to go before there is more value in the cables in the ground than it would cost to replace them all with fibre?
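That break-even question can be framed as a one-line calculation; here is a back-of-envelope sketch where every number is a made-up placeholder, not real telecoms or metals data:

```python
# Rough break-even sketch: at what copper price does the scrap value of
# buried copper cable cover the cost of replacing it with fibre?
# Both constants below are invented placeholders for illustration.

COPPER_KG_PER_KM = 100.0      # assumed copper content per km of cable
FIBRE_COST_PER_KM = 20000.0   # assumed cost to lay fibre per km

def breakeven_price_per_kg(copper_kg_per_km=COPPER_KG_PER_KM,
                           fibre_cost_per_km=FIBRE_COST_PER_KM):
    """Copper price (per kg) at which recovered scrap pays for new fibre."""
    return fibre_cost_per_km / copper_kg_per_km

print(breakeven_price_per_kg())  # 200.0
```

With the placeholder figures above, copper would need to reach 200 (per kg, in whatever currency) before the scrap in the ground paid for the fibre; plugging in real figures would obviously move that number a long way.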
Small screens/large screens
I was at a presentation during the week listening to a talk about the University’s work on optimising the virtual learning environment and other websites and systems for mobile learners. [Liveblogged here by Doug Clow]. Much of the effort is around getting the various websites to work in an optimal fashion on small-screen devices. The growth in use of these devices is phenomenal and accelerating, with traffic levels this year three times those seen last year. That’s something that we are very aware of, and although our new Drupal website is already mobile-friendly, we have been working on a new version of our mobile library website that should be out soon.
What interested me particularly is that a couple of years ago there was very little traffic from tablet devices, but now there are significant amounts of traffic from them (slightly obscured by iOS traffic for both iPhones and iPads being lumped in together in many of the statistics). One of the comments made was that tablet users expect to get a near ‘desktop’ experience, but many websites still treat them as if they were a touchscreen phone, so many functions won’t actually work. Yet the experience of the two devices is quite different and the expectations of users also seem to me to be different. We’re certainly starting to look at what our websites look like to a variety of tablet users on Android and iOS, as well as the usual list of different phones that we now routinely test.
And then yesterday morning listening to the radio there was an advert for a Smart TV with built-in internet access. Which made me think that actually the range of devices you’ve got to be designing for goes from the smallest of mobile phone handsets, through tablets, to netbooks, laptops and desktops, right up to large screen domestic TVs. But the user experience and expectations are surely going to be completely different on those different levels of device. Are you likely to be wanting to do exactly the same on each device? We’ve already taken the view that mobile phone users probably want different things (based on a research report from one of my colleagues) so are building cut-down versions of the main library website. But if you’ve got a much larger screen display does the opposite apply? Will Smart TV users want a different version of your website, with more interactivity, more multimedia? Mmm must get my order in for a big new Smart TV then, for website testing purposes you understand.
For a new project we’ve just started we have been exploring using a set of agile methodologies. This is to see if we can find a more flexible method of building systems than our standard approach of trying to write a comprehensive set of user requirements, functional specifications and technical specifications to cover the whole of a new system.
From some of the projects that we’ve done in the past we’ve recognised that there is a risk that requirements will change through the project. You can end up building something that, at the start of the project, seemed to be exactly what everyone wanted, but by the time the project is well advanced you realise that requirements have moved on. This either leads to projects delivering something that no one really wants, or to massive scope creep and a never-ending battle to keep pace with an ever-growing list of new features. Agile development seeks to find a way out of that maze.
SCRUM and User Stories
One example of an agile methodology is SCRUM, a technique used in software development where development phases are referred to as sprints. A sprint is a development activity taking place over a relatively short period of time with well-defined and potentially quite narrow objectives. One of the techniques often used to define the user requirements is something called ‘User Stories’.
A User Story is essentially a statement along the lines of:
As a … (user)
I want to… (something)
In order to … (benefit)
The process you follow is to get your user or users to write out a series of user stories that cover the new system that they want you to build. These user stories can encompass a range of different requirements, functional or technical for example. Once your users have written their user stories you then take them and group them together into similar features or functions. You can choose whether you get users to prioritise them when they write the original user stories or you can do the prioritisation with the user representative once they are put together and sorted. The idea is that at the end of the process you have a priority list that you can use to identify what development you should undertake as part of the first sprint.
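The grouping and prioritising steps described above could be sketched roughly like this in Python. The story fields, feature names and priority numbers are all invented for illustration, not taken from our actual project:

```python
# Sketch of collecting user stories, grouping them by feature, and
# sorting each group by priority to build a first-sprint candidate list.
from collections import defaultdict

stories = [
    {"as_a": "student", "i_want": "to renew my loans online",
     "in_order_to": "avoid fines", "feature": "loans", "priority": 1},
    {"as_a": "student", "i_want": "to see my current loans",
     "in_order_to": "plan my reading", "feature": "loans", "priority": 2},
    {"as_a": "researcher", "i_want": "to search journals by subject",
     "in_order_to": "find relevant articles", "feature": "search", "priority": 1},
]

# Group similar stories together under a feature heading...
grouped = defaultdict(list)
for story in stories:
    grouped[story["feature"]].append(story)

# ...then order each group so the highest-priority story comes first.
for feature in grouped:
    grouped[feature].sort(key=lambda s: s["priority"])

print(sorted(grouped))  # ['loans', 'search']
```

Whether the priorities are set when the stories are written or afterwards with a user representative, the end result is the same structure: a ranked list per feature from which the first sprint can be chosen.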
Thoughts so far
It’s the first time we’ve tried this particular method and it takes a bit of getting used to. Writing the right sort of user stories is not as straightforward as we’d expected. They need to be really tightly focused on what users want to do with the system, not too aspirational, and there really needs to be a boundary or scope to what they are writing User Stories about. It also seems quite easy to miss out features of the system that are going to be needed but haven’t been mentioned in the User Stories. But we are learning more as we go along and realise that as we progress we will create new User Stories to fill the gaps. That seems to be one of the key aspects of agile.
So the first update for iOS 5 seems to be out, iOS 5.0.1, and it seemed a good chance to try out how the updates work now you don’t have to do them through being connected up to iTunes on the PC.
The first sign that there was an update available was a message on screen and also a red icon on the corner of the Settings icon (the same as you see against the Apps icon whenever there is an update). When you go into Settings there’s a 1 against General, and then another against the Software Update setting under General. Running the update seems pretty straightforward. The screen goes blank and you get an Apple logo and a progress bar; once that completes it goes blank again, then you get another Apple logo and progress bar. Once completed it takes you to the ‘unlock’ screen.
All in all a pretty painless operation that took about five minutes. It’s good that you don’t have to link it up to the PC, but I do wonder how you are supposed to back out of it if it goes wrong. How do you restore it? From iCloud somehow?
Tools for managing the project
Mention it quietly, but we’ve gone through our whole new library website project without using a particular bit of software beloved (or be-loathed? not sure if that’s a real word) of project managers the world over, namely Microsoft Project. There’s no doubt that it’s a powerful and flexible tool, and used with skill it can be a great help in planning complex projects. But that flexibility comes at the price of being immensely time-consuming to get everything set up, and with several people on the project who hadn’t used MS Project before, we decided early on that we would just use the Excel gantt chart plug-in that I’d blogged about earlier in the year.
So we’ve now gone through the whole project without using MS Project and haven’t really missed it. We’ve been able to record the tasks that we need to, and keep track of when activities are taking place. Although using the gantt chart template means more manual tweaking of the gantt chart elements, I’m finding that preferable to having to undo the automatic things that Project has done (or at least trying to work out what setting to change to stop it automatically trying to show things in a particular way).
The big advantage is that people don’t need MS Project to be able to work on the file. Everyone has got Excel. You don’t have to export the charts in a particular format so people can view them, everyone can view the charts through Excel. As we’ve also started using Zoho Projects for another project that we’re now doing, it may be that we won’t use MS Project much in the future.
Tracking project progress
We’ve also used Excel for a simple visualisation to show project progress. Taking the list of tasks for each workpackage we’ve just used a simple 100% horizontal bar chart to show the progress for each workpackage. So completed tasks show as green, those in progress as yellow, and those to be started as red. It gives a simple visual clue about the progress towards completing all the tasks in each workpackage. It meant that by displaying the chart and then talking through the detail we were able to provide a simple update of the project progress.
Obviously there are some limitations to this approach in that it only looks at the number of tasks and doesn’t take account of the amount of resource needed or used. But it would be possible just to graph those elements quite easily.
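The counting behind that 100% bar chart is trivial to reproduce; here is a small Python sketch, with the workpackage data and status labels invented for illustration:

```python
# Sketch of the workpackage progress summary behind a 100% stacked bar:
# count tasks by status and convert the counts to percentage shares.

def progress(tasks):
    """Return (done, in_progress, not_started) shares as percentages."""
    total = len(tasks)
    counts = {"done": 0, "in_progress": 0, "not_started": 0}
    for status in tasks:
        counts[status] += 1
    return tuple(round(100 * counts[k] / total, 1)
                 for k in ("done", "in_progress", "not_started"))

# One hypothetical workpackage: two tasks done, one underway, one not started.
wp1 = ["done", "done", "in_progress", "not_started"]
print(progress(wp1))  # (50.0, 25.0, 25.0)
```

The three percentages map directly onto the green, yellow and red segments of the chart; weighting each task by its resource estimate rather than counting them equally would address the limitation mentioned above.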
The final tool we’ve been using is Mantis Bug Tracker. Unsurprisingly we’ve used this to keep track of all the issues we need to fix on the new website. Although the tool certainly has a lot more features than we need, there is a good workflow/feedback process and it has worked reasonably well for us. Within the web team we’ve used it extensively and tried to encourage as many library staff as possible to log things directly into the system, and not to worry too much if they can’t fill out every section as long as they put in enough to identify the problem.
The feedback processes and the way you can add notes and comments seems to work really well. By the end we had a grand total of 170 bugs recorded in the system. There are just a handful still outstanding now.
One of the things I noticed at the very first event I went to after joining an academic library was the remarkable amount of connectivity needed by people attending. Almost everyone had a laptop or netbook, and was taking notes, checking their email or browsing the web. So wifi connectivity at conferences and events is a really critical factor. As I’ve been to more events at a number of different institutional venues over the last couple of years, I’ve started to notice quite a few different approaches being taken. And that seems interesting, as I’m presuming that most institutions are users of JANET and therefore need to comply with the same Acceptable Use Policy, yet each institution comes up with a different solution.
So far I’ve come across five different types:
- Type A
Offer a separate named conference wifi service and provide you with the wifi access key, either in the conference/seminar materials or on posters/flipcharts etc. All participants are using the same network and access key.
- Type B
Have a separate wifi service for each meeting room and make the access key available at the meeting. All participants in that meeting use the same network and access key.
- Type C
Offer Eduroam and nothing else. You can only get network access if you are from another HE institution and have already set up Eduroam on your device.
- Type D
Use the standard institutional wifi network but provide, on request, an individual wifi access code, often by preprinting them on a sheet with details of the acceptable usage policy. Attendees have an individual access code but no record is kept of who has which code.
- Type E
Provide a specific conference wifi network. Attendees can sign up for an individual wifi access key that generally works for the day.
I’m sure there are other models out there, but I find it interesting that there is so much institutional variation in something that is vital for any venue that hosts meetings with people from outside that institution. And with institutions perhaps looking at conferences and seminars as a way of drawing in some extra income, having a robust, reliable and easy-to-use wifi network for guests at your event is pretty much essential.
On my way to work this morning, on a typical grey and murky November morning, it occurred to me that I tend to associate particular months with distinct colours. The murky, foggy November days seem to characterise the month as grey.
So if November is grey, then December is black, with a few bright lights shining in the dark: dark mornings and evenings and the lights of Christmas decorations and markets. [I’ve just realised that my twitter picture is exactly that, with a picture of the bright lights of a fairground wheel at night – actually taken in December on Princes Street in Edinburgh].
January I think of as white, with frost and possible snow, maybe slightly grey-tinged and slushy. February seems to be a brighter white to me, of snowdrops and more snow, while March brings to mind the ecclesiastical purple of crocuses with a dash of bright saffron.
As the year moves on, in April the bright yellow of daffodils seems to be the colour that to me represents the real signs of the warmer weather to come. Which leads into May, a bright, fresh yellowy-green, of new shoots and crispness. June somehow brings to mind the sound of skylarks and songbirds rather than a colour, and then into July, with blue skies and blue seas, a bright piercing colour.
August seems to me to be orange, a bright fiery golden colour of burning sun and abundant flowers, while September is a paler yellow turning into a darker red as leaves take on their autumn shades. For October what comes to mind is brown, of trees bare of leaves, of mud, of colder, damper times.
A strange set of reflections maybe but something to think about over my lunchbreak. A break from thinking about websites and funding bids etc whilst eating lunch.
Our new library website went live earlier this week at www.open.ac.uk/library after many months of planning, development and testing. The site is largely built using Drupal on the OU’s standard Drupal infrastructure, which uses a restricted range of Drupal modules to make it easier to support and keep up to date. The object of the exercise has mainly been to move away from the old library website technology of ColdFusion, as that is being phased out. So although we’ve taken the opportunity to update a few features and introduce a more modern, corporate-standard look and feel, the main aim has been to change to a more modern platform.
The guiding principles have been to try to address issues raised by users in surveys and feedback. So we’ve been trying to improve the navigation, simplify the search features and make our help and support offer more obvious through the website. Our helpdesk email, webchat and phone contact numbers are in a block on every page, and we’re also showing online training sessions prominently on the home page, as users have commented that they couldn’t find them easily.
We’ve also introduced a new ‘How Do I’ feature using the Drupal Similar Terms module. This shows links to other help pages in the website that are relevant to the page that you are on. It uses metadata within the page to match against metadata on other help-type pages and then shows a list of relevant pages.
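The idea behind that kind of metadata matching can be sketched in a few lines of Python. This mimics the concept rather than the module’s actual code, and the page names and terms are invented for illustration:

```python
# Sketch of a 'Similar Terms'-style feature: rank other help pages by
# how many metadata terms they share with the current page.

pages = {
    "renewing-books": {"loans", "renewals", "fines"},
    "paying-fines": {"fines", "payments"},
    "finding-journals": {"journals", "search"},
}

def similar_pages(current_terms, pages):
    """Return page names sharing at least one term, most overlap first."""
    scored = [(len(current_terms & terms), name)
              for name, terms in pages.items()
              if current_terms & terms]
    return [name for score, name in sorted(scored, reverse=True)]

print(similar_pages({"loans", "fines"}, pages))
# ['renewing-books', 'paying-fines']
```

Pages sharing more terms with the current page float to the top of the list, which is essentially the ranking behaviour the ‘How Do I’ block relies on.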
The other main feature that is a bit different to before is that we’ve extended the dynamically-generated content to most of the Library resources pages. This uses Ajax to show lists of selected resources from a back-end MySQL database. The website developer has been able to update the way the subject lists (Selected resources for your study) work, so instead of having to pick from one drop-down list, then a second drop-down list, then click a radio button to see resources, there is a more modern approach with a menu and filter buttons.
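The shift from cascading drop-downs to filter buttons is essentially a move from three sequential choices to a single filtered query over the resource records. A rough Python sketch, with the resource data and field names invented for illustration:

```python
# Sketch of a single-step resource filter replacing cascading drop-downs:
# one query over the records instead of drop-down -> drop-down -> button.

resources = [
    {"title": "PsycINFO", "subject": "psychology", "type": "database"},
    {"title": "JSTOR", "subject": "history", "type": "database"},
    {"title": "Nature", "subject": "science", "type": "journal"},
]

def filter_resources(resources, **criteria):
    """Return resources matching every supplied field=value criterion."""
    return [r for r in resources
            if all(r.get(field) == value for field, value in criteria.items())]

print([r["title"] for r in filter_resources(resources, type="database")])
# ['PsycINFO', 'JSTOR']
```

Each filter button just adds or removes a criterion and re-runs the same query, which is why the interaction feels so much lighter than the old three-step sequence.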
One of the limitations we have is that there are a lot of links to the website and course activities in Virtual Learning Environment courses, so we’ve had to keep the old website running while we get all of those updated. This has the advantage of making the changeover a bit less fraught, as both sites are still running, but it does change how you have to approach redirection, as you can’t redirect everyone away from the old site.
We’ll be assessing the feedback over the next few weeks and working out what further evaluation and testing we need to do. There will also be a whole list of things to do to plan the closedown of the old site, add further features to the site and prepare for the next set of challenges.
Being off the beaten track last week without a mobile phone or network signal I missed the debate on twitter and then on various blogs about Kindles and libraries. Catching up this week on the debate particularly with the blog posts by Ian Clark and Simon Barron it struck me as interesting that both identified the ease of use of the Amazon device and the ethical dimension of a ‘locked-in’, proprietary format and potentially monopolistic model as key aspects, despite having different views on whether to buy or not to buy a Kindle.
As someone who has bought a Kindle, uses it regularly and wasn’t persuaded to buy an ebook reader until Kindles became established, I’m probably one of those who has been convinced that the convenience of Kindles, particularly the ability to read your ebooks on a range of different devices, outweighs reservations about the proprietary format. It isn’t just the quality of the ebook reading experience that sells the Kindle; it is the infrastructure that can deliver your content to whatever device you choose (and then to any of your other devices as well) with a minimum of effort. And that convenience hides the fact that your purchases are effectively stored ‘in the cloud’ and under the control of Amazon, that you license those books rather than own them, and that you are going to be pretty much exclusively buying your books from Amazon.
What does strike me is that it is perfectly possible, with tools like Calibre, to convert ebooks into a format that can be used by your Kindle (or other ebook readers), so in theory you aren’t locked into having to buy from Amazon. WH Smith/Kobo suggest you can load books via PDF format, as described here: http://www.whsmith.co.uk/support/HelpeBookFAQs.aspx#FAQ2.1. But there are likely to be DRM (Digital Rights Management) restrictions that prevent you converting ebooks bought in EPUB format into Amazon’s AZW format or vice versa. [It’s something I need to check out, to see how practical it is to get ebooks from Waterstones or WH Smith/Kobo onto the Kindle]. And that makes me wonder whether, now that other ebook providers are getting going with their own delivery mechanisms, Amazon restricting itself to a proprietary format will mean it misses out on people buying ebooks from Amazon to load onto Kobo or other ebook readers. If Amazon sees itself as a content provider, wouldn’t it want to sell its content as widely as possible?
Amazon Lending Library
Having just caught up with the debate on Kindles and libraries, it was ironic to see this week’s announcement of the Amazon Kindle Owners Lending Library. The immediate reaction seemed to be that it was a potential threat to libraries, but I’m not so sure. The deal (only available in the US initially) is that if you sign up to the Amazon Prime subscription service at a cost of $79 a year, you can borrow one free book a month from a list of ‘over’ 5,000 (which implies to me more than 5,000 but less than the next significant number), including “100 current and former New York Times Bestsellers”. Hm… so that’s at least 4,900 that haven’t ever been on the New York Times Bestsellers list? So maybe not many well-known titles. It will be interesting to see if it gets introduced in the UK and what the list of titles might be. But unless the list grows significantly, the Amazon Kindle Owners Lending Library sounds much more like a small incentive to sign up to Amazon Prime. Now 5,000 titles might be small, and the list might not be packed full of bestsellers, but if you look at the public library offering of ebooks through services like Overdrive, then 2,500 titles isn’t unusual for a public library ebook and audiobook download service.
So if you’re in the US you can use Overdrive to borrow ebooks from the public library and read them on your Kindle, or borrow them through the new Amazon Kindle Owners Lending Library. It will be interesting to see whether there is any impact on US public libraries and whether these features get introduced in the UK at any stage.