I’ve been wondering about ebooks and libraries for a while, in particular about where things are going in terms of library use of ebooks. What caught my eye this week was a post on the Publishers Weekly blog here by Peter Brantley about Penguin pulling their ebooks out of the Overdrive system. The statement that particularly caught my attention was this:
I am very sympathetic to the sobering prognosis that in the longer run there’s not much future for libraries in providing access to ebooks. If for no other reason, it is likely that ebooks will evolve into a great variety of objects, some of which are widely distributed on the net and not neatly packaged; many others will be enhanced into proprietary versions that will only work on a single platform.
The thing that particularly interested me in the quote was the assumption that these are insurmountable ‘technical’ issues that would stop libraries from lending ebooks. I don’t know that any library would consider that to be the case. I doubt that many people would suggest that the current format of ebooks is in any way a finished article; I’m sure they will change and evolve over time. But whatever the format, libraries will still see their role as trying to connect the user with the content.
The platform issue puzzles me slightly. I think there are interesting parallels with the early days of video, where VHS won out over Betamax, and more recently when Blu-ray came through. If you look at computer games, the different platforms still co-exist, and many libraries lend selections of material in different formats. I’m not sure that it is the platform that is the issue with ebooks. Yes, some formats and platforms may die, in the way that music cassettes largely disappeared as CDs took over. But the issue seems to me to be more about publishers and platform providers positioning themselves for competitive advantage and not wanting to open their content up to a readership through libraries.
If you set aside the format issue for a moment and look at the model that academic libraries have been able to take with providing access to ebooks, we see them providing access to ebooks from different publisher collections with direct links to the ebooks on publishers’ websites, sometimes with ebook metadata added into the library catalogue or knowledge base to provide direct access. Now I know that public libraries largely don’t have the infrastructure to provide remote access to collections of electronic material in this way, so they have tended to go with a single aggregator. But it seems to me that building and maintaining an infrastructure to let public libraries continue to provide access to ebooks, either as a collaborative shared service or as a commercial service (such as Overdrive), isn’t a particular issue.
Where format is an issue is in terms of how the end-user uses the content. While the ebook publication model currently seems to be based mostly on trying to lock users into a proprietary platform, I think we will see changes to that model over time. Maybe the number of platforms will shrink, maybe the formats will start to move towards a standard, or one platform/format will become dominant. So if ebook publishers or aggregators want to make their material available through libraries, there aren’t insurmountable ‘technical’ issues to stop that happening.
Which seems to leave the argument being about whether publishers and aggregators want their ebook content available through libraries. And it seems to me that the reasons why publishers might want their content lent through libraries are exactly the same reasons why printed books are lent through libraries: it encourages reading, it encourages literacy, and surveys suggest that library readers are also heavy purchasers of books. So if you want to get people into the habit of reading, using and buying ebooks, especially when you are building a new market, wouldn’t you want to use all means to encourage people to try ebooks out?
Most websites go through a process of continued development and change. I’d argue that they are never really finished; it’s just that every so often they get to the stage where you want to start again from a clean (or clean’ish) slate. Often the motivation is a feeling that we can’t do what we want to do with the site now, mixed with a sense that there are other sites out there that look better or have better navigation or features. I’d also say that there’s an element of website ‘fashion’ that makes sites look out-of-date even when they are perfectly functional, and that can drive user perceptions of a site. So refreshing a site at regular intervals is something I’d want to do. In between regular site redevelopments there will always be a myriad of minor or even major changes to the site, and it is one of the challenges of website management to keep track of (and manage) these changes. To do that we’ve developed a process around a Website Improvement Plan.
The Website Improvement Plan
In essence the Website Improvement Plan is a document that outlines the steps you are going to take to address issues with the site, and it works to shape the development of the site over a period of time. We only use the plan to record significant changes to the site – so we wouldn’t record minor content changes, but we would record an activity like adding a completely new section of content with several pages. There is quite a variation in the scope of the activities – they can range from implementing a new search system to adding a rolling news feature to the home page, for example. We use the plan as a rolling document across each year, updating it as we go along. We have an approval process to get new items into the plan and we record the following information about each item:
- a unique reference number for each item
- a description of the item
- the date it was added to the plan
- details of the resources needed to deliver the item and a note if it forms part of a project
- the owner of the item – the person responsible for the item
- the date the change is required
- a category – we split tasks into categories such as Content, Infrastructure, User Experience etc.
- details of when the item was approved
- a status column – updated monthly that is used to track progress of the task
We then colour-code each item to make it easy to check which items are at which stage: pending items are unhighlighted, approved but not yet started are highlighted yellow, in progress green, and completed grey.
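As a rough sketch of how an item and its colour-coding hang together (the field and status names here are my own shorthand, not the exact wording of our document):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Highlight colour for each status, matching the colour-coding described above.
STATUS_COLOURS = {
    "pending": None,         # unhighlighted
    "approved": "yellow",    # approved but not yet started
    "in progress": "green",
    "completed": "grey",
}

@dataclass
class PlanItem:
    """One entry in the Website Improvement Plan."""
    ref: str                 # unique reference number
    description: str
    date_added: date
    resources: str           # resources needed; note if part of a project
    owner: str               # person responsible for the item
    date_required: date
    category: str            # e.g. Content, Infrastructure, User Experience
    date_approved: Optional[date] = None
    status: str = "pending"  # updated monthly to track progress

    def highlight(self) -> Optional[str]:
        """Colour used when displaying this item in the plan."""
        return STATUS_COLOURS[self.status]
```

In practice the plan itself lives in an ordinary office document; the point of the sketch is simply that each item carries the same fixed set of fields, with the status driving the highlight.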
Pros and Cons of this approach
The processes around approving items to go into the plan and reviewing them (done monthly at a Web Quality Improvement Group meeting) help to focus people’s attention on the work taking place around the website and let people identify where there are obvious gaps in work that is planned. We feed in activities from various projects and strategic plans to take care of the higher-level activities. It’s an open document so it is available to all library staff to view and it’s reported to various library management groups. As a tool to bring together everything we are doing it seems to work.
On the negative side – there is some added bureaucracy around managing the plan and some potential delays in making sure that people have signed off developments before they take place. There is a danger that you can end up managing the plan rather than the service.
We keep version control over the document so we can compare different versions of it. This means that we can track the number of tasks that are completed or in progress, and how that changes across the year, as a fairly crude metric of website development. Within the unit we also have activity data from staff that we can use to identify the cost of website services. For us the process seems to work reasonably well for day-to-day website management.
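That crude metric amounts to counting statuses in two saved versions of the plan and comparing them – something like this (the snapshot data is purely illustrative):

```python
from collections import Counter

def plan_progress(statuses):
    """Count how many plan items sit at each stage."""
    return Counter(statuses)

# Two snapshots of the plan, e.g. from the January and June versions
# of the document (made-up data for illustration).
january = plan_progress(["pending", "approved", "in progress", "in progress"])
june = plan_progress(["completed", "completed", "in progress", "approved", "pending"])

# Movement across the year: how many items have reached 'completed'.
completed_so_far = june["completed"] - january["completed"]
```

It is crude – it says nothing about the size of each task – but it is enough to show whether the plan is moving or stagnating.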
I was intrigued earlier this week to find, when asked to report back on a few things that were going on, that a couple of people expressed concern that they hadn’t heard about some of those things. As I recall it was the news that http://data.open.ac.uk had released course materials as linked data. That information came out via Twitter from various people associated with the Lucero project late on Friday, just as the Open Bibliographic Data JISC website came out in the same way (via tweets from people, often retweeted) late last week.
I must admit to being surprised at the reaction, and thinking about it there was a subtext of ‘why isn’t this information being released properly, through proper channels?’. But the more I think about how I use Twitter – chiefly to find out what is going on in universities, university libraries, JISC and the general HE domain – the more I realise it is increasingly how I find out what is happening. People I follow tweet links to new things they think will be interesting, often because those things interested them, or tweet what they are working on or have done. Twitter gives me that information much more than emails or mailing lists do these days. I don’t use Facebook (which I guess I’ve pigeon-holed as personal rather than professional), and although I’m on LinkedIn that’s more ‘professional’ than informational.
But as these types of activities move to micro-blogging, there are people who are put off Twitter because of the ‘what I had for lunch’ tweets (and worse). I’ve always thought that a strange approach – like saying that I’m never going to watch TV because ITV broadcasts ‘X Factor’, or never use a phone because people use it to talk about trivial things. Twitter is a communications tool used by humans – so all human life is there, just as it is everywhere else. If it works for you, fine; if not, that’s also fine. But if you want to know what’s happening now and what matters to people who are doing interesting things, then it’s a very useful tool.
“Every day I wake up and ask, ‘how can I flow data better, manage data better, analyse data better?’”
Rollin Ford, the CIO of Wal-Mart
Quoted in ‘A special report on managing information: Data, data everywhere’
The Economist (London), 27 February 2010, p. 71
Libraries and their attitude to user activity data.
In the commercial world there are countless examples of how the private sector uses data about its customers, from Wal-Mart’s CIO (quoted above) through to supermarkets’ use of loyalty cards and the recommendations that are commonplace on websites such as Amazon. But examples of libraries’ use of this type of data are still quite rare, and libraries have been very slow to take advantage of the vast pool of data they hold about the behaviour of their users. Libraries have long been used to using systems to count how many items have been borrowed or bought, but have been strangely reluctant to look in detail at what people are borrowing and to use that data to help users make better informed choices.
Some work has been done through the TILE and MOSAIC projects, and the latter included anonymised circulation data made available by Huddersfield University and used to run a competition to encourage ideas around the use of that data. JISC also ran an event earlier in the year about this area ‘Gaining Business Intelligence from User Activity Data’ which has been written up here and in the ALT newsletter. Dave Pattern at Huddersfield is probably furthest along in working with this area and his blog is a good source for ideas about what can be achieved with user activity data.
Following on from the event in the summer, JISC have clearly been thinking about how to increase the pool of examples of how user activity data can be used, so have included it as one of the strands in their recently announced Funding Call 15/10. With £500k available for 7-10 six-month projects to take place in the early part of 2011, there’s the opportunity for libraries to get involved in developing new ideas about how to use user activity data.
User activity data is a particularly interesting area for me because a good deal of the work done so far has been around loan data. Working in a library where students don’t borrow books from us, or even visit the library, we’ve got to look at other areas of data. Most of our users engage with us through our e-resources, and we are looking at how we can collect, analyse and use that data to improve services and offer recommendations that help users get more out of their e-resource usage.
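To make the recommendation idea concrete, here is a minimal ‘people who used X also used Y’ sketch over anonymised usage logs. The log format ((user, resource) pairs) and the function are my own invention for illustration, not a description of any system we actually run:

```python
from collections import defaultdict

def recommend(usage, resource, top_n=3):
    """Rank resources by how often they co-occur with `resource`
    in the same (anonymised) user's usage history."""
    # Group the log into a set of resources per user.
    by_user = defaultdict(set)
    for user, res in usage:
        by_user[user].add(res)

    # Count co-occurrences with the target resource.
    cooc = defaultdict(int)
    for resources in by_user.values():
        if resource in resources:
            for other in resources - {resource}:
                cooc[other] += 1

    # Highest co-occurrence first; alphabetical tie-break for stability.
    ranked = sorted(cooc.items(), key=lambda kv: (-kv[1], kv[0]))
    return [res for res, _ in ranked[:top_n]]
```

The same co-occurrence idea works whether the events are loans (as in the Huddersfield data) or e-resource accesses; the hard parts in practice are anonymisation and getting the usage events out of the e-resource platforms in the first place.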
Two things this week made me think about how the HE library landscape might change in the next few years. One was the SCONUL Shared Services event; the other was Lorcan Dempsey’s keynote presentation from the emtacl conference in Norway, which is now on the web (http://www.ntnu.no/ub/emtacl/?programme).
The connection between the two was that both saw a potential future for libraries in the networked environment where shared services had a part to play. One of the comments that is sometimes made in relation to shared services in HE is that things are different in HE because HE institutions are competing with each other. I’d question how much of the institution’s competitive edge the library represents and I’d argue that the distinctive elements are more likely to be in their customer service, expertise or collection quality rather than their systems.
Amongst his remarks on the network effect and how much it will drive change, Lorcan pointed to examples from the commercial world where companies like Netflix buy in services from Amazon, a competitor, because Amazon’s expertise represents the best available. In the library world Lorcan saw much duplication of activity: the “redundant, complex systems apparatus will have to be simplified”, with “a move to shared infrastructure in the cloud”.
The SCONUL Shared Services event covered a model for how shared services might work for HE libraries. The day was spent talking about the Shared Services study business case that was submitted to HEFCE towards the end of 2009 requesting pathfinder funding (http://helibtech.com/file/view/091204%20SCONUL%20Shared%20Service%20-%20for%20distribution.pdf).
In brief the report proposed a three phase set of developments:
1. the creation of a national-level managed ERM system that would manage national level subscriptions to resources and a national ERM licensing service – rather than each HE library having their own ERM processes
2. the creation of a Discovery to Delivery service that would combine the national ERM entitlement data with authentication and search services at a national level
3. removing duplication with library management systems, using the national-level authentication infrastructure and inter-operating with institutional systems
The expectation is that savings would come from reducing duplication in licensing and rights management, and from cutting the cost of e-licence deals by negotiating national subscriptions (going beyond the opt-in model). Individual HE libraries would do less rights and licensing work, and there would be savings on local ERM systems, licensing staff, and search/LMS costs and LMS support staff.
Unfortunately HEFCE have not approved the request for pathfinder money, so SCONUL are looking at what options there are to move this forward. The feeling at the meeting was that the ERM element could be an achievable step, although a lot of detail still needs to be sorted out. The suggestion was made that progress could be made if enough HEIs were prepared to contribute a small amount to get it off the ground. The general view was that we should be doing this, but probably more realistically as a step-by-step process rather than as a single large project. JISC and SCONUL are keen to move it on, but it isn’t at all clear how it might be funded.
Thinking about the proposals there are a few things that strike me about it:
- I wonder about the realism of the timetable – mainly whether this can happen quickly enough given the likely scenario of major cuts in funding for the sector.
- I must admit to a slight sense of déjà-vu. Having sat through a lot of the MLA Stock Services review in public libraries, and seen that go from proposals for shared services to something that simply ran into the ground, I’m interested to see how the HE library sector tackles something like this and whether it has any more success in instituting such a major network-scale change.
- Others are far better qualified to comment on the technical practicality of some of the developments, but I find it strange that while there are several examples of shared library management systems (LLC and SELMS, for example), there are few in HE these days.
It will be interesting to see what happens over the next few months and what opportunities there are to get involved.
I recently had the opportunity to go to the European Conference on Digital Libraries. This three-day conference, held in Corfu this year, has been running for a number of years and is of interest to a range of different disciplines, from computer scientists to librarians and archivists. This year there were around 400 attendees, mainly from Europe but with a sprinkling of attendees from Asia and the Americas. With two keynote speakers, 30 papers and a range of posters and demonstrations the conference covered a good deal of ground and I’m going to put down a few of my impressions.
ECDL is quite scientific and academic in approach. Although the conference sub-title was ‘Digital Societies’ most of the papers approached the subject of Digital Libraries from quite a narrow technical view, investigating areas such as DL software performance, semantic search techniques or the impact of data-loss on images. A significant number of the papers were from Doctoral Students and many others outlined the latest state of research by groups of researchers.
‘The days are past when scholarly authority alone determines what is saved, learned and used’
The two keynote papers from Gary Marchionini and Peter Buneman provided two quite different perspectives on Digital Libraries. Marchionini’s paper flagged up how social networking is likely to impact on future Digital Libraries whilst Peter Buneman concentrated mainly on the need for Digital Libraries to reach out to the research communities and offer them a way of safeguarding their research materials.
Both saw the long-term storage cost of DLs as a significant issue and identified preservation as a key issue for the future. Marchionini identified key differences between curated and community DLs, whilst Buneman concentrated on the curation of research material as a key challenge.
Other conference papers
The conference also had a number of interesting sessions, including a paper on the performance of DL systems – Fedora, DSpace and Greenstone – that built large collections of material and stress-tested the systems to analyse their performance. Some of the papers covered quite complex technical solutions, models or methods, such as using visualisation or sound cues to aid search and retrieval.
Although the conference sub-title was ‘Digital Societies’, on reflection few papers other than the keynotes really got to grips with much beyond the technical issues surrounding Digital Libraries. The impact of Web 2.0 was touched upon, but much of the content was still about building and managing monolithic database structures to contain vast repositories of data. Whilst that is a challenging activity in itself, the domain perhaps needs to move to a wider discussion of the impact of ‘Digital Societies’. It is possibly with this in mind that there are moves planned to change the name of the conference from 2011.