Two interesting pieces of news came out yesterday with the sale of 3M Library Systems to Bibliotheca and then the news that ProQuest is buying Ex Libris.  For an industry take on the latter news, look at

From the comments on Twitter yesterday it was clearly a big surprise to people, but it seems to make some sense.  It is a sector that has always gone through major shifts and consolidations, and library systems vendors seem to change hands frequently.  Have a look at Marshall Breeding’s graphic of the various LMS vendors over the years to see that change is pretty much a constant feature.

There is significant crossover in the product range, especially around discovery systems and the underlying knowledge bases.  Building and maintaining those vast metadata indexes must be a significant undertaking, and maybe we will see some consolidation.  Primo and Summon fed from the same knowledge base in the future, maybe?

Does it help with the conundrum of getting all the metadata in all the knowledge bases?  Maybe it puts Proquest/ExLibris in a place where they have their own metadata to trade?  But maybe it also opens up another competitive front.

It will be interesting to see what the medium-term impact will be on plans and roadmaps.  Will products start to merge?  Will there be less choice in the marketplace when libraries come round to choosing future systems?



A fascinating couple of articles over the last few days around what is happening with ebook sales (from the US).  Two pieces from the Stratechery site (via @lorcanD and @aarontay): Disconfirming ebooks and Are ebooks declining, or just the publishers?  The first refers to an article in the NY Times reporting on ebook sales plateauing; the second draws on a more detailed piece of work from Author Earnings analysing more data.  The latter concludes that it is less a case of ebook sales plateauing and more that the market share of the big publishers is declining (postulating that price increases might play a part).  Overall the research seems to show growth in independent and self-publishing but fairly low levels of growth overall.  The figures are mostly about market share rather than hard and fast sales per se, but it is interesting nonetheless to see how market share is moving away from ‘traditional’ print publishers.

The Stratechery articles are particularly interesting on the way that ebooks fit the disruptive model of new digital innovation challenging traditional industries, what is termed there ‘Aggregation Theory’.  [As an aside, it’s interesting to note from the Author Earnings article that many of the new ebooks from independent or self-publishers don’t have ISBNs.  What does that imply for the longer-term tracking of this type of material?  Already I suspect that they are hard for libraries to acquire and just don’t get surfaced in the library acquisitions sphere.  Does it mean that these titles are likely to become much more ephemeral?]

The conclusion of the second Stratechery article I find particularly interesting: that essentially ebooks aren’t revolutionising the publishing industry in terms of the form they take.  They are simply a digital form of the printed item.  Often they add little extra by being digital; maybe they are easier to acquire and store, but in price terms they often aren’t much cheaper than the printed version.  Amazon Kindle does offer some extra features, but I’ve never been sure how much readers take them up.  Unlike music, you aren’t seeing books being disaggregated into component parts or chapters (although it’s a bit ironic considering that some of Charles Dickens’ early works, such as The Pickwick Papers, were published in instalments, as part works).  But I’d contend that the album in music isn’t quite the same as a novel, for example.  Music albums seem like a convenient packaging (and price point?) for a collection of music tracks in a physical format (possibly with the exception of ‘concept’ albums?), whereas most readers wouldn’t want to buy their novels in parts.  There’s probably more of a correlation between albums/tracks and journals/articles, in that tracks and articles lend themselves in a digital world to being the lowest-level, consumable package of material.

But I can’t help but wonder why audiobooks don’t seem to have disrupted the industry either.  Audible offers audiobooks in a similar way to Netflix but isn’t changing the book industry in the way the TV and movie industries are being changed.  So that implies to me that there’s something beyond the current ‘book’ offering (or that the ‘book’ actually is a much more consumable, durable package of content than other media).  Does a digital ‘book’ have to be something quite different that draws on the advantage of being digital – linking to or incorporating maps, images, videos or sound, or some other form of social interaction that could never be incorporated in a physical form?  Or are disaggregated books essentially what a blog is (modularisation, as suggested on Stratechery)?  Is the hybrid digital book the game-changer?  [There are already examples of extra material being published online to support novels – see Mark Watson’s Hotel Alpha stories building on his novel Hotel Alpha, for example.]  You could see online retailers as disrupting the book sales industry as a first step, but we’re perhaps only in the early stages of seeing how Amazon will ultimately disrupt the publishing industry.  Perhaps the data in the Author Earnings report points to the first signs of those changes among ebook publishers.

data.path Ryoji.Ikeda – 3 by r2hox

One of the pieces of work we’re just starting off in the team this year is to do some in-depth work on library data.  In the past we’ve looked at activity data and how it can be used for personalised services (e.g. to build recommendations in the RISE project or more recently to support the OpenTree system), but in the last year we’ve been turning our attention to what the data can start to tell us about library use.

There have been a couple of activities that we’ve undertaken so far.  We’ve provided some data to an institutional Learning Analytics project on the breakdown of library use of online resources for a dozen or so target modules.  We’ve been able to take data from the EZproxy logfiles and show the breakdown by student ID, by week and by resource over the nine-month life of the different modules.  That has put library data alongside other data, such as use of the Virtual Learning Environment, and allowed module teams to look at how library use might relate to the other data.
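As a sketch of the kind of aggregation involved – assuming a simplified, Apache-style EZproxy log line (real deployments vary with the configured LogFormat directive), and with the field layout and sample line below purely illustrative – the breakdown by student, week and resource might look something like this:

```python
import re
from collections import Counter
from datetime import datetime

# Hypothetical simplified log line: client host, ident, username,
# timestamp, request.  Adjust the pattern to your actual LogFormat.
LINE_RE = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?:GET|POST) (?P<url>\S+) HTTP/[\d.]+"'
)

def usage_by_user_week(lines):
    """Count accesses per (user, ISO week, resource domain)."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # skip lines that don't fit the assumed format
        ts = datetime.strptime(m['ts'], '%d/%b/%Y:%H:%M:%S %z')
        week = ts.isocalendar()[1]
        # Take the domain of the proxied resource as the 'resource'
        domain = m['url'].split('/')[2] if '//' in m['url'] else m['url']
        counts[(m['user'], week, domain)] += 1
    return counts

sample = [
    '10.0.0.1 - stu123 [03/Feb/2015:10:15:00 +0000] '
    '"GET http://www.jstor.org/stable/123 HTTP/1.1" 200 5120',
]
print(usage_by_user_week(sample))
```

In practice the student IDs would of course need to be handled carefully (anonymised or pseudonymised) before the counts were shared with module teams.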

Pattern of week by week library use of eresources – first level science course

A colleague has also been able to make use of some data combining library use and satisfaction survey data for a small number of modules, to shed a little light on whether satisfied students were making more use of the library than unsatisfied ones (obviously not a causal relationship, but initial indications are that for some modules there does seem to be a pattern there).

Library Analytics roadmap
But these have been really early exploratory steps, so during last year we started to plan out a Library Analytics Roadmap to scope out the range of work we need to do.  This covers not just data analysis, but also some infrastructural developments to help with improving access to data and some effort to build skills in the library.  It is backed up with engagement with our institutional learning analytics projects and some work to articulate a strategy around library analytics.  The idea being that the roadmap activities will help us change how we approach data, so we have the necessary skills and processes to be able to provide evidence of how library use relates to vital aspects such as student retention and achievement.

Library data project
We’re working on a definition of Library analytics as being about:

Using data about student engagement with library services and content to help institutions and students understand and improve library services to learners

Part of the roadmap activity this year is to start to carry out a more systematic investigation into library data, to match it against student achievement and retention data.  The aim is to build an evidence base of case studies, based on quantitative data and some qualitative work we hope to do.  Ideally we’d like to be able to follow the paths mapped out by the likes of Minnesota, Wollongong and Huddersfield in their various projects and demonstrate that there is a correlation between library use, student success and retention.

Challenges to address
We know that we’re going to need more data analysis skills, and some expertise from a statistician.  We also have some challenges because of the nature of our institution.  We won’t have library management system book loans, or details of visits to the library; we will mainly have to concentrate on use of online resources.  In some ways that simplifies things.  But our model of study also throws up some challenges.  At a traditional campus institution students study a degree over three or four years.  There is a cohort of students that follows through year 1, 2, 3 and so on, and at the end of that period they do their exams and get their degree classification.  So it is relatively straightforward to see retention as being about students who return in year 2 and year 3, or don’t drop out during the year, and to see success measured as their final degree classification.  But with part-time distance learning, where students sign up to a qualification but follow their own pattern of modules, many will take longer than six years to complete, often with one or more ‘breaks’ in study, so following a cohort across modules might be difficult.  We might therefore have to concentrate on analysis at the ‘module’ level… but then that raises another question for us.  Our students could be studying more than one module at a time, so how do you easily know whether their library use relates to module A or module B?  Lots of things to think about as we get into the detail.

The digital archive site that we’ve been working away on for a while now is finally public.  It is being given a very low-key soft launch to give time for more testing and checking, to make sure that the features work OK for users, but as it has now been tweeted about, is linked from our main library website and is findable on Google, I can finally write a short piece about it.

The site has gone live with a mix of images, some videos about the university and a small collection of video clips from the first science module in the 1970s.  Accompanying the images and videos are a couple of sub-sites we’ve called Exhibitions.  To start with there are two, one covering the teaching of Shakespeare and the other giving a potted history of the university.  The exhibitions are designed to give a bit more context around some of the material in the collection.

The small collection of 160 historical images from the history of the university includes people involved in the development of the university and significant events, such as the first graduation ceremony, as well as a selection of images of the construction of the campus.  The latter is perhaps slightly odd for a distance-learning institution, with a campus that most students may never see, but maybe that makes the changes to the physical environment of interest to students and the general viewer nonetheless.

The selection of videos includes a collection of thirty programmes about the university, mostly from the 1970s and 1980s and mainly from a magazine-style series called Open Forum, giving students a bit of an insight into the life of the university.  It includes contributions from various university officials, but also student experiences, summer schools and the like.  Some of the videos cover events such as royal visits and material about the history of the university.

Less obvious to the casual browser is the inclusion of a large collection of metadata about university courses.  This metadata profile forms a skeleton or scaffolding that is used to hang the bits of digitised course materials together and relate them to their parent course/module.  So it gives a way of displaying the different types of material included in a module together, as well as giving information about the module, its subjects and when it ran.  At the moment there are only a few digitised samples hanging on the underlying bare bones.

To find the metadata, go to the View All tab, make sure the ‘Available online’ button isn’t selected and choose ‘Module overview’ from Content Type; it’s then possible to browse through details of the university’s old modules, seeing some information about each module and when it ran.  You can also follow through to the linked data repository.  Underpinning this aspect of the site is a semantic web RDF triplestore.
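As a sketch of consuming that kind of repository – assuming the triplestore exposes query results in the standard SPARQL 1.1 Query Results JSON Format, and with the variable names and values below purely illustrative rather than the site’s actual data – flattening a response into usable rows is straightforward:

```python
import json

# A canned response in the SPARQL 1.1 Query Results JSON Format, of the
# kind an endpoint might return for a query listing modules and titles.
raw = json.dumps({
    "head": {"vars": ["module", "title"]},
    "results": {"bindings": [
        {"module": {"type": "uri", "value": "http://example.org/module/S101"},
         "title": {"type": "literal", "value": "Introductory science"}},
    ]},
})

def rows(results_json):
    """Flatten SPARQL JSON results into plain dicts of variable -> value."""
    doc = json.loads(results_json)
    for binding in doc["results"]["bindings"]:
        yield {var: cell["value"] for var, cell in binding.items()}

for row in rows(raw):
    print(row["module"], row["title"])
```

The same parsing works whatever triplestore sits behind the endpoint, since the results format is standardised even where the data model isn’t.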

Public and staff sites
One of the challenges for the digital archive is that it is essentially two different sites under the skin.  A staff version of the site has been available internally for over a year and lets staff log in to see a broader range of material, particularly from old university course materials.  So staff can access some sound recordings as well as a small number of digitised books, and a larger collection of videos, although at this stage it’s still a fairly small proportion of the overall archive.  More will be added over time, as well as, hopefully, some of the several hundred module websites that have been archived over the past three years.

Intellectual Property
Unlike many digital archives, all of the content is relatively recent, i.e. less than fifty years old.  That brings a different set of challenges, as a lot of the content would need to have intellectual property rights cleared before it could be made openly available.  So there are a small number of clips, but at the moment only limited amounts of course materials have been able to be made open.  One of the challenges will be to find ways to fund making more material open, both in terms of the effort needed to digitise and check material and the cost of payments to any rights holders.

The digital archive can be found at

We’ve been running Primo as our new Library Search discovery system since the end of April so it’s been ‘live’ for just over four months.  Although it’s been a quieter time of year over the summer I thought it would be interesting to start to see what the analytics are saying about how Library Search is being used.

Internal click-throughs
Some analytics are provided by the supplier in the form of click-through statistics, and there are some interesting figures among them.  The majority of searches – some 85% – are ‘basic’ searches; only about 11% use advanced search.  Advanced search isn’t offered against the Library Search box embedded in the home page of the library website, but it is offered next to the search box on the results page and on any subsequent search.  That’s probably slightly lower than I might have expected, as advanced search seemed to be mentioned fairly frequently as being used regularly on our previous search tool.

About 17% of searches lead to users refining their search using the facets.  Refining the search using facets is something we are encouraging users to do, so that’s a figure we might want to see going up.  Interestingly only 13% navigated to the next page in a set of search results using the forward arrow, suggesting that users overwhelmingly expect to see what they want on the first page of results. (I’ve a slight suspicion about this figure as the interface presents links to pages 2-5 as well as the arrow – which goes to pages 6 onwards –  and I wonder if pages 2-5 are taken into account in the click-through figure).

Very few searches (0.5%) led users to use the bX recommendations, despite their prominent place on the page.  The ‘Did you mean’ prompt was used in about 1% of searches.  The bookshelf feature, ‘add to e-shelf’, is used in about 2% of searches.

Web analytics
Looking at web analytics shows that Chrome is the most popular browser, followed by Internet Explorer, Safari and Firefox.

75% of traffic comes from Windows computers with 15% from Macintoshes.  There’s a similar amount of traffic from tablets to what we see on our main library website, with tablet traffic running at about 6.6% but mobile traffic is a bit lower at just under 4%.

Overall impressions
Devices using Library Search seem pretty much in line with traffic to other library websites.  There’s less mobile phone use, but possibly that is because Primo isn’t particularly well optimised for mobile devices; it’s also something to test with users – whether they are all that interested in searching library discovery systems on a mobile phone at all.

I’m not so surprised that basic search is used much more than advanced search.  It matches the expectation, from the student research, of a ‘google-like’ simple search box.  The data seems to suggest that users expect to find relevant results on page one and not to go much further – again something to test with users: are they getting what they want?  Perhaps I’m not too surprised that the ‘recommender’ suggestions are not being used, but it implies that having them at the top of the page might be taking up important space that could be used for something more useful.  Some interesting pointers about things to follow up in research and testing with users.


We’ve started using BrowZine as a different way of offering access to online journals.  Until recently there were iOS and Android app versions, but they have now been joined by a desktop version.

BrowZine’s interesting as it tries to replicate the experience of browsing recent copies of journals in a physical library.  It links into the library authentication system and is fed with a list of library holdings.  There are also some open access materials in the system.

You can browse for journals by subject or search for specific journals and then view the table of contents for each journal issue and link straight through to the full-text of the articles in the journals.  In the app versions you can add journal titles to your personal bookshelf (a feature promised for the desktop version later this year) and also see when new articles have been added to your chosen journals (shown with the standard red circle against the journal on the iOS version).

A useful tool if there is a selection of journals that you need to keep up to date with.  Certainly the ease with which you can connect to the full text contrasts markedly with some of the hoops that we seem to expect users to cope with in some other library systems.

One of the interesting features of our new library game OpenTree, for me, is that it is possible to engage with it in a few different ways.  At one level it’s a game, with points and badges for interacting with library content, resources and webpages.  It’s also social, so you can connect with other people and review and share resources.

But, as a user you can choose the extent that you want to share.  So you can choose to share your activity with all users in OpenTree, or restrict it so only your friends can see your activity, or choose to keep your activity private.  You can also choose whether or not things you highlight are made public.

So you’d wonder what value you’d get in using it if you make your activity entirely private.  But you can use it as a way of tracking which library resources you are using.  And you can organise them by tagging them and writing notes about them so you’ve got a record of the resources you used for a particular assignment.  You might want to keep your activity private if you’re writing a paper and don’t want to share your sources or if you aren’t so keen on social aspects.

If you share your activities with friends, and maybe connect with people studying the same module as you, then you could see some value in sharing useful resources with fellow students you might not meet otherwise.  In a distance-learning institution, with potentially hundreds of students studying your module, you might meet a few peers in local tutorials or on module forums but might never connect with most people following the same pathway as yourself.

And some people will be happy to share, will want to get engaged with all the social aspects and the gaming aspects of OpenTree.  It will be really interesting to see how users get to grips with OpenTree and what they make of it and to hear how people are using it.

It will be particularly interesting to see how our users’ engagement with it might differ from the versions at bricks-and-mortar universities at Huddersfield, Glasgow and Manchester.  OpenTree’s focus is online and digital, so it doesn’t include loans and library visits, and our users are often older, studying part-time and not campus-based.

In early feedback, we’re already seeing a sense that some of the game aspects, such as the Subject leaderboard, are of less interest than expected.  Maybe that reflects students being much more focused on outcomes, although research (Tomlinson 2014, ‘Exploring the impact of policy changes on students’ attitudes and approaches to learning in higher education’, HEA) seems to suggest that this is a result of increased university fees and student loans, and isn’t just a factor for part-time and distance-learning students.  It might also be that because we haven’t gone for an individual leaderboard there’s less personal investment, or just that users aren’t so sure what it represents.



One of the projects that we’ve been working on as part of our Library Futures programme has been a product called OpenTree.  OpenTree is based on the Librarygame software from a small development team at ‘Running in the Halls’.  Librarygame adds gaming and social aspects to student engagement with library services.

Librarygame was developed originally as Lemontree for Huddersfield University, and then updated and adopted as librarytree and BookedIn for Glasgow and Manchester Universities respectively.

Being originally based around engagement with physical libraries – taking data from library loans via the library management system, or from physical library visits via building access logs – the basic game model had to change a bit for a distance-learning university where students don’t visit the university library or borrow books.

Our main engagement with students is their use of online library resources and library websites.  Fortunately most of our resource access goes through EZproxy, so we were able to find a way of allowing users who sign up to the game to get points for the resources they access.  We’ve also been able to add JavaScript tracking to a couple of our websites to give students points for accessing those sites.

OpenTree gives users points for accessing resources and points build up into levels in the game.  Activities such as making friends, reviewing, tagging and sharing resources also get you badges in the game.  We’ve also added in a Challenges section to highlight activities to encourage users to try out different things, trying Being Digital, for example.

Because it lists library resources you’ve been accessing I’ve already been finding it useful as a way of organising and remembering library resources I’ve been using, so we’re hopeful that students will also find it useful and really get into the social aspects.

OpenTree launches to students in the autumn but is up-and-running in beta now.  A video introducing OpenTree is on YouTube at:

We’re really looking forward to seeing how students get on with OpenTree and already have a few thoughts about enhancements and developments, and no doubt other ideas will come up once more people start using it.






I noticed this morning a blog post on the Wellcome Library’s plans to build a cloud-based digital library platform, ‘Moving the Wellcome Library to the cloud’.  It’s a fascinating piece of news.  The Wellcome Library’s ambition and scale – talking about having over 30m digitised pages by 2018 and about building a platform that could potentially be used by others – are interesting to see.

As we’ve seen with library management systems, cloud-based systems are becoming commonplace, but where digital libraries are concerned most are still operated as locally hosted systems.  The article also talks about the use of IIIF (International Image Interoperability Framework), which is something for digital libraries to take notice of.  It also flags some developments to Wellcome’s media player to create a new Universal Viewer to handle video, audio and other material.  Given how tricky we’ve found getting accessible media players, it will be interesting to keep an eye on these developments.

APIs and commodity services are also in scope.  Something definitely to watch for the future.

So we’re slowly emerging from our recent LMS project, with a bit of time to stop and reflect – partly at least to get project documentation, lessons learned and suchlike written up and the project closed down.  We’ve moved from Voyager, SFX and EBSCO Discovery across to Alma and Primo.  We went from a project kick-off meeting towards the end of September 2014 to being live on Alma at the end of January 2015 and live on Primo at the end of April.

So it’s time for a few reflections on this implementation.  I worked out the other day that it is the fifth LMS procurement/implementation process I’ve been involved with, in different roles each time.  For this one I started as part of the project team but ended up leading the implementation stage.

Reflection One
Tidy up your data before you start your implementation – particularly your bibliographic data, but other data too if you can.  You might not be able to if you are on an older system, as you might not have the tools to sort out some of the issues.  But the less rubbish you take over to your nice new system, the less sorting out you’ve got to do there.  And when you are testing your initial conversion, too much rubbish makes it harder to see the wood for the trees – in other words, to work out which problems you need to fix by changing the way the data converts, and which are just a consequence of poor data.  With bibliographic data the game has changed: you are now trying to match your data against a massive bibliographic knowledge base.

Reflection Two
It might be ideal to plan to go live with both LMS and Discovery at the same time but it’s hard to do.  The two streams often need the same technical resources at the same time.  Timescales are tight to get everything sorted in time.  We decided that we needed to give users more notice of the changes to the Discovery system and make sure there was a changeover period by running in parallel.

Reflection Three
You can move quickly.  We took about four months from the start-up meeting to being live on Alma, but that means a very compressed schedule.  The supplier has a well-rehearsed project plan, but it’s deliberately a generic, tried-and-tested approach, with only some flexibility.  You have to be prepared to be flexible yourself and push things through as quickly as possible.  There isn’t much time for lots of consultation about decisions, which leads to…

Reflection Four
As much as possible, get your decisions about changes in policies and new approaches made before you start.  Or at least make sure that the people on the project team can get decisions made quickly (or make them themselves), and can identify, from the large number of documents, guidance notes and spreadsheets to work through, what the key decisions will be.

Reflection Five
Get the experts who know about each of the elements of your LMS/Discovery systems involved with the project team.  There’s a balance between having too many and too few people on your project team, but you need people who know your policies, processes, practices and workflows; who know your metadata (and metadata in general, in a lot of detail, to configure normalisation, FRBRisation and so on); and who know your technology and how to configure authentication and CSS.  Your project team is vital to your chances of delivering.

Reflection Six
Think about your workflows and document them.  Reflect on them as you go through your training.  LMS workflows have some flexibility but you still end up going through the workflows used by the system.  Whatever workflows you start with you will no doubt end up changing or modifying them once you are live.

Reflection Seven
Training.  Documentation is good.  Training videos are useful and have the advantage that they can be used whenever people have time.  But you still need a blended approach: staff can’t watch hours of videos, and you need to show people how your policies and practices will be implemented in the new LMS.  So be prepared to run face-to-face sessions for staff.

Reflection Eight
Regular software updates.  Alma gets monthly software updates.  Moving from a system that was relatively static we wondered about how disruptive it would be.  Advice from other Libraries was that it wasn’t a problem.  And it doesn’t seem to be.  There are new updated user guides and help in the system and the changes happen over the weekend when we aren’t using the LMS.

Reflection Nine
It’s Software as a Service, so it’s all different.  I think we were used to Discovery being provided this way, so that’s less of a change.  The LMS was previously run by our corporate IT department, so in some senses it has just moved from one provider to another.  We have a bit less control and flexibility to do stuff with it, but that’s OK – and on the other hand we have more powerful tools and APIs.

Reflection Ten
Analytics is a powerful tool, but build up your knowledge and expertise to get the best out of it.  We’ve reduced our reports and run a smaller number than we thought we’d need.  Scheduled reports, widgets and dashboards are really useful, and we’re still pretty much scratching the surface of what we can do.  Access to the reports that others in the community have shared is very useful, especially when you are starting out.

Reflection Eleven
Contacts with other users are really useful.  Sessions talking to other customers, user group meetings and the mailing lists have all been really valuable.  An active user community is a vital asset for any product, not just open source ones.

and finally, Reflection Twelve
We ran a separate strand of user research with students into what they wanted from library search.  This was invaluable: it gave us evidence to help in the procurement stage, but it particularly helped shape the decisions about how to set up Primo.  We’ve been able to say “this is what library users want, and we have the evidence for it”.  That has been really important in being able to challenge thinking based on what we librarians think users want (or what we think they should want).

So, twelve reflections on the last few months.  Interesting, enlightening, enjoyable, frustrating at times, and tiring.  But worthwhile and achievable – and something that is allowing us to move from a set of mainly legacy systems, not well integrated and not so easy to manage, to a set of systems that are better integrated, have better tools and, perhaps as importantly, give us a better platform to build from.
