You are currently browsing the category archive for the ‘Library Management System’ category.
Interesting news this week that Elsevier have bought Plum Analytics from EBSCO. It seems to be part of a trend for the big content companies to expand their reach by acquiring other companies in associated fields. There's a fascinating blog post from Roger Schonfeld of Ithaka, 'The strategic investments of content providers', that discusses what this might mean for the library sector and why these companies might be looking to diversify.
I'd probably reflect that library sector technology companies have a long history of mergers and acquisitions – a glance at Marshall Breeding's chart on how companies have evolved over the years quickly shows that companies change ownership or merge with great regularity; it doesn't seem to be an especially stable marketplace. Yet libraries typically keep library management systems for quite long periods of time – ten years doesn't seem unusual – and often upgrade with the same vendor. Maybe that slow turnover of systems is related to the mergers and acquisitions, as parent companies realise that their investment in a library systems supplier doesn't provide quite the level of return they wanted? But recently there has been a large number of systems procurements, particularly in academic libraries. A look at HElibtech's procurements page shows a lot of recent activity.
With EBSCO’s involvement with the FOLIO open source product and Proquest’s acquisition of ExLibris, I wonder if that means Elsevier is looking for a suitable library systems or discovery product? Or does the acquisition of Plum Analytics mean that they are looking more at the world of citation systems, altmetrics and bibliometrics?
The news, reported in an article by Marshall Breeding in American Libraries, that EBSCO has decided to support a new open source library services platform is a fascinating development. To join with Kuali OLE but to develop what will essentially be a different open source product is a big development for the library technology sector. It’s particularly interesting that EBSCO has gone the route of providing financial support to an open source system, rather than buying a library systems company. The scope and timescales are ambitious, to have something ready for 2018.
Open source library management systems haven't had the impact that systems like Moodle have had in the virtual learning environment sector, and in some ways it is odd that academic libraries haven't been willing to adopt such a system, given that universities do seem to have an appetite for open source software. Maybe open source library systems products haven't been developed sufficiently to compete with commercial providers. Software as a Service (SaaS) is coming to be accepted by corporate IT departments as a standard method of service provision, something that I think a couple of the commercial providers realised at quite an early stage, so it is good to see this initiative recognising that reality. It will be interesting to see how this develops.
I was particularly interested in a term I came across in a blog post on innovation on the Nesta blog the other week, 'Innovation in the public sector: Is risk aversion a cause or a symptom?'. The blog post talks about Organisation Debt and Organisational Physics and is a really interesting take on why large organisations can struggle with innovation. It's well worth a read. It starts by referencing the concept of 'technical debt', described in the blog post as "… where quick fixes and shortcuts begin to accumulate over time and eventually, unless properly fixed, can damage operations." It's a term that tends to be related to software development, but it started me thinking about how the concept of 'technical debt' might be relevant to the library world.
If we expand the technical debt concept to the library sector I’d suggest that you could look at at least three areas where that concept might have some resonance: library systems, library practices and maybe a third one around library ‘culture’ – potentially a combination of collections, services and something of the ‘tradition’ of what a library might be.
Our systems are a complex and complicated mix: library management systems, e-resources management systems, discovery, OpenURL resolvers, link resolvers, PC booking systems and so on. It can be ten years or more between libraries changing their LMS and although, with Library Services Platforms, we are seeing some consolidation of systems into a single product, there is still a job to do of integrating legacy systems into the mix. For me the biggest area of 'technical debt' comes in our approach to linking and websites. Libraries typically spend significant effort in making links persistent, in coping with the transition from one web environment to another by redirecting URLs. It's not uncommon to have redirection processes in place to cope with direct links to content in previous websites, trying to connect users directly to replacement websites. Yet on the open web 'link rot' is a simple fact of life. I'd suggest that trying to manage these legacy links is a significant technical debt that libraries carry.
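The redirection processes described above can be sketched as a simple mapping from legacy paths to current locations, plus a check for legacy links that no longer resolve anywhere. This is a minimal illustration only – the paths and target URLs are hypothetical, and real libraries typically manage this in web server rewrite rules rather than application code.

```python
# Hypothetical redirect table: legacy catalogue/website paths -> current URLs.
redirects = {
    "/catalogue/record/12345": "https://library.example.ac.uk/record/12345",
    "/old-site/ejournals": "https://library.example.ac.uk/eresources",
}

def resolve(path):
    """Return the current URL for a legacy path, or None if it is unmapped."""
    return redirects.get(path)

def unmapped(paths):
    """List legacy paths that no longer resolve anywhere - candidates for link rot."""
    return [p for p in paths if p not in redirects]

print(resolve("/catalogue/record/12345"))
print(unmapped(["/old-site/ejournals", "/old-site/gone"]))
```

Each website migration adds another layer of entries to a table like this, which is exactly the accumulating maintenance burden that makes the 'technical debt' label feel apt.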
I think you could point to several aspects of library practices that could fall under the category of technical debt, but I'd suggest the primary one is in our library catalogue and cataloguing practices. Our practices change across the years, but overall the quality of our older records is often lower than we'd want to see. Yet we typically carry those records across from system to system. We try to improve them or clean them up, but frequently it's hard to justify the resource spent on 're-cataloguing' or 'retrospective cataloguing'. Newer approaches, making use of collective knowledge bases and linking holdings to records, have some impact on our ability to update our records, but the quality of some of the records in knowledge bases is sometimes also not up to the level that libraries would like.
You could also describe some other aspects of the library world as showing the symptoms of technical debt. Our physical collections of print resources are increasingly unmanaged and often unused, as constrained resources are directed to higher priorities and more attention is spent on building online collections of ebooks, for example. You could even, potentially, see a common thread with the whole concept of a 'library' – the popular view of a library as a place of books means that while libraries develop new services they often struggle to change their image to include the new world.
The end of 2015 and the start of 2016 seems to have delivered a number of interesting reports and presentations relevant to the library technology sphere. So Ken Chad’s latest paper ‘Rethinking the Library Services Platform‘ picks up on the lack of interoperability between library systems as does the new BiblioCommons report on the public library sector ‘Essential Digital Infrastructure for Public Libraries in England‘ commenting that “In retail, digital platforms with modular design have enabled quickly-evolving omnichannel user experiences. In libraries, however, the reliance on monolithic, locally-installed library IT has deadened innovation”.
As 'Rethinking the Library Services Platform' notes, in many ways the term 'platform' doesn't really match the reality of the current generation of library systems. They aren't a platform in the same way as an operating system such as Windows or Android; they don't operate in a way that third parties can build applications to run on the platform. Yes, they offer integration with financial, student and reference management systems, but essentially the systems are the traditional library management system reimagined for the cloud. Many of the changes are a consequence of what becomes possible with a cloud-based solution: their features are shared knowledge bases, with multi-tenanted applications shared by many users, as opposed to local databases and locally installed applications. The approach from the dwindling number of suppliers is to try to build as many products as possible to meet library needs. Sometimes that is by developing these products in-house (e.g. Leganto) and sometimes by the acquisition of companies with products that can be brought into the supplier's eco-system. The acquisition model is exactly the same as that practised by both traditional and new technology companies as a way of building their reach. I'm starting to view the platform as much more in line with the approach that a company like Google takes, with a broad range of products aiming to secure customer loyalty to their ecosystem rather than that of another company. So it may not be so surprising that technology innovation – which to my mind seems largely to be driven by vendors innovating to deliver to what they see as library needs, and shaped by what they see as an opportunity – isn't delivering the sort of platform that is suggested.
As Ken notes, Jisc's LMS Change work discussed back in 2012 the sort of loosely-coupled, component-based approach to library systems, giving libraries the ability to integrate different elements to give the best fit to their needs from a range of options. But in my view the options have very much narrowed since 2012/13.
The BiblioCommons report I find particularly interesting as it includes an assessment of how the format silos between print and electronic lead to a poor experience for users – in this case how ebook access simply doesn't integrate into OPACs, with applications such as Overdrive being used that are separate from the OPAC; Buckinghamshire library services' ebooks platform and library catalogue are typical. Few if any public libraries will have invested in the class of discovery systems now common in Higher Education (and essentially being proposed in this report), but even with discovery systems the integration of ebooks isn't as seamless as we'd want, with users ending up in a variety of different platforms with their own interfaces and restrictions on what can be done with the ebook. In some ways though, the public library ebook offer, which does offer some integration with the consumer ebook world of Kindle ebooks, is better than the HE world of ebooks, even if the integration through discovery platforms in HE is better. What did intrigue me about the BiblioCommons report is the plan to build some form of middleware system using 'shared data standards and APIs', and that leads me to wonder whether this can be part of the impetus for changing the way that library technology interoperates. The plan includes, in section 10.3, the proposal to 'deliver middleware, aggregation services and an initial complement of modular applications as a foundation for the ecosystem, to provide a viable pathway from the status quo towards open alternatives', so maybe this might start to make that sort of component-based platform and eco-system a reality.
Discovery is the challenge that Oxford's 'Resource Discovery @ The University of Oxford' report is tackling. The report, by the consultancy Athenaeum 21, looks at discovery from the perspective of a world-leading research institution with large collections of digital content, and looks at connecting not just resources but researchers, with visualisation tools for research networks and advanced search tools such as Elasticsearch. The recommendations include activities described as 'Mapping the landscape of things', 'Mapping the landscape of people', and 'Supporting researchers' established practices'. In some ways the problems being described echo the challenge faced in public libraries of finding better ways to connect users with content, but on a different scale, and the work includes other cultural sector institutions such as museums.
I also noticed a presentation from Keith Webster of Carnegie Mellon University, 'Leading the library of the future: W(h)ither technical services?'. This slidedeck takes you through a great summary of where academic libraries are now and the challenges they face with open access, pressure on library budgets and changes in scholarly practice. In a wide-ranging presentation it covers the changes that led to the demise of chains like Borders and Tower Records and sets the library into the context of changing models of media consumption. Of particular interest to me were the later slides about areas for development that, like the other reports, had improving discovery as part of the challenge. The slides clearly articulate the need for innovation as an essential element of work in libraries (measured for example as a % of time spent compared with routine activities) and also the value of metrics around impact, something of particular interest in our current library data project.
Four different reports, across some different types of libraries and cultural institutions, but all of which seem to me to be grappling with one issue – how do libraries reinvent themselves to maintain a role in the lives of their users when their traditional role is being eroded or when other challengers are out-competing them – whether through improving discovery, or by changing to stay relevant, or by doing something different that will be valued by users.
Two interesting pieces of news came out yesterday with the sale of 3M library systems to Bibliotheca http://www.bibliotheca.com and then the news that Proquest were buying ExLibris. For an industry take on the latter news look at http://www.sr.ithaka.org/blog/what-are-the-larger-implications-of-proquests-acquisition-of-exlibris/
From the comments on Twitter yesterday it was a big surprise to people, but it seems to make some sense. And it is a sector that has always gone through major shifts and consolidations – library systems vendors always seem to change hands frequently. Have a look at Marshall Breeding's graphic of the various LMS vendors over the years to see that change is pretty much a constant feature. http://librarytechnology.org/mergers/
There are some big crossovers in the product range, especially around discovery systems and the underlying knowledge bases. Building and maintaining those vast metadata indexes must be a significant undertaking and maybe we will see some consolidation. Primo and Summon fed from the same knowledge base in the future maybe?
Does it help with the conundrum of getting all the metadata in all the knowledge bases? Maybe it puts Proquest/ExLibris in a place where they have their own metadata to trade? But maybe it also opens up another competitive front.
It will be interesting to see what the medium-term impact will be on plans and roadmaps. Will products start to merge? Will there be less choice in the marketplace when libraries come round to choosing future systems?
So we're slowly emerging from our recent LMS project, with a bit of time to stop and reflect – partly at least to get project documentation for lessons learned and suchlike written up and the project closed down. We've moved from Voyager, SFX and EBSCO Discovery across to Alma and Primo. We went from a project kick-off meeting towards the end of September 2014 to being live on Alma at the end of January 2015 and live on Primo at the end of April.
So time for a few reflections about some of the things to think about from this implementation. I worked out the other day that this has been the fifth LMS procurement/implementation process I've been involved with, in a mixture of different and similar roles each time. For this one I started as part of the project team but ended up leading the implementation stage.
Tidy up your data before you start your implementation – particularly your bibliographic data, but other data too if you can. You might not be able to do so if you are on an older system, as you might not have the tools to sort out some of the issues. But the less rubbish you take over to your nice new system, the less sorting out you've got to do on the new system. And when you are testing your initial conversion, too much rubbish makes it harder to see the 'wood for the trees' – in other words, to work out which problems you need to fix by changing the way the data converts and which are just a consequence of poor data. With bibliographic data the game has changed: you are now trying to match your data with a massive bibliographic knowledge base.
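A tiny illustration of the sort of pre-migration tidy-up this means in practice: normalising ISBNs so records stand a better chance of matching against the new system's knowledge base, and flagging ones with bad check digits. The record structure and field names here are hypothetical, and real cleanup covers far more than ISBNs.

```python
def normalise_isbn(raw):
    """Strip hyphens and spaces and upper-case any trailing X."""
    return raw.replace("-", "").replace(" ", "").upper()

def valid_isbn10(isbn):
    """Verify the ISBN-10 check digit (weights 10..1, sum mod 11 == 0)."""
    if len(isbn) != 10:
        return False
    total = 0
    for weight, ch in zip(range(10, 0, -1), isbn):
        if ch == "X" and weight == 1:
            value = 10          # 'X' stands for 10, only in the final position
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += weight * value
    return total % 11 == 0

# Illustrative records only - not real catalogue data.
records = [
    {"title": "Example A", "isbn": "0-306-40615-2"},   # valid check digit
    {"title": "Example B", "isbn": "0-306-40615-3"},   # invalid check digit
]
for rec in records:
    rec["isbn"] = normalise_isbn(rec["isbn"])
    rec["isbn_ok"] = valid_isbn10(rec["isbn"])

print([(r["title"], r["isbn_ok"]) for r in records])
```

Running a pass like this before extraction makes it much easier, when testing the conversion, to tell a genuine mapping problem from plain bad data.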
It might be ideal to plan to go live with both LMS and Discovery at the same time, but it's hard to do. The two streams often need the same technical resources at the same time, and timescales are tight to get everything sorted. We decided that we needed to give users more notice of the changes to the Discovery system and to make sure there was a changeover period by running in parallel.
You can move quickly. We took about four months from the start-up meeting to being live on Alma, but that means a very compressed schedule. Suppliers have a well-rehearsed approach and project plan, but it's deliberately a generic, tried-and-tested one with only some flexibility. You have to be prepared to be flexible yourself and push things through as quickly as possible. There isn't much time for lots of consultation about decisions, which leads to…
As much as possible, get your decisions about changes in policies and new approaches made before you start. Or at least make sure that the people on the project team can get decisions made quickly (or make them themselves) and can identify, from the large number of documents, guidance notes and spreadsheets to work through, what the key decisions will be.
Get the experts who know about each of the elements of your LMS/Discovery systems involved with the project team. There's a balance between having too many and too few people on your project team, but you need people who know about your policies, processes, practices and workflows; your metadata (and about metadata in general, in a lot of detail, to configure normalisation, FRBR-ization etc.); and your technology, including how to configure authentication and CSS. Your project team is vital to your chances of delivering.
Think about your workflows and document them. Reflect on them as you go through your training. LMS workflows have some flexibility, but you still end up going through the workflows used by the system. Whatever workflows you start with, you will no doubt end up changing or modifying them once you are live.
Training. Documentation is good. Training videos are useful and have the advantage that they can be used whenever people have time. But you still need a blended approach – staff can't watch hours of videos, and you need to give people training on how your policies and practices will be implemented in the new LMS. So be prepared to run face-to-face sessions for staff.
Regular software updates. Alma gets monthly software updates. Moving from a system that was relatively static, we wondered how disruptive this would be. Advice from other libraries was that it wasn't a problem – and it doesn't seem to be. There are updated user guides and help in the system, and the changes happen over the weekend when we aren't using the LMS.
It's Software as a Service, so it's all different. I think we were used to Discovery being provided this way, so that's less of a change. The LMS was previously run by our corporate IT department, so in some senses it has just moved from one provider to another. We've a bit less control and flexibility to do stuff with it, but on the other hand we've more powerful tools and APIs.
Analytics is good and a powerful tool, but build up your knowledge and expertise to get the best out of it. We've reduced our reports and run a smaller number than we'd thought we'd need. Scheduled reports, widgets and dashboards are really useful, and we're still pretty much scratching the surface of what we can do. Access to the reports that others in the community have built is particularly useful when you are starting.
Contacts with other users are really useful. Sessions talking to other customers, User Group meetings and the mailing lists have all been really valuable. An active user community is a vital asset for any product, not just open source ones.
and finally, Reflection Twelve
We ran a separate strand of user research with students into what they wanted from library search. This was invaluable, as it gave us evidence to help in the procurement stage, but in particular it helped shape the decisions made about how to set up Primo. We've been able to say: this is what library users want, and we have the evidence for it. And that has been really important in being able to challenge thinking based on what we librarians think users want (or what we think they should want).
So, twelve reflections about the last few months. Interesting, enlightening, enjoyable, frustrating at times, and tiring. But worthwhile, achievable and something that is allowing us to move away from a set of mainly legacy systems – not well integrated, not so easy to manage – to a set of systems that are better integrated, have better tools and, perhaps as important, give us a better platform to build from.
Absence from blogging over the last few months feels very much like some form of winter hibernation, but it's mainly been a case of not having much time for reflection in the middle of a library management system implementation. We haven't quite finished yet, but we are a long way through the process and have been live on a cloud-based LMS for just over a month. So I can try to put together some early thoughts about the process and experience.
I worked on our project proposal around Christmas 2012 for a project we termed Library Futures that included a library management system and discovery procurement and implementation. But that wasn’t really the start of the process. We’d spent a bit of time looking at what our needs were and working with some consultants to get a better idea of the best options for us. I’d also had some involvement with the Jisc LMS Change project, all of which helped us to understand what was out there and what our options were. So that takes us back into 2011 and maybe a bit earlier. And a lot of the thinking was about the best timing for changing systems as the LMS market was in the early stages of the ‘Software as a Service’ reinvention and products were (and maybe still are) at an early stage. So by my rough calculation that’s a couple of years in the planning, followed by a year to secure approval, followed by an eighteen month or so procurement and implementation stage. It takes a long time and a lot of effort, and the final stage of implementation isn’t the most time-consuming part.
In the procurement stage we went the full EU tender route and, for our requirements catalogue (specification), made extensive use of the LibTechRFP exemplars http://libtechrfp.wikispaces.com/ – not just the UK Core Specification but also the examples for the Library Services Platform, Electronic Resources Management and Search and Discovery. We also needed to add in our own requirements and cut out features aimed more at a traditional 'physical' university. It ended up as quite a large and detailed catalogue of requirements. But I've always felt that to be important for library systems, as the detail is vital (and not just because the successful tender response forms part of our contract). Library management systems have to cover a lot of functions, and it's important to get into the detail to understand what using a system will mean for you in practice. Interesting to me, though, was to find some of our search requirements already getting reused in another system's requirements document within the institution.
I'm always on the lookout for useful new tools for projects, websites and so on. So it was good to see a tool like Basecamp being used by the supplier we chose. It isn't a free tool (other than for an initial period), but it worked well as a way of sharing files and having the sort of discussions you need when going through the implementation process. I felt the to-do list feature worked a bit less well. As a communication tool it worked neatly without being too formal or time-consuming. We've ended up using it on two different projects with two entirely different suppliers, so it is obviously doing something right.
Final thoughts for the moment are about the range of skills needed in a team putting in an LMS. Some obvious ones, such as systems and IT knowledge, procurement and project management, and, for libraries, knowledge of acquisitions, cataloguing/metadata and circulation processes. But also ones that can get overlooked, around training expertise, administrative support, decision making, business analysis and data quality. And above all some determination and team spirit to get through an immense to-do list.
For a little while I've been trying to find some ways of characterising the different generations or ages of library 'search' systems. By library 'search' I've been thinking in terms of tools to find resources in libraries (search as a locating tool) as well as the more recent trend (although online databases have been with us for a while) of search as a tool to find information.
I wanted something that I could use as a comparison that picked up on some of the features of library search but compared them with some other domain that was reasonably well known. Then I was listening to the radio the other day and there was some mention that it was the anniversary of the 45rpm single, and that made me wonder whether I could compare the generations of library search against the changes in formats in the music industry.
My attempt at trying to map them across is illustrated here. There are some connections – both discovery systems and streaming music services like Spotify are cloud hosted, and early printed music scores parallel printed library catalogues such as the original British Museum library catalogue. I'm not so sure about some of the stages in between, though; certainly the direction for both has been to make library/music content more accessible. But it seemed like a worthwhile thing to think about and try out. Maybe it works, maybe not.
It was Lorcan Dempsey who I believe coined the term 'full library discovery' in a blog post last year. As a stage beyond 'full collection discovery', 'full library discovery' added in results drawn from LibGuides or library websites, alongside resource material from collections. So, for example, a search for psychology might include psychology resources, as well as help materials for those psychology resources and contact details for the subject librarian who covers psychology. Stanford and Michigan are two examples of that approach, combining lists of relevant resources with website results.
Princeton's new All search feature offers a similar approach, discussed in detail in their FAQ. This combines results from their Books+, Articles+, Databases, Library Website and Library Guides into a 'bento box' style results display. Princeton's approach is similar to the search from North Carolina State University, who I think were among the first to come up with this style.
Although in most of these cases I suspect that the underlying systems are quite different, the approach is very similar. It is a 'loosely-coupled' approach where your search results page is drawn together in a 'federated' search style, by pushing your search terms to several different systems, making use of APIs, and then displaying the results in a dashboard-style layout. It has the advantage that changes to any of the underlying systems can be accommodated relatively easily, yet the display to your users stays consistent.
For me the disadvantages of this approach are the lack of any overriding relevancy ranking across the material, and the fact that it perpetuates the 'silo-ing' of content to an extent (Books, Articles, Databases etc.), driven largely by the underlying silos of systems that we rely on to manage that content. I've never been entirely convinced that users understand the distinction of what a 'database' might be. But the approach is probably as good as we can get until we have truly unified resource management and more control over relevancy ranking.
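The 'bento box' assembly pattern described above can be sketched quite simply: the search term is pushed to several backend APIs in parallel and the results are displayed grouped by source, rather than interleaved into one ranked list – which is exactly why there is no overriding relevancy ranking. The backends here are stand-in functions; a real implementation would call each system's HTTP API.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the real search APIs (catalogue, article index, LibGuides...).
def search_books(q):
    return [f"Book result for {q!r}"]

def search_articles(q):
    return [f"Article result for {q!r}"]

def search_guides(q):
    return [f"Library guide for {q!r}"]

BACKENDS = {
    "Books": search_books,
    "Articles": search_articles,
    "Guides": search_guides,
}

def bento_search(query):
    """Query every backend concurrently and return one labelled 'box' per source.

    Each box keeps its own backend's ranking; nothing re-ranks across sources.
    """
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query) for name, fn in BACKENDS.items()}
        return {name: f.result() for name, f in futures.items()}

print(bento_search("psychology"))
```

The loose coupling falls out of the design: swapping one backend for another only means changing one entry in the mapping, while the results page layout is untouched.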
Going beyond ‘full library discovery’
But 'full library discovery' is still very much a 'passive' search tool – by that I mean that it isn't personalised or 'active'. At some stage, to use those resources, a student will log in to the system, and that opens up an important question for me. Once you know who the user is, how far should you go to provide a personalised search experience? You know who they are, so you could provide recommendations based on what other students studying their course have looked at (or borrowed); you might even stray into 'learning analytics' territory and know which resources the highest-achieving students looked at.
You might know what resources are on the reading list for the course that student is studying – so do you search those resources first and offer those up as they might be most relevant? You might even know what stage a student has got to in their studies and know what assignment they have to do, and what resources they need to be looking at. Do you ‘push’ those to a student?
How far do you go in assembling a profile of what might be 'recommended' for a course, module or assignment, or of what other students on the cohort might be looking at, or looked at the last time the course ran? Do you look at students' previous search behaviour? How much of this might you do to build, and then search, some form of 'knowledge base' with the aim of surfacing the material that is likely to be of most relevance to a student? A search for psychology in NCSU's Search All search box gives you the top three articles out of 2,543,911 articles in Summon, and likely behaviour is not to look much beyond the first page of results. So should we be making sure that those are likely to be the most relevant ones?
But then there's serendipity – finding the different things that you haven't looked for, or read, before, because they are new or different. One of the issues with recommendations is the tendency for them to be circular: 'what gets recommended gets read', to corrupt the performance indicator mantra. So how far do you go? 'Mind-reading search', anyone?
I've definitely blogged less (24 posts in 2013 compared with 37 in 2012 and 50 in 2011) – mind you, the 'death of blogging' has been announced, and there seem to be fewer library bloggers than in the past, so maybe blogging less just reflects a general trend. Comments about blogging suggest that Tumblr, Twitter or Snapchat are maybe taking people's attention (both bloggers' and readers') away from blogs. But I'm not 'publishing' through other channels particularly, other than occasional tweets, so that isn't the reason for me to blog less. There has been a lot going on, but that's probably not greatly different from previous years. I think I've probably been to fewer conferences and seminars, particularly internal seminars, so that has been one area where I've not had as much to blog about.
To blog about something or not to blog about it
I've been more conscious of not blogging about some things that in previous years I probably would have blogged about. I don't think I blogged about the Future of Technology in Education conference this year, although I have done in the past. Not because it wasn't interesting – it was – but perhaps from a sense that I've blogged about it before and might just be repeating myself. With the exception of posts about website search and activity data I've not blogged so much about some of the work that I've been doing. So I've blogged very little about the digital library work, although it (and the STELLAR project) were a big part of the interesting stuff that has been going on.
Thinking about the year ahead
I've never been someone who sets out predictions or new year resolutions. I've never been convinced that you can actually predict (and plan) too far ahead in detail without too many variables fundamentally changing those plans. There's a quote attributed to various people along the lines that 'no plan survives contact with the enemy', and I'd agree with that sentiment. However much we plan, we are always working with an imperfect view of the world. Circumstances change and priorities vary, and you have to adapt to that. Thinking back to FOTE 2013, it was certainly interesting to hear BT's futurologist Nicola Millard describe her main interest as being the near future, and herself as more of a 'soon-ologist' than a futurologist.
What interests (intrigues, perhaps) me more is less planning than 'shaping' a future – more change management than project management, I suppose. But I think it is more than that: how do those people who carve out a new 'reality' go about making that change happen? Maybe it is about realising a 'vision', but assembling a 'vision' is very much the easy part of the process. Getting buy-in to a vision does seem to be something that we struggle with in a library setting.
On with 2014
Change management is high on the list for this year. We've done a certain amount of the 'visioning' to get buy-in to funding a change project. So this year we've work to do to procure a complete suite of new library systems (the first time here, I think, for 12 years or so), in a project called 'Library Futures' that also includes some research into student needs from library search and the construction of a 'digital skills passport'. I've also got continuing work on digital libraries/archives as we move that work from development to live, alongside work with activity data, our library website and particularly work on integrating library stuff much more into a better student experience. So hopefully some interesting things to blog about. And hopefully a few new pictures to brighten up the blog (starting with a nice flower picture from Craster in the summer).