Two interesting pieces of news came out yesterday: the sale of 3M library systems to Bibliotheca http://www.bibliotheca.com and then the news that ProQuest was buying ExLibris. For an industry take on the latter, have a look at http://www.sr.ithaka.org/blog/what-are-the-larger-implications-of-proquests-acquisition-of-exlibris/
From the comments on Twitter yesterday it came as a big surprise to people, but it seems to make some sense. It is a sector that has always gone through major shifts and consolidations, with library systems vendors changing hands frequently. Have a look at Marshall Breeding's graphic of the various LMS vendors over the years to see that change is pretty much a constant feature: http://librarytechnology.org/mergers/
There are some big crossovers in the product range, especially around discovery systems and the underlying knowledge bases. Building and maintaining those vast metadata indexes must be a significant undertaking and maybe we will see some consolidation. Primo and Summon fed from the same knowledge base in the future maybe?
Does it help with the conundrum of getting all the metadata in all the knowledge bases? Maybe it puts Proquest/ExLibris in a place where they have their own metadata to trade? But maybe it also opens up another competitive front.
It will be interesting to see what the medium-term impact will be on plans and roadmaps. Will products start to merge, and will there be less choice in the marketplace when libraries come round to choosing future systems?
I’ve spent the last couple of days at a fascinating JISC/SCONUL workshop, ‘The Squeezed Middle? Exploring the Future of Library Systems’. ‘The Squeezed Middle’ refers to the concentration of attention in recent months on electronic resource management (in the SCONUL Shared Services and Knowledge Base+ activities) and discovery systems (such as Summon, EDS and Primo), which has rather taken the focus away from other library systems, notably the Library Management System. In part, it was explained, this was deliberate: open source LMSs (such as Kuali OLE and Evergreen) and new systems such as Alma from ExLibris, which aim to unify print, electronic and digital resource management, have been (and still are) in development and need time to mature. But we are now starting to see these developments move on, and open source starting to be adopted (by Staffordshire University library, for example). So the time is right to start to focus on these systems afresh.
Punctuating the workshop were a series of deliberately provocative and challenging ‘visions’ of the future library of 2020 and a video from Lorcan Dempsey. [Paul Walk has blogged his here.] Against this background we looked at several topics such as collections, space, systems and expertise around the library systems domain. Overnight we looked at a series of sixty-odd themes and activities and followed that up today looking at prioritisation and value of those activities to try to understand what might be some priority tasks.
A few things came to mind for me during and after the workshop. Firstly, there maybe isn’t a clear definition of the boundaries of this space, and really no common view of which aspects of print/electronic/digital processes and collections we are scoping and addressing. It also struck me that a lot of the issues, concerns and priorities were about data rather than systems or processes: topics such as licences for ebooks, open bibliographic metadata, passing data to institutional finance systems, and activity data, for example. I do find it particularly interesting that, despite the effort that goes into the data that libraries consume, there are some really big tasks to address in flowing data around our systems without duplication or unnecessary activity. (Incidentally, there’s a concept used in customer care termed ‘unnecessary contact’, and there used to be a National Indicator, NI14, under which local government had to reduce unnecessary contact: in other words, reduce the instances where customers have to contact you for further clarification, so that you deal with the issue at the first point of contact. I start to wonder whether there’s a similar concept we might apply to libraries when we carry out extra processing and cataloguing instead of taking ‘shelf-ready’ books and downloaded bibliographic records – unnecessary refinements, maybe?)
I also found it interesting how the topic of reading list solutions came up as a hot issue. It’s of particular interest to me given my involvement in the OU’s TELSTAR reference management project. The Reading-List-Solutions JISCMail list has been busy in the last week discussing the various systems (often developed in house). And it was really fascinating to see how such a fundamental and time-consuming part of our daily work hasn’t really been solved, let alone integrated completely into the procurement and discovery workflow. Although I know there’s some significant complexity there, I find it particularly strange that this hasn’t been a feature built into the LMS.
Final thoughts on library systems of the future
It seems to me that there are some general principles you could think about for future library systems in this space. And I suppose I’m thinking beyond the next generation of systems such as Alma, so these may be completely off-the-wall ideas. But there are a few things that come to mind as we move towards 2020. So what might a 2020 LMS look like?
> the systems are componentised (think Drupal CMS), so both libraries and users can choose which components they use. And they are largely about flowing data, workflow and process rather than about storing data.
> users control their own profiles (and data) – we (institutions) give them a ‘key’ to access collections we have paid for (so authentication is at a network level or with aggregators?)
> catalogues are distributed – linked data uses the most appropriate vocabularies, most not even run by libraries – local elements are added at the time you choose to procure – there is no ‘catalogue record’ as such but a collection of descriptive elements – you choose where you get your records from, but you don’t download them to ‘your’ LMS database
> discovery interface is at the choice of the user – collections are packaged/streamed? and contributed to the aggregators
> rather than a model where libraries buy licensed content and then run systems for their users to access that content – so all institutions largely duplicate their systems – the content owners/aggregators provide the access maybe? as they already start to do with discovery systems?
> there is a ‘rump’ of an LMS database that is your audit trail of transactions and holdings (but with network-level unique IDs that link to descriptive data held at the network level), and statistics are held in the cloud (JUSP+++)
> so we contribute our special digital and electronic collections – either to national scale repositories or to open discovery systems?
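To make the distributed-catalogue idea above a little more concrete, here is a minimal sketch of how a ‘record’ might be assembled on demand rather than stored: descriptive elements come from network-level sources keyed by a unique ID, and the local LMS ‘rump’ holds only holdings and transaction data. This is purely illustrative – the source names, identifiers and fields are all invented, and real network sources would be linked-data lookups rather than local dictionaries.

```python
# Hypothetical sketch only: names, IDs and fields are invented for illustration.

NETWORK_DESCRIPTIONS = {
    # Descriptive metadata held at the network level, keyed by a
    # network-level unique ID. In practice this would be a linked-data
    # lookup against external vocabularies, not a local dictionary.
    "urn:example:work:123": {
        "title": "Linked Data for Libraries",
        "creator": "A. N. Author",
    },
}

LOCAL_HOLDINGS = {
    # The local LMS 'rump': holdings and transaction data only.
    "urn:example:work:123": {"location": "Main Library", "loans": 42},
}


def assemble_record(work_id):
    """Combine network-level description with local holdings at request time.

    There is no stored 'catalogue record'; the view is built when needed
    from whichever descriptive sources the library has chosen.
    """
    description = NETWORK_DESCRIPTIONS.get(work_id, {})
    holdings = LOCAL_HOLDINGS.get(work_id, {})
    return {**description, **holdings, "id": work_id}


record = assemble_record("urn:example:work:123")
print(record["title"], "-", record["location"])
```

The point of the sketch is that the local database never duplicates the descriptive elements; it keeps only the ID and the data that is genuinely local, which is roughly the ‘no download to your LMS database’ idea in the list above.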
Perhaps fanciful and not very realistic, but a world away from the monolithic LMS that even the open source and new-generation systems seem to be building.
All round it was a really good and enjoyable workshop and I’m glad I had the opportunity to go. I hope the stuff we’ve done helps to inform the future thinking and directions. Thanks to SCONUL and JISC for organising/funding it and to Ben Showers and David Kay.