The news, reported in an article by Marshall Breeding in American Libraries, that EBSCO has decided to support a new open source library services platform is a fascinating development. Joining with Kuali OLE to develop what will essentially be a different open source product is a significant move for the library technology sector. It’s particularly interesting that EBSCO has gone the route of providing financial support to an open source system rather than buying a library systems company. The scope and timescales are ambitious: to have something ready for 2018.
Open source library management systems haven’t had the impact that systems like Moodle have had in the virtual learning environment sector, and in some ways it is odd that academic libraries haven’t been willing to adopt such a system, given that universities do seem to have an appetite for open source software. Maybe open source library systems products haven’t been developed sufficiently to compete with commercial providers. Software as a Service (SaaS) is now coming to be accepted by corporate IT departments as a standard method of service provision, something that I think a couple of the commercial providers realised at quite an early stage, so it is good to see this initiative recognising that reality. It will be interesting to see how this develops.
I was particularly interested in a term I came across in a blog post on innovation on the Nesta blog the other week: Innovation in the public sector: Is risk aversion a cause or a symptom? The blog post talks about Organisation Debt and Organisational Physics and is a really interesting take on why large organisations can struggle with innovation. It’s well worth a read. It starts by referencing the concept of ‘technical debt‘, described in the blog post as “… where quick fixes and shortcuts begin to accumulate over time and eventually, unless properly fixed, can damage operations.” It’s a term that tends to be associated with software development, but it started me thinking about how the concept of ‘technical debt’ might be relevant to the library world.
If we expand the technical debt concept to the library sector I’d suggest that you could look at at least three areas where that concept might have some resonance: library systems, library practices and maybe a third one around library ‘culture’ – potentially a combination of collections, services and something of the ‘tradition’ of what a library might be.
Our systems are a complex and complicated mix: library management systems, e-resources management systems, discovery services, OpenURL resolvers, link resolvers, PC booking systems and so on. It can be ten years or more between libraries changing their LMS, and although, with library services platforms, we are seeing some consolidation of systems into a single product, there is still a job to do integrating legacy systems into the mix. For me the biggest area of ‘technical debt’ comes in our approach to linking and websites. Libraries typically spend significant effort in making links persistent, and in coping with the transition from one web environment to another by redirecting URLs. It’s not uncommon to have redirection processes in place to cope with direct links to content on previous websites, trying to connect users directly to the replacement pages. Yet on the open web ‘link rot‘ is a simple fact of life. Managing these legacy links is, I’d suggest, a significant technical debt that libraries carry.
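At its core, that kind of redirection process is a lookup from old locations to new ones that has to be carried forward, and re-checked, through every site migration. A minimal sketch (the paths and mappings below are invented examples, not from any real library site):

```python
# Hypothetical legacy-link redirect table: old paths mapped to their
# replacements. Real setups often live in web server rewrite rules instead.
LEGACY_REDIRECTS = {
    "/oldsite/ejournals.html": "/library/collections/ejournals",
    "/oldsite/catalogue.html": "/library/search",
}

def resolve(path: str) -> str:
    """Return the current location for a requested path.

    Known legacy paths redirect to their replacements; anything
    else passes through unchanged.
    """
    return LEGACY_REDIRECTS.get(path, path)
```

Every entry in a table like this is a small piece of debt: it only pays off for as long as someone keeps it accurate.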
I think you could point to several aspects of library practices that could fall under the category of technical debt, but I’d suggest the primary one is in our library catalogue and cataloguing practices. Our practices change across the years, but overall the quality of our older records is often lower than we’d want. Yet we typically carry those records across from system to system. We try to improve them or clean them up, but frequently it’s hard to justify the resource spent on ‘re-cataloguing’ or ‘retrospective cataloguing’. Newer approaches that make use of collective knowledge bases and link holdings to records have some impact on our ability to update our records, but the quality of some of the records in knowledge bases can also fall short of the level libraries would like.
You could also describe other aspects of the library world as showing the symptoms of technical debt: our physical collections of print resources, for example, increasingly unmanaged and often unused as constrained resources are directed to higher priorities and more attention is spent on building online collections of ebooks. You can even, potentially, see a common thread with the whole concept of a ‘library’ – the popular view of a library as a place of books means that while libraries develop new services they often struggle to change their image to include the new world.
The end of 2015 and the start of 2016 seems to have delivered a number of interesting reports and presentations relevant to the library technology sphere. So Ken Chad’s latest paper ‘Rethinking the Library Services Platform‘ picks up on the lack of interoperability between library systems as does the new BiblioCommons report on the public library sector ‘Essential Digital Infrastructure for Public Libraries in England‘ commenting that “In retail, digital platforms with modular design have enabled quickly-evolving omnichannel user experiences. In libraries, however, the reliance on monolithic, locally-installed library IT has deadened innovation”.
As ‘Rethinking the Library Services Platform‘ notes, in many ways the term ‘platform’ doesn’t really match the reality of the current generation of library systems. They aren’t a platform in the same way as an operating system such as Windows or Android: they don’t operate in a way that lets third parties build applications to run on top of them. Yes, they offer integration with financial, student and reference management systems, but essentially they are the traditional library management system reimagined for the cloud. Many of the changes are a consequence of what becomes possible with a cloud-based solution, so their features are shared knowledge bases and multi-tenanted applications shared by many users, as opposed to local databases and locally installed applications. The approach from the dwindling number of suppliers is to try to build as many products as possible to meet library needs, sometimes by developing these products in-house (e.g. Leganto) and sometimes by acquiring companies with products that can be brought into the supplier’s ecosystem. The acquisition model is exactly the same as that practised by both traditional and new technology companies as a way of building their reach. I’m starting to view the platform as much more in line with the approach a company like Google takes: a broad range of products aiming to secure customer loyalty to their ecosystem rather than another company’s. So it may not be so surprising that technology innovation – which to my mind seems largely to be driven by vendors delivering what they see as library needs, shaped by what they think they see as an opportunity – isn’t delivering the sort of platform that is suggested.
As Ken notes, Jisc’s LMS Change work discussed back in 2012 the sort of loosely-coupled, component-based approach to library systems that would give libraries the ability to integrate different elements for the best fit to their needs from a range of options. But in my view the options have very much narrowed since 2012/13.
The BiblioCommons report I find particularly interesting as it includes an assessment of how the format silos between print and electronic lead to a poor experience for users – in this case how ebook access simply doesn’t integrate into OPACs, with applications such as Overdrive being used that are separate from the OPAC; Buckinghamshire’s library service ebooks platform and library catalogue are typical. Few if any public libraries will have invested in the class of discovery systems now common in Higher Education (and essentially being proposed in this report), but even with discovery systems the integration of ebooks isn’t as seamless as we’d want, with users ending up in a variety of different platforms with their own interfaces and restrictions on what can be done with the ebook. In some ways, though, the public library ebook offer, which does provide some integration with the consumer ebook world of Kindle ebooks, is better than the HE world of ebooks, even if the integration through discovery platforms in HE is better. What did intrigue me about the BiblioCommons report is the plan to build some form of middleware system using ‘shared data standards and APIs’, which leads me to wonder whether this can be part of the impetus for changing the way that library technology interoperates. The plan includes in section 10.3 the proposal to ‘deliver middleware, aggregation services and an initial complement of modular applications as a foundation for the ecosystem, to provide a viable pathway from the status quo towards open alternatives‘, so maybe this might start to make that sort of component-based platform and ecosystem a reality.
Discovery is the challenge tackled by Oxford’s ‘Resource Discovery @ The University of Oxford‘ report. The report, by consultancy Athenaeum 21, looks at discovery from the perspective of a world-leading research institution with large collections of digital content, and at connecting not just resources but researchers, with visualisation tools for research networks and advanced search tools such as Elasticsearch. The recommendations include activities described as ‘Mapping the landscape of things’, ‘Mapping the landscape of people’, and ‘Supporting researchers’ established practices’. In some ways the problems being described echo the challenge faced in public libraries of finding better ways to connect users with content, but on a different scale, and the report takes in other cultural sector institutions such as museums.
I also noticed a presentation from Keith Webster of Carnegie Mellon University, ‘Leading the library of the future: W(h)ither technical services?’ This slidedeck gives a great summary of where academic libraries are now and the challenges they face with open access, pressure on library budgets and changes in scholarly practice. In a wide-ranging presentation it covers the changes that led to the demise of chains like Borders and Tower Records and sets the library into the context of changing models of media consumption. Of particular interest to me were the later slides about areas for development which, like the other reports, had improving discovery as part of the challenge. The slides clearly articulate the need for innovation as an essential element of work in libraries (measured, for example, as a percentage of time spent compared with routine activities) and the value of metrics around impact, something of particular interest in our current library data project.
Four different reports, across different types of libraries and cultural institutions, but all of which seem to me to be grappling with one issue: how do libraries reinvent themselves to maintain a role in the lives of their users when their traditional role is being eroded or when other challengers are out-competing them – whether through improving discovery, by changing to stay relevant, or by doing something different that will be valued by users.
We’ve started using BrowZine (browzine.com) as a different way of offering access to online journals. Up until recently there were iOS and Android app versions but they have now been joined by a desktop version.
BrowZine’s interesting as it tries to replicate the experience of browsing recent copies of journals in a physical library. It links into the library authentication system and is fed with a list of library holdings. There are also some open access materials in the system.
You can browse for journals by subject or search for specific journals and then view the table of contents for each journal issue and link straight through to the full-text of the articles in the journals. In the app versions you can add journal titles to your personal bookshelf (a feature promised for the desktop version later this year) and also see when new articles have been added to your chosen journals (shown with the standard red circle against the journal on the iOS version).
A useful tool if you have a selection of journals you need to keep up to date with. Certainly the ease with which you can connect to the full text contrasts markedly with some of the hoops that we seem to expect users to cope with in some other library systems.
One of the interesting features of our new library game OpenTree, for me, is that you can engage with it in a few different ways. At one level it’s a game, with points and badges for interacting with the game and with library content, resources and webpages. It’s also social, so you can connect with other people and review and share resources.
But, as a user you can choose the extent that you want to share. So you can choose to share your activity with all users in OpenTree, or restrict it so only your friends can see your activity, or choose to keep your activity private. You can also choose whether or not things you highlight are made public.
So you’d wonder what value you’d get in using it if you make your activity entirely private. But you can use it as a way of tracking which library resources you are using. And you can organise them by tagging them and writing notes about them so you’ve got a record of the resources you used for a particular assignment. You might want to keep your activity private if you’re writing a paper and don’t want to share your sources or if you aren’t so keen on social aspects.
If you share your activities with friends and maybe connect with people studying the same module as you, then you could see some value in sharing useful resources with fellow students you might not meet otherwise. In a distance-learning institution with potentially hundreds of students studying your module, you might meet a few fellow students in local tutorials or on module forums but might never connect with most people following the same pathway as you.
And some people will be happy to share, will want to get engaged with all the social aspects and the gaming aspects of OpenTree. It will be really interesting to see how users get to grips with OpenTree and what they make of it and to hear how people are using it.
It will be particularly interesting to see how our users’ engagement with it might differ from the versions at bricks-and-mortar universities in Huddersfield, Glasgow and Manchester. OpenTree’s focus is online and digital, so it doesn’t include loans and library visits, and our users are often older, studying part-time and not campus-based.
In early feedback we’re already seeing a sense that some of the game aspects, such as the subject leaderboard, are of less interest than expected. Maybe that reflects students being much more focused on outcomes, although research (Tomlinson 2014, ‘Exploring the impact of policy changes on students’ attitudes and approaches to learning in higher education’, HEA) seems to suggest that, as a result of increased university fees and student loans, this isn’t just a factor for part-time and distance-learning students. It might also be that because we haven’t gone for an individual leaderboard there’s less personal investment, or just that users aren’t so sure what it represents.
One of the projects that we’ve been working on as part of our Library Futures programme has been a product called OpenTree. OpenTree is based on the Librarygame software from a small development team at ‘Running in the Halls’. Librarygame adds gaming and social aspects to student engagement with library services.
Librarygame was developed originally as Lemontree for Huddersfield University (https://library.hud.ac.uk/lemontree/) and then updated and adopted as librarytree and BookedIn for Glasgow and Manchester Universities respectively (https://librarytree.gla.ac.uk/ and https://bookedin.manchester.ac.uk/).
Being originally based around engagement with physical libraries – taking data on loans from the library management system, or on physical library visits via building access logs – the basic game model had to change a bit for a distance-learning university where students don’t visit the University library or borrow books.
OpenTree gives users points for accessing resources, and points build up into levels in the game. Activities such as making friends, reviewing, tagging and sharing resources also earn you badges. We’ve also added a Challenges section to highlight activities and encourage users to try out different things – trying Being Digital, for example.
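To make the mechanics concrete, a points-and-badges model like this can be sketched as thresholds and activity targets. This is purely illustrative: OpenTree’s actual thresholds, levels and badge rules aren’t published here, so the numbers and names below are invented.

```python
# Invented thresholds: points needed to reach each level (level 0 upwards).
LEVEL_THRESHOLDS = [0, 50, 150, 300, 500]

def level_for(points: int) -> int:
    """Return the highest level whose threshold the points total meets."""
    level = 0
    for i, threshold in enumerate(LEVEL_THRESHOLDS):
        if points >= threshold:
            level = i
    return level

def badges_for(activity_counts: dict) -> list:
    """Award a badge once an activity count passes its (made-up) target."""
    targets = {"review": 5, "tag": 10, "share": 3}
    return [name for name, needed in targets.items()
            if activity_counts.get(name, 0) >= needed]
```

The point of a model like this is that every interaction feeds one simple running total, which is what makes the game easy to layer over existing library activity data.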
Because it lists the library resources you’ve been accessing, I’ve already been finding it useful as a way of organising and remembering what I’ve been using, so we’re hopeful that students will also find it useful and really get into the social aspects.
OpenTree launches to students in the autumn but is up-and-running in beta now. A video introducing OpenTree is on YouTube at: https://www.youtube.com/watch?v=yeSU0FwVNvU
We’re really looking forward to seeing how students get on with OpenTree and already have a few thoughts about enhancements and developments, and no doubt other ideas will come up once more people start using it.
So we’re slowly emerging from our recent LMS project, with a bit of time to stop and reflect – partly at least to get project documentation, lessons learned and suchlike written up and the project closed down. We’ve moved from Voyager, SFX and EBSCO Discovery across to Alma and Primo. We went from a project kick-off meeting towards the end of September 2014 to being live on Alma at the end of January 2015 and live on Primo at the end of April.
So, time for a few reflections on this implementation. I worked out the other day that this has been the fifth LMS procurement/implementation process I’ve been involved with, in different and similar roles each time. For this one I started as part of the project team but ended up leading the implementation stage.
Tidy up your data before you start your implementation – particularly your bibliographic data, but other data too if you can. You might not be able to if you are on an older system, as you might not have the tools to sort out some of the issues. But the less rubbish you take over to your nice new system, the less sorting out you’ve got to do there. And when you are testing your initial conversion, too much rubbish makes it harder to see the wood for the trees – in other words, to work out which problems you need to fix by changing the way the data converts and which are just a consequence of poor data. With bibliographic data the game has changed: you are now trying to match your data against a massive bibliographic knowledge base.
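One concrete example of the kind of tidy-up that pays off when matching against a knowledge base is normalising identifiers. The sketch below is my own illustration, not part of any supplier’s migration tooling: it strips punctuation from ISBNs and upgrades ISBN-10s to ISBN-13s using the standard check-digit calculation.

```python
def normalise_isbn(raw: str) -> str:
    """Strip punctuation and upgrade ISBN-10 to ISBN-13.

    Knowledge bases typically key on 13-digit ISBNs, so records
    holding hyphenated or 10-digit forms match badly without this.
    """
    isbn = raw.replace("-", "").replace(" ", "").upper()
    if len(isbn) == 13:
        return isbn
    if len(isbn) == 10:
        # Prefix 978, drop the old check character, recompute the
        # ISBN-13 check digit (alternating weights 1 and 3).
        core = "978" + isbn[:9]
        total = sum(int(d) * (1 if i % 2 == 0 else 3)
                    for i, d in enumerate(core))
        return core + str((10 - total % 10) % 10)
    raise ValueError(f"Not an ISBN-10 or ISBN-13: {raw!r}")
```

Running something like this across an export, and reviewing whatever it rejects, is exactly the sort of job that is much easier to do before migration than after.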
It might be ideal to plan to go live with both LMS and Discovery at the same time but it’s hard to do. The two streams often need the same technical resources at the same time. Timescales are tight to get everything sorted in time. We decided that we needed to give users more notice of the changes to the Discovery system and make sure there was a changeover period by running in parallel.
You can move quickly. We took about four months from the startup meeting to being live on Alma but it means that you have a very compressed schedule. Suppliers have a well-rehearsed approach and project plan but it’s designed as a generic approach. There’s some flexibility but it’s deliberately a generic tried-and-tested approach. You have to be prepared to be flexible and push things through as quickly as possible. There isn’t much time for lots of consultation about decisions, which leads to…
As much as possible, get your decisions about changes in policies and new approaches made before you start. Or at least make sure that the people on the project team can get decisions made quickly (or make them themselves) and can identify, from the large number of documents, guidance and spreadsheets to work through, what the key decisions will be.
Get the experts who know about each element of your LMS/Discovery systems involved with the project team. There’s a balance between having too many and too few people on your project team, but you need people who know about your policies, processes, practices and workflows, your metadata (and about metadata in general in a lot of detail, to configure normalisation, FRBRization etc.), and your technology, including how to configure authentication and CSS. Your project team is vital to your chances of delivering.
Think about your workflows and document them. Reflect on them as you go through your training. LMS workflows have some flexibility but you still end up going through the workflows used by the system. Whatever workflows you start with you will no doubt end up changing or modifying them once you are live.
Training. Documentation is good. Training videos are useful and have the advantage of being able to be used whenever people have time. But you still need a blended approach, staff can’t watch hours of videos, and you need to give people training about how your policies and practices will be implemented in the new LMS. So be prepared to run face to face sessions for staff.
Regular software updates. Alma gets monthly software updates. Moving from a system that was relatively static we wondered about how disruptive it would be. Advice from other Libraries was that it wasn’t a problem. And it doesn’t seem to be. There are new updated user guides and help in the system and the changes happen over the weekend when we aren’t using the LMS.
It’s Software as a Service, so it’s all different. I think we were used to Discovery being provided this way, so that’s less of a change. The LMS was previously run by our corporate IT department, so in some senses it’s just moved from one provider to another. We’ve a bit less control and flexibility to do things with it, but on the other hand we’ve more powerful tools and APIs.
Analytics is a powerful tool, but build up your knowledge and expertise to get the best out of it. We’ve reduced our reports and do a smaller number than we thought we’d need. Scheduled reports, widgets and dashboards are really useful, and we’re pretty much just scratching the surface of what we can do. Access to the community reports that others have done is really helpful, especially when you are starting.
Contacts with other users are really useful. Sessions talking to other customers, user group meetings and the mailing lists have all been really valuable. An active user community is a vital asset for products, not just open source ones.
And finally, reflection twelve:
We ran a separate strand of user research with students into what users wanted from library search. This was invaluable, as it gave us evidence to help in the procurement stage, but in particular it helped shape the decisions made about how to set up Primo. We’ve been able to say: this is what library users want, and we have the evidence. And that has been really important in being able to challenge thinking based on what we librarians think users want (or what we think they should want).
So, twelve reflections about the last few months. Interesting, enlightening, enjoyable, frustrating at times, and tiring. But worthwhile, achievable and something that is allowing us to move away from a set of mainly legacy systems, not well-integrated, not so easy to manage to a set of systems that are better integrated, have better tools and perhaps as important have a better platform to build from.
At the end of November I was at a different sort of conference to the ones I normally get to attend. This one, Design4learning, was held at the OU in Milton Keynes but was a more general education conference. Described as aiming “to advance the understanding and application of blended learning, design4learning and learning analytics”, it covered topics such as MOOCs, elearning, learning design and learning analytics.
There was a useful series of presentations at the conference, and several of them are available from the conference website. We’d put together a poster talking about the work we’ve started to do in the library on ‘library analytics’, entitled ‘Learning Analytics – exploring the value of library data’, and it was good to talk to a few non-library people about the wealth of data that libraries capture and how that can contribute to the institutional picture of learning analytics.
Our poster covered some of the exploration that we’ve been doing, mainly with online resource usage from our EZProxy logfiles. In some cases we’ve been able to join that data with demographic and other data from surveys to start to look in a very small way at patterns of online library use.
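As an illustration of the first step in that kind of exploration, here is a minimal sketch of counting successful accesses per user from EZProxy-style logs. It assumes EZProxy’s default Apache-style log format (%h %l %u %t "%r" %s %b); the choice of filtering to 2xx responses is mine, and the sample lines in the usage below are invented.

```python
import re
from collections import Counter

# Matches EZProxy's default Apache-style log line:
# host ident user [timestamp] "request" status bytes
LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<when>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+'
)

def accesses_per_user(lines):
    """Count successful (2xx) requests per username.

    Usernames would need anonymising or pseudonymising before any
    joining with demographic data.
    """
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("status").startswith("2"):
            counts[m.group("user")] += 1
    return counts
```

From a per-user count like this it is a short step to joining (carefully, and with consent considerations) against survey or demographic data, which is the small-scale pattern analysis the poster described.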
The poster also highlighted the range of data that libraries capture and the sorts of questions that could be asked and potentially answered. It also flagged up the leading-edge work of projects such as Huddersfield’s Library Impact Data Project and the Jisc LAMP project.
An interesting conference and an opportunity to talk with a different group of people about the potential of library data.
For me, two big themes came to mind after this year’s Future of Technology in Education conference (FOTE): firstly, creativity, innovation and co-creation; and secondly, how fundamental data and analytics are becoming.
Creativity, innovation and co-creation
Several of the speakers talked about innovation and creativity. Dave Coplin talked of the value of Minecraft and Project Spark and the need to create space for creativity, while Bethany Koby showed us examples of some of the maker kits ‘Technology Will Save Us’ are creating.
Others talked of ‘flipping the classroom’ and learning from students as well as co-creation and it was interesting in the Tech start-up pitchfest that a lot of the ideas were student-created tools, some working in the area of collaborative learning.
Data and analytics
The second big trend for me was analytics and data. I was particularly interested to see how many of the tools and apps being pitched at the conference had an underlying layer of analytics. Evaloop, working in the area of student feedback; Knodium, a space for student collaboration; Reframed.tv, offering interaction and sharing tools for video content; Unitu, an issues-tracking tool; and MyCQs, a learning tool, all seemed to make extensive use of data and analytics, while Fluency included teaching analytics skills. It is interesting to see how many app developers have learnt the lesson of Amazon and Google about the value of the underlying data.
Final thoughts and what didn’t come up at the conference
I didn’t hear the acronym MOOC at all – slightly surprising, as it was certainly a big theme of last year’s conference. Has the MOOC bubble passed, or is it just embedded into the mainstream of education? Similarly learning analytics (as a specific theme): certainly analytics and data were mentioned (as I’ve noted above), but of Learning Analytics there wasn’t a mention – maybe it’s embedded into HE practice now?
Final thoughts on FOTE. A different focus to previous years but still with some really good sessions and the usual parallel social media back-channels full of interesting conversations. Given that most people arrived with at least one mobile device, power sockets to recharge them were in rather short supply.