
One of the first projects I worked on at the OU was a Jisc-funded project called Telstar. Telstar built a reference management tool, called MyReferences, integrating RefWorks into a Moodle Virtual Learning Environment (VLE). That MyReferences tool shortly reaches what software people call 'End-of-Life' and the website world likes to call 'sunsetting'; in other words, MyReferences is closing down later this month. So it seemed like a good time to reflect on some of the things I've learnt from that piece of work.

In a lot of ways, the things that Telstar and MyReferences did have now become commonplace and routine. References were stored remotely in the RefWorks platform (we'd now describe that as cloud-hosted), which has almost become the default way of operating, whether you think of email with Outlook 365 or library management systems such as Ex Libris Alma. Integration with Moodle was achieved using an API; again, that's now a standard approach. But both seemed quite a new departure in 2010.

I remember it being a complex project in lots of ways, creating integrations not just between RefWorks and Moodle but also making use of some of the OpenURL capabilities of SFX.  It was also quite ambitious in aiming to provide solutions applicable to both students and staff.  Remit (the Reference Management Integration Toolkit) gives a good indication of some of the complexities not just in systems but also in institutional and reference management processes.   The project not only ran a couple of successful Innovations in Reference Management events but led to the setup of a JiscMail reading list systems mailing list.

Complexity is the main word that comes to mind when I think about the detailed work that went into mapping referencing styles between OU Harvard in RefWorks and MyReferences, to ensure that students could get a simplified reference management system in MyReferences without having to plunge straight into the complexity of full-blown RefWorks. It really flagged for me the implications of not having standard referencing styles across an institution, and the impact of designing your own custom institutional style rather than adopting a standard style that is already well supported. One of the drawbacks of using RefWorks as a resource list system was that each reference in each folder was a separate entity, meaning that any change to a resource (its name, for example) had to be updated in every list/folder. So it taught us quite a bit about what we ideally wanted from a resource list management/link management system.

Reference management has changed massively in the past few years with web-based tools such as Zotero, Refme and Mendeley becoming more common, and Microsoft Office providing support for reference management.  So the need to provide institutional systems maybe has passed when so many are available on the web.   And I think it reflects how any tool or product has a lifecycle of development, adoption, use and retirement.  Maybe that cycle is now much shorter than it would have been in the past.



It was great to see this week that the latest opportunity on the Jisc Elevator website is one for students to pitch new technology ideas. It's really nice to see something that involves students in coming up with ideas, backed up with a small amount of money to kickstart things.

Using students as co-designers for library services, particularly in relation to websites and technology, is something that I'm finding more and more compelling. A lot of the credit for that goes to Matthew Reidsma from Grand Valley State University in the US, whose blog 'Good for whom?' is pretty much essential reading if you're interested in usability and improving the user experience. I'm starting to see that getting students involved in co-designing services is the next logical step on from usability testing. So instead of a process where you design a system and then test it on users, you involve them from the start: asking them what they need, getting them to feed back on solution designs and specifications, and then having them look at every stage of the design process of prototyping, testing and iterating. It's something that an agile development methodology particularly lends itself to. Examples are also starting to appear where people employ students on the staff to help capture that student 'voice'.

There are some examples of fairly recent projects where Universities have been getting students (and others outside the institution) involved in designing services, so for example the Collaborate project at Exeter that looked at using students and employers to design ’employability focussed assessments’.  There is also Leeds Metropolitan with their PC3 project on the personalised curriculum and Manchester Metropolitan’s ‘Supporting Responsive Curricula’ project.    And you can add to that list of examples the Kritikos project at Liverpool that I blogged about recently.

For us, with our focus on websites and improving the user experience, we've been working with a group of students to help us with designing some tools for a more personalised library experience. I blogged a bit about it earlier in the year. We're now well into that programme of work and have put together a guest blog post for Jisc's LMS Change project blog, 'Personalisation at the Open University'. Thanks to Ben Showers from Jisc and Helen Harrop from the LMS Change project for getting that published. Credit for the work on this (and the text for the blog post) should go to my colleagues Anne Gambles, Kirsty Baker and Keren Mills. Having identified some key features to build, we are well into finalising the specification for the work and will start building the first few features soon. It's been an interesting first foray into working with students as co-designers and one that I think has major potential for how we do things in the future.

A quick trip to Manchester yesterday to take part in a symposium at ALT-C on 'Big Data and Learning Analytics', with colleagues from the OU (Simon Buckingham Shum, Rebecca Ferguson, Naomi Jeffrey and Kevin Mayles) and Sheila MacNeill from JISC CETIS (who has blogged about the session here).

It was the first time I’d been to ALT-C and it was just a flying visit on the last morning of the conference so I didn’t get the full ALT-C experience.  But I got the impression of a really big conference, well-organised and with lots of different types of sessions going on.  There were 10 sessions taking place at the time we were on, including talks from invited speakers.  So lots of choice of what to see.

But we had a good attendance at the session, with a good mix of people and a good debate and questions during the symposium. Trying both to summarise an area like Learning Analytics and to give people an idea of the range of activities going on is tricky in a one-hour symposium, but hopefully we gave enough of an idea of some of the work taking place and some of the issues and concerns that there are.

Cross-over with other areas
Sheila had a slide pointing out the overlaps between the Customer Relationship Management systems world, Business Intelligence and Learning Analytics, and it struck me that there's also another group in the Activity Data world that crosses over. Much of the work I mentioned (RISE and Huddersfield's fantastic work on library impact) came out of JISC's Activity Data funding stream, and some of the synthesis project work has been 'synthesised' into a website, 'Exploiting activity data in the academic environment'. Many of the lessons learnt listed there, particularly around what you can do with the data, are equally relevant to Learning Analytics. JISC are also producing an Activity Data report in the near future.

Interesting questions
A lot of the questions in the session were as much about the ethics as the practicality. Particularly interesting was the idea that Learning Analytics risks encouraging a view that so much can be boiled down to a set of statistics, which sounded a bit like norms to me. The sense-making element seems to be really key, as with so much data and statistics work.

I'd also talked a bit about being able to use the data to make recommendations, something we had experimented with in the RISE project. It was interesting to hear views about the danger that recommendations narrow rather than expand choice: as people are encouraged to select from a list of recommendations, their selections reinforce those recommendations, leading to a feedback loop. If you are making recommendations based on what people on a course looked at then I'd agree that it is a risk, especially as people are often going to be looking at resources that they have to look at for their course anyway.

When it comes to other types of recommendations (such as 'people looking at this article also viewed this other article', or 'people searching for this term looked at these items') there is still some chance of recommendations reinforcing a narrow range of content, but I'd suggest there is also some chance of serendipitous discovery of material that you might not ordinarily have seen. I'm aware that we've only really scratched the surface with recommendations, using simple algorithms designed around the idea that the more people who followed a pattern, the better the recommendation. But it may be that more complex algorithms that throw in some 'randomness' would be useful.
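To make that concrete, here's a minimal sketch of the kind of simple co-occurrence recommender described above, with an 'explore' parameter that occasionally swaps in a less popular item to inject some of that 'randomness'. The function, parameters and session data are all invented for illustration; this is not the actual RISE code.

```python
import random
from collections import Counter

def recommend(target, sessions, k=3, explore=0.25, rng=None):
    """Recommend items co-viewed with `target` across user sessions.

    Mostly returns the most frequently co-viewed items, but with
    probability `explore` swaps the last slot for a random, less
    popular co-viewed item, to allow some serendipitous discovery.
    """
    rng = rng or random.Random()
    co_views = Counter()
    for session in sessions:
        if target in session:
            co_views.update(i for i in session if i != target)
    ranked = [item for item, _ in co_views.most_common()]
    picks = ranked[:k]
    if len(ranked) > k and rng.random() < explore:
        picks[-1] = rng.choice(ranked[k:])  # the 'randomness' slot
    return picks

# Hypothetical sessions: each list is the articles one user viewed.
sessions = [
    ["art1", "art2", "art3"],
    ["art1", "art2"],
    ["art1", "art4"],
    ["art2", "art3"],
]
print(recommend("art1", sessions, k=2, explore=0.0))  # most co-viewed first
```

With `explore=0.0` this is the plain popularity-based recommender; raising `explore` trades a little accuracy for a chance of surfacing material the user might not otherwise have seen.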

One of the things I think is useful about the concept of recommendations is that people largely accept them (and perhaps expect them), as they are ubiquitous on sites like Amazon. And I wonder if you could almost consider them a personalisation feature that signals your service is modern, up-to-date and engaging with users. For many library systems that still look old-fashioned and 'librarian'-orientated, perhaps it is equally important to be seen to have these types of features as standard.

Update: Slides from the introductory presentation are here

One of the really useful things about being involved with JISC-funded projects is that you get to take part in programme meetings, which often lead to finding out about interesting tools that I probably wouldn't otherwise have come across. So last week I was with the STELLAR project team at the programme meeting for the 'Enhancing the Sustainability of Digital Content' programme, where we were introduced to the Harvard Business School Elevator Pitch Builder tool. For anyone who hasn't come across the 'Elevator Pitch', the idea is that you have the length of a journey in an elevator (lift) to make your pitch for your project or idea, the thinking being that you might be in a lift with the Vice Chancellor and be asked 'what do you do?'. Essentially it is a tool to get you to structure and organise a succinct pitch that gets across the key points of what you want to say.

Harvard's Elevator Pitch tool gets you to write text answering WHO, WHAT, WHY and GOAL, then analyses your pitch in terms of the number of words, the time it will take to say and how many words are repeated. It also suggests suitable words that you might want to use to get the attention of the person you are speaking to. It's a good way to get a nicely structured pitch for a project.
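This is not the Harvard tool itself, but a rough sketch of the sort of checks it performs on a pitch: word count, estimated speaking time, repeated words. The 150 words-per-minute figure is a common rule of thumb for conversational speech, not a number from the tool.

```python
import re
from collections import Counter

WORDS_PER_MINUTE = 150  # rough estimate for conversational speech

def analyse_pitch(text):
    """Report word count, approximate speaking time in seconds,
    and which words are used more than once."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    counts = Counter(words)
    repeated = {w: n for w, n in counts.items() if n > 1}
    return {
        "words": len(words),
        "seconds": round(len(words) / WORDS_PER_MINUTE * 60),
        "repeated": repeated,
    }

report = analyse_pitch("I build tools. I build linked data tools for libraries.")
print(report)
```

Flagging repeated words is surprisingly effective at tightening a pitch, since in sixty seconds every repeated word is a wasted one.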

JISC programme meetings are a really useful part of being involved in a JISC project.  You generally get the chance to find out at an early stage what the other projects in your programme strand are working on (in our case a range of digital content, from UK Web Archive big data through to archaeology, geospatial and botanical content). That can be really useful as you can find where there is common ground and make a lot of useful contacts amongst people working on similar things.  So we’ve got a few contacts to follow up in the digital libraries area.  And JISC programme managers are really useful people to know as they have a great breadth of knowledge of what is going on in several areas of work.

Lorcan Dempsey’s slides and video from the ‘Squeezed Middle’ have now been made available.  The slides are on the OCLC website here and the video of the presentation is on YouTube here and embedded below.

The video was used during the ‘Squeezed Middle’ workshop to introduce the initial piece of work to look at trends in terms of Collections, Space, Systems and Expertise/Services. 

Reflections on the presentation
The contrast between libraries that grew up at an institutional scale and the challenges they now face from organisations that are the product of the webscale network environment was interesting to hear articulated in this way. Drawing lessons from how other industries have had to adapt, Lorcan referenced John Hagel's 'Unbundling the Corporation' paper from the Harvard Business Review in 1999 and talked about the trend towards greater specialisation. Talking about the three elements of customer engagement, innovation and infrastructure, and offering up a number of interesting examples from the University of Michigan and elsewhere, Lorcan offered the view that priorities for libraries should be around engagement and innovation, with less effort going into infrastructure.

I was quite interested to hear the Discovery solutions characterised as ‘data wells’ with an intriguing question about what other content aggregators such as Thomson and Elsevier might do.  Picking up on the point about a key factor being ‘disclosure’ of your content to the network-scale services such as Google Books and Google Scholar (with a comment about 75% of Minnesota’s SFX requests coming from outside the institution), it does make me wonder what the longer-term role might be of the current generation of ‘Discovery’ services.

Following on from the JISC/SCONUL 'Squeezed Middle' workshop that I blogged about earlier, Paul Stainthorp has blogged about his experience and included the paper he presented on his blog here. Ben Showers, from JISC, has also blogged about the event here on the JISC Digital Infrastructure Team blog. Links to Ken Chad's [update to the update: now available here] and David Kay's provocations/presentations and Lorcan Dempsey's video are also promised. There's also a useful list of the priorities that came out of the workshop, put together by David Kay, here. This list sets out the priorities in five different areas: ebooks, non-traditional assets, end-user applications, library roles and above-campus services.

New JISC call
One of the motivations behind the workshop was to help inform both JISC and the HE community about a new JISC call (12/01) that includes a couple of LMS strands. One covers a project to create "a new vision for the future of library systems and a 'roadmap' for the delivery of that vision". There certainly seems to be a lot more activity in the LMS area at the moment, with new products, open source solutions and shared systems. The second strand covers a set of "pathfinder projects to investigate a broad range of potential new models and approaches to library systems and services". The themes within this area cover shared library systems, emerging tools and technologies, and emerging library systems opportunities. The call paper touches on quite a wide range of aspects, from reference management through to data. A lot of potential for some interesting ideas to emerge.

JISC Activity Data programme and Learning Analytics
A couple of things this week about the activity data projects that JISC funded last year as part of their Information Environment programme. I noticed that Huddersfield are going to be doing some more work on LIDP (the Library Impact Data Project) over the next few months. This phase two includes work on more data sources and a possible shared data service. The screenshot on the left lists the work they are planning to do; more details on their blog. It will be interesting to see how this goes.

On Tuesday this week we did a short lunchtime session for library and other OU staff on the work we did last year on the RISE activity data project. I did a short presentation on what we did in the project, and Liz (@LizMallett) covered the user evaluation and feedback. We also had a presentation by Will Woods (@willwoods) from IET on the University's work around Learning Analytics. Learning Analytics has now become an important project for the university and it will be interesting to see how it moves forward in the next few months. There is a short blog post on the event on the RISE blog here that includes embedded links to the presentations on RISE.

Moving forward with Activity Data
Since RISE finished we’ve been looking at ways of embedding some of the recommendation ideas into our mainstream services. We’ve still been routinely adding EZProxy data into the RISE database.  At the moment we are moving the RISE prototype search interface and the Google gadget across to a new web server as we are closing down the old library website. That should keep the search prototype running for a bit more time. It’s also a chance to tweak the code and sort out any bits that have degraded. 
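For illustration, here is a rough sketch of what routinely loading EZProxy log lines into a database might look like. EZProxy's log layout depends on its LogFormat directive, so the regular expression, sample line and table schema here are assumptions for the example, not the actual RISE pipeline.

```python
import re
import sqlite3

# Assumes an Apache-style log line: the exact fields depend on how
# EZProxy's LogFormat directive is configured at your institution.
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<when>[^\]]+)\] "GET (?P<url>\S+) [^"]*"'
)

def load_ezproxy_log(lines, db):
    """Parse log lines and store one usage row per successful match."""
    db.execute("CREATE TABLE IF NOT EXISTS usage (user TEXT, url TEXT, at TEXT)")
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            db.execute(
                "INSERT INTO usage VALUES (?, ?, ?)",
                (m.group("user"), m.group("url"), m.group("when")),
            )
    db.commit()

db = sqlite3.connect(":memory:")
sample = [
    '10.0.0.1 - ab1234 [01/Feb/2012:10:15:00 +0000] '
    '"GET http://journal.example.org/article/42 HTTP/1.1" 200 5120',
]
load_ezproxy_log(sample, db)
print(db.execute("SELECT user, url FROM usage").fetchall())
```

In practice you would anonymise or pseudonymise the user identifier before storing it, given the ethical questions around activity data discussed elsewhere on this blog.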

Our website developer (@beersoft) has been building some new features based on the ideas around using activity data. The live library website already displays dynamic lists of resources at a title level in the library resources section.

One of the prototypes takes the standard resource lists (which are at a title level) and shows the most recently viewed articles from those journals, using the data from the RISE database. The screenshot shows one of the current prototypes. So users would not only see the relevant journal title (with a link at the title level), but would also see the most recently viewed articles from that journal. For users that are logged in it would also be feasible to show the articles viewed by people on their course, or even their own recently viewed articles.
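As a sketch of how such a prototype might query the activity data, the snippet below pulls the most recently viewed articles for a journal on a resource list. The schema and example rows are invented for illustration, not the actual RISE database.

```python
import sqlite3

# Hypothetical RISE-style activity data: one row per article view.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE views (journal TEXT, article TEXT, viewed_at TEXT)")
db.executemany("INSERT INTO views VALUES (?, ?, ?)", [
    ("Ariadne", "Linked data in libraries", "2012-03-01"),
    ("Ariadne", "Activity data services", "2012-03-05"),
    ("Ariadne", "Discovery interfaces", "2012-02-20"),
    ("D-Lib", "Preservation at scale", "2012-03-02"),
])

def recent_articles(journal, limit=2):
    """Most recently viewed articles from a journal, newest first."""
    rows = db.execute(
        "SELECT article FROM views WHERE journal = ? "
        "ORDER BY viewed_at DESC LIMIT ?",
        (journal, limit),
    )
    return [r[0] for r in rows]

print(recent_articles("Ariadne"))
```

The course-level and personal variants mentioned above would just add a `WHERE` clause on a course or user column once the viewer is logged in.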

We've been starting to think about how best to present these new ideas on the website, as we want to gauge user reactions to them. Thinking at the moment is that we want to keep them separate from the 'production' services, so we would have them in a separate 'innovation' or 'beta' space. I quite like the Cornell CULLABS or the Harvard Library Innovation Lab as a model to follow.

The proposition
I've spent the last couple of days at a fascinating JISC/SCONUL workshop, 'The Squeezed Middle? Exploring the future of Library Systems'. 'The Squeezed Middle' refers to the concentration of attention in recent months on electronic resource management (in the SCONUL Shared Services and Knowledge Base+ activities) and discovery systems (such as Summon, EDS and Primo), which has rather taken the focus away from other library systems, notably the Library Management System. In part, it was explained, this was deliberate: developments in open source LMS (such as Kuali OLE and Evergreen) and new systems such as Alma from Ex Libris, which look at unifying print, electronic and digital resource management, have been (and still are) in development and need some time to mature. But we are now starting to see these developments moving on, and open source starting to be adopted (by Staffordshire University library, for example). So the time is right to start to focus on these systems afresh.

The workshop
Punctuating the workshop were a series of deliberately provocative and challenging 'visions' of the future library of 2020, and a video from Lorcan Dempsey. [Paul Walk has blogged his here.] Against this background we looked at topics such as collections, space, systems and expertise around the library systems domain. Overnight we looked at a series of sixty-odd themes and activities, and followed that up today by looking at the prioritisation and value of those activities to try to understand what the priority tasks might be.

A few things came to mind for me during and after the workshop. Firstly, there maybe isn't a clear definition of the boundaries of this space, and really no common view of which aspects of print/electronic/digital processes and collections we are scoping and addressing. It also struck me that a lot of the issues, concerns and priorities were about data rather than systems or processes: topics such as licences for ebooks, open bibliographic metadata, passing data to institutional finance systems and activity data, for example. I do find it particularly interesting that despite the effort that goes into the data that libraries consume, there are some really big tasks to address to flow data around our systems without duplication or unnecessary activity. (Incidentally, there's a concept used in customer care termed 'unnecessary contact', and there used to be a National Indicator, NI14, under which local government had to reduce unnecessary contact: in other words, reduce the instances where customers have to contact you for further clarification, so that you deal with the issue at the first point of contact. I start to wonder whether there's a similar concept we might apply to libraries when we carry out extra processing and cataloguing instead of taking 'shelf-ready' books and downloaded bibliographic records. Unnecessary refinements, maybe?)

I also found it interesting how the topic of reading list solutions came up as a hot issue. It's a particular interest of mine given my involvement in the OU's TELSTAR reference management project. The Reading-List-Solutions JISCMail list has been busy in the last week talking about the various systems (often developed in-house). And it was really fascinating to see how such a fundamental and time-consuming part of our daily work hasn't really been solved, let alone been integrated completely into the procurement and discovery workflow. Although I know there's some significant complexity there, I find it particularly strange that it hasn't been a feature built into the LMS.

Final thoughts on library systems of the future

It seems to me that there are some general principles that you could think about for future library systems in this space. And I suppose I'm thinking beyond the next generation of systems such as Alma. These may be completely off-the-wall ideas, but there are a few things that come to mind as we move towards 2020. So what might a 2020 LMS look like?

> the systems are component’ised (think Drupal CMS), so both libraries and users can choose which components they use.  And they are largely about flowing data, workflow and process rather than about storing data.

> users control their own profiles (and data) – we (institutions) give them a ‘key’ to access collections we have paid for (so authentication is at a network level or with aggregators?)

> catalogues are distributed – linked data uses the most appropriate vocabularies, most not even run by libraries – local elements are added at the time you choose to procure – there is no 'catalogue record' as such but a collection of descriptive elements – you choose where you get your records from, but you don't download them to 'your' LMS database

> discovery interface is at the choice of the user – collections are packaged/streamed? and contributed to the aggregators

> rather than a model where libraries buy licensed content and then run systems for their users to access that content – so all institutions largely duplicate their systems – the content owners/aggregators provide the access maybe? as they already start to do with discovery systems?

> there is a 'rump' of an LMS database that is your audit trail of transactions and holdings (but with network-level unique IDs that link to descriptive data held at the network level) – statistics are held in the cloud (JUSP+++)

> so we contribute our special digital and electronic collections – either to national scale repositories or to open discovery systems?

Maybe fanciful and not very realistic, but something that is a world away from the monolithic LMS that even the open source and new-generation systems seem to be building.

All round it was a really good and enjoyable workshop and I’m glad I had the opportunity to go.  I hope the stuff we’ve done helps to inform the future thinking and directions.  Thanks to SCONUL and JISC for organising/funding it and to Ben Showers and David Kay.

Latest project
From February I'm going to be involved in a new project, STELLAR (Semantic Technologies Enhancing the Lifecycle of LeArning Resources), funded by JISC. In some ways the project connects with previous work I've been involved with: like the Lucero project it will be employing linked data, and it will be working with learning materials, where I've had some involvement with our production and presentation learning systems through the VLE. But STELLAR will be dealing with an area that is new to me, in that we'll be looking at my institution's store of legacy learning materials. So it's a good opportunity to learn more about curation, preservation and digital lifecycles.

STELLAR is particularly going to be looking at trying to understand the value of those legacy learning materials by talking to the academics who have been involved in creating them. There are quite a few reasons why older course materials may still have value: they might be reusable in new courses, on the basis that reusing old materials might be less costly than creating new ones; they might have value in being transformed into Open Educational Resources; or they might have value as good historical examples of styles of teaching and learning. So STELLAR will be exploring different types and models of expressing the value of those materials.

Finding out about the value that is placed on these materials can also be an important factor when trying to understand which materials to preserve as a priority, or where you should expend your resources, and we’d hope that STELLAR would help to inform HE policies as institutions build up increasing amounts of digital learning materials.

As part of STELLAR we will be taking some digital legacy learning material and transforming it into linked data (with some help from our friends in KMi). This gives us the opportunity to connect old course materials into the OU's ecosystem by linking to existing datasets on current courses and OER material in OpenLearn. By transforming the content in this way we can then explore whether making it more discoverable changes the value proposition, makes the content more likely to be reused, or opens up other possibilities. It should be an interesting project and one that I'm looking forward to, as there are going to be a lot of opportunities to build up my understanding of these issues and aspects.
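As a purely illustrative sketch of what 'transforming legacy course material into linked data' can mean, the snippet below turns a hypothetical legacy course record into subject-predicate-object triples. The URIs, course codes and vocabulary terms are all made up for the example; they are not the OU's actual datasets or the STELLAR modelling.

```python
# Illustrative only: a legacy course record expressed as RDF-style triples.
COURSE = "http://example.org/legacy-course/A103"

legacy_record = {
    "title": "An Introduction to the Humanities",
    "presented": ["1998", "1999", "2000"],
    "succeeded_by": "http://example.org/course/AA100",
}

def to_triples(subject, record):
    """Flatten a record into (subject, predicate, object) triples,
    so the course can be linked to current-course and OER datasets."""
    triples = [
        (subject, "rdf:type", "ex:LegacyCourse"),
        (subject, "dcterms:title", record["title"]),
        (subject, "ex:succeededBy", record["succeeded_by"]),
    ]
    triples += [(subject, "ex:presentedIn", year) for year in record["presented"]]
    return triples

for s, p, o in to_triples(COURSE, legacy_record):
    print(s, p, o)
```

Once the material is in this shape, the `ex:succeededBy` link is what lets an old course sit alongside data about current courses and OpenLearn content, which is where the discoverability question comes in.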

I blogged nearly a month ago some reflections on our latest funding bid, and sitting at the FOTE conference yesterday an email popped up with the outcome of the bid. [I'm not sure why, but I'm still not really used to the pervasive nature of modern email access. I suppose although there has been remote access to systems for a long time, through dial-in, VPN and suchlike, there has always been a bit of a process involved: logging into the VPN or a website and then opening up an email client seemed a bit laborious, or at any rate laborious enough to be able to put off doing it. But with tablets and smartphones, email just appears along with tweets and other messages. It just seems a bit easy now. I guess I'm still not used to the ease of working remotely, something you now take for granted. But when you think about it, it's actually something pretty remarkable.]

Anyway, I was a bit surprised to hear about the outcome of the bid so quickly, but really pleased to hear that we were successful.  So, something else new to do, that’s really exciting for us and I’m sure I’ll probably blog a bit about in the next few months as we get going on project MACON.
