Absence from blogging over the last few months feels very much like some form of winter hibernation, but it’s mainly been a case of not having much time for reflection in the middle of a library management system implementation.  We haven’t quite finished yet but are a long way through the process and have been live on a cloud-based LMS for just over a month. So I can try to put together some early thoughts about the process and experience.

Time

I worked on our project proposal around Christmas 2012 for a project we termed Library Futures that included a library management system and discovery procurement and implementation.  But that wasn’t really the start of the process.  We’d spent a bit of time looking at what our needs were and working with some consultants to get a better idea of the best options for us.  I’d also had some involvement with the Jisc LMS Change project, all of which helped us to understand what was out there and what our options were.  So that takes us back into 2011 and maybe a bit earlier.  And a lot of the thinking was about the best timing for changing systems as the LMS market was in the early stages of the ‘Software as a Service’ reinvention and products were (and maybe still are) at an early stage.  So by my rough calculation that’s a couple of years in the planning, followed by a year to secure approval, followed by an eighteen month or so procurement and implementation stage.  It takes a long time and a lot of effort, and the final stage of implementation isn’t the most time-consuming part.

Process
In the procurement stage we went the full EU tender route and for our requirements catalogue (specification) made extensive use of the LibTechRFP exemplars (http://libtechrfp.wikispaces.com/), not just the UK Core Specification but also the examples for the Library Services Platform, Electronic Resources Management and Search and Discovery.  We also needed to add in our own requirements and cut out features aimed more at a traditional ‘physical’ university.  We ended up with quite a large and detailed catalogue of requirements, but I’ve always felt that level of detail to be important for library systems (and not just because the successful tender response forms part of our contract).  Library management systems have to cover a lot of functions and it’s important to get into the detail to understand what using a system will mean for you in practice.  Interesting to me, though, was to find some of our search requirements already being reused in another system’s requirements document in the institution.
Tools
I’m always on the lookout for useful new tools for projects, websites and so on.  So it was good to see a tool like Basecamp being used by the supplier we chose. It isn’t a free tool (other than for an initial period) but it worked well as a way of sharing files and having the sort of discussions that you need when going through the implementation process.  I felt the to-do list feature worked a bit less well.  As a communication tool it worked neatly without being too formal or time-consuming.  We’ve ended up using it on two different projects with two entirely different suppliers, so it is obviously doing something right.

Other thoughts
Final thoughts for the moment are about the range of skills needed in a team putting in an LMS.  There are some obvious ones, such as systems and IT knowledge, procurement and project management, and for libraries knowledge of acquisitions, cataloguing/metadata and circulation processes.  But there are also ones that can get overlooked, around training expertise, administrative support, decision making, business analysis and data quality.  And above all, some determination and team spirit to get through an immense to-do list.

The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 11,000 times in 2014. If it were a concert at Sydney Opera House, it would take about 4 sold-out performances for that many people to see it.


At the end of November I was at a different sort of conference to the ones I normally get to attend.  This one, Design4learning, was held at the OU in Milton Keynes but was a more general education conference.  Described as a conference that “aims to advance the understanding and application of blended learning, design4learning and learning analytics”, Design4learning covered topics such as MOOCs, elearning, learning design and learning analytics.

There was a useful series of presentations at the conference and several of them are available from the conference website.  We’d put together a poster for the conference talking about the work we’ve started to do in the library on ‘library analytics’, entitled ‘Learning Analytics – exploring the value of library data’, and it was good to talk to a few non-library people about the wealth of data that libraries capture and how that can contribute to the institutional picture of learning analytics.

Our poster covered some of the exploration that we’ve been doing, mainly with online resource usage from our EZProxy logfiles.  In some cases we’ve been able to join that data with demographic and other data from surveys to start to look in a very small way at patterns of online library use.
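To give a flavour of what that joining involves, here is an illustrative sketch only (not our actual scripts): the log pattern and the students.csv file with its user and level columns are assumptions, and EZProxy log formats are configurable, but something along these lines in Python will summarise proxied accesses by study level.

```python
import csv
import re
from collections import Counter
from urllib.parse import urlparse

# Hypothetical log line pattern: host, ident, user, [timestamp], "request", status, bytes.
# EZProxy log formats are configurable, so the exact fields may differ.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+)[^"]*" (?P<status>\d{3}) \S+'
)

def parse_ezproxy_log(path):
    """Yield (user, resource_host) pairs from an EZProxy access log."""
    with open(path) as f:
        for line in f:
            m = LOG_PATTERN.match(line)
            if m and m.group("user") != "-":
                yield m.group("user"), urlparse(m.group("url")).netloc

def load_demographics(path):
    """Load a hypothetical anonymised CSV keyed on the same user identifier."""
    with open(path, newline="") as f:
        return {row["user"]: row for row in csv.DictReader(f)}

if __name__ == "__main__":
    demographics = load_demographics("students.csv")   # e.g. columns: user, level, faculty
    usage_by_level = Counter()
    for user, host in parse_ezproxy_log("ezproxy.log"):
        profile = demographics.get(user)
        if profile:
            usage_by_level[profile["level"]] += 1       # count accesses per study level
    for level, count in usage_by_level.most_common():
        print(level, count)
```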

Design4learning conference poster

The poster also highlighted the range of data that libraries capture and the sorts of questions that could be asked and potentially answered.  It also flagged up the leading-edge work by projects such as Huddersfield’s Library Impact Data Project and the work of the Jisc Lamp project.

An interesting conference and an opportunity to talk with a different group of people about the potential of library data.

Photograph of office buildings at Holborn Circus

Holborn Circus – I was struck by the different angles of the buildings

Themes

For me two big themes came to mind after this year’s Future of Technology in Education Conference (FOTE). Firstly, around creativity, innovation and co-creation; and secondly, around how fundamental data and analytics are becoming.

Creativity, innovation and co-creation

Several of the speakers talked about innovation and creativity.  Dave Coplin talked of the value of Minecraft and Project Spark and the need to create space for creativity, while Bethany Koby showed us examples of some of the maker kits ‘Technology Will Save Us’ are creating.

Others talked of ‘flipping the classroom’ and learning from students, as well as co-creation, and it was interesting that in the tech start-up pitchfest a lot of the ideas were student-created tools, some working in the area of collaborative learning.

Data and analytics

The second big trend for me was about analytics and data.  I was particularly interested to see how many of the tools and apps being pitched at the conference had an underlying layer of analytics.  Evaloop, which was working in the area of student feedback; Knodium, a space for student collaboration; Reframed.tv, offering interaction and sharing tools for video content; Unitu, an issues-tracking tool; and MyCQs, a learning tool, all seemed to make extensive use of data and analytics, while Fluency included teaching analytics skills.  It is interesting to see how many app developers have learnt from Amazon and Google the value of the underlying data.

Final thoughts and what didn’t come up at the conference

I didn’t hear the acronym MOOC at all – slightly surprising as it was certainly a big theme of last year’s conference.  Has the MOOC bubble passed, or is it just embedded into the mainstream of education?  Similarly with Learning Analytics as a specific theme: analytics and data were certainly mentioned (as I’ve noted above), but Learning Analytics itself got not a single mention – maybe it’s embedded into HE practice now?

Final thoughts on FOTE.  A different focus to previous years but still with some really good sessions and the usual parallel social media back-channels full of interesting conversations. Given that most people arrived with at least one mobile device, power sockets to recharge them were in rather short supply.

Friday in early October, so it must be time for ULCC’s Future of Technology in Education (FOTE) at Senate House in London. I’ve been fortunate to be able to go several times, but it is always a scramble to get one of the scarce tickets when they are released on Eventbrite during August. They often seem to be released when I am away on holiday, so I’ve booked my FOTE ticket from a variety of places over the years.

The conference usually gives a good insight into the preoccupations of educational technologists at a particular time. In some ways I tend to use it as a bit of a checklist as much as a conference that surfaces completely new things. So it is a case of looking at the trends and thinking about how they are relevant to us, what we are doing in that area, and whether there are other things we need to be thinking about.

Current preoccupations in this area are certainly around the practicalities, ethics and so on of learning analytics. Interesting to see that Arkivum are here with a stand, which reflects a current preoccupation with Research Data Management.

I know I haven’t been blogging much since the Summer, mainly due to too many other things going on, a new library management system and discovery system implementation primarily. So I want to find a bit of time to reflect on FOTE and our new LMS.


To Birmingham at the start of last week for the latest Jisc Library Analytics and Metrics Project (http://jisclamp.mimas.ac.uk/) Community Advisory and Planning group meeting.  This was a chance to catch up on both the latest progress and the latest thinking about how this library analytics and metrics work will develop.

At a time when learning analytics is a hot topic it’s highly relevant for libraries to consider how they might respond to the challenges it raises.  The 2014 Horizon report has learning analytics in the category of one year or less to adoption and describes it as ‘data analysis to inform decisions made on every tier of the education system, leveraging student data to deliver personalized learning, enable adaptive pedagogies and practices, and identify learning issues in time for them to be solved.’

LAMP is looking at library usage data of the sort that libraries collect routinely (loans, gate counts, eresource usage) but combines it with course, demographic and achievement data to allow libraries to start to be able to analyse and identify trends and themes from the data.

LAMP will build a tool to store and analyse data and is already working with some pilot institutions to design and fine-tune the tool.  We got to see some of the work so far and input into some of the wireframes and concepts, as well as hear about some of the plans for the next few months.

The day was also the chance to hear from the developers of a reference management tool called RefMe (www.refme.com).  This referencing tool is aimed at students, who often struggle with the typically complex requirements of referencing styles and tools.  To hear about one-click referencing, with thousands of styles and with features to integrate with MS Word, or to scan a barcode and reference a book, was really good.  RefMe is available as an iOS or Android app and as a desktop version.  As someone who’s spent a fair amount of time wrestling with the complexities of referencing in projects that have tried to get simple referencing tools in front of students, it is really good to see a start-up tackling this area.

There seems to have been a flurry of activity around reading list systems in recent weeks.  There’s the regular series of announcements of new customers for Talis Aspire, which seems clearly to be the market leader in this class of systems, but there have also been two particular examples of the integration of reading list systems into Moodle.

Firstly, the University of Sussex have been talking about their integration of Aspire into Moodle.  Slides from their presentation at ALRG are available from their repository.  There is also a really good video that they’ve put together that shows how the integration works in practice.  The video shows how easy it seems to be to add a section from a reading list directly into a Moodle course.  It looks like a great example of integration that seems mostly to have been done without using the Aspire API.  One question I’d have about the integration is whether it automatically updates if changes are made to the reading list, but it looks like a really neat development.

The other reading list development comes from EBSCO with their Curriculum Builder LMS plugin for EBSCO Discovery.  There’s also a video for this showing an integration with Moodle.  This development makes use of the IMS Learning Tools Interoperability (LTI) standard to achieve the integration.  The approach mainly seems to be looked at from the discovery system, with features to let you find content in EBSCO Discovery and then add it to a reading list, rather than being a separate reading list builder system.  It’s interesting to see the tool being looked at from the perspective of a course creator developing a reading list, and useful to have features such as notes for each item on a list.  What looks to be different from the Sussex approach is that when you go to the reading list from within Moodle you are taken out of Moodle and don’t see the list of resources in-line.
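For background, an LTI 1.1 ‘basic launch’ of this kind is essentially a signed form POST from the VLE to the tool: the launch parameters are signed with OAuth 1.0a (HMAC-SHA1) using a shared consumer key and secret.  Below is a rough Python sketch of how such a launch could be signed – the URL, key, secret and parameter values are all made up for illustration, and real launches carry many more fields.

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote

def sign_lti_launch(launch_url, params, consumer_key, consumer_secret):
    """Add OAuth 1.0a (HMAC-SHA1) signature parameters to an LTI 1.1 launch POST."""
    oauth_params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth_params}
    # Percent-encode, sort and join key=value pairs as required by OAuth 1.0a.
    encoded = sorted((quote(k, safe=""), quote(v, safe="")) for k, v in all_params.items())
    param_string = "&".join(f"{k}={v}" for k, v in encoded)
    base_string = "&".join(["POST", quote(launch_url, safe=""), quote(param_string, safe="")])
    key = f"{quote(consumer_secret, safe='')}&"   # no token secret for a basic launch
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params   # POST these as form fields to launch_url

# Hypothetical launch parameters; not EBSCO's or Moodle's actual configuration.
launch = sign_lti_launch(
    "https://tool.example.org/lti/launch",
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "reading-list-123",
        "user_id": "student-42",
    },
    consumer_key="my-key",
    consumer_secret="my-secret",
)
print(launch["oauth_signature"])
```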

There’s a developing resource bank of information on Helibtech at http://helibtech.com/Reading_Resource+lists that is useful for keeping an eye on developments in this area.

Liblink admin screen

The approach we’ve been taking is with a system called Liblink (which, incidentally, was shortlisted this year in the Times Higher Education Leadership and Management Awards for Departmental ICT Initiative of the Year).  Liblink developed out of a system created to manage dynamic content for our main library website, for pages like http://www.open.ac.uk/library/library-resources/statistics-sources

The concept was to pull resources from a central database that was being updated regularly with data from systems such as SFX and the library catalogue.  This ensured that the links were managed and that there was a single record for each resource.  It then became obvious that the system, with some development, could replace a clutch of different resource list and linking systems that had been adopted over the years and could be used as our primary tool for managing links to resources.  The tool is designed to let us push out lists of resources using RSS so they can be consumed by our Moodle VLE, but it also offers other formats such as HTML, plain text and RIS.
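As a rough illustration of that ‘one record, many output formats’ idea (the field names and records below are made up, not Liblink’s actual schema), the same resource list could be rendered as RIS or as a minimal RSS feed along these lines.

```python
from xml.sax.saxutils import escape

# Hypothetical resource records; Liblink's real schema will differ.
resources = [
    {"title": "Statistics sources guide", "url": "https://example.org/stats", "year": "2014"},
    {"title": "Company information", "url": "https://example.org/company", "year": "2013"},
]

def to_ris(items):
    """Render records as RIS; each record ends with an ER  - line."""
    lines = []
    for r in items:
        lines += [
            "TY  - ELEC",
            f"TI  - {r['title']}",
            f"PY  - {r['year']}",
            f"UR  - {r['url']}",
            "ER  - ",
        ]
    return "\n".join(lines)

def to_rss(items, list_title):
    """Render records as a minimal RSS 2.0 feed for a VLE to consume."""
    entries = "".join(
        f"<item><title>{escape(r['title'])}</title><link>{escape(r['url'])}</link></item>"
        for r in items
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        f'<rss version="2.0"><channel><title>{escape(list_title)}</title>'
        f"<description>Resource list</description>{entries}</channel></rss>"
    )

print(to_ris(resources))
print(to_rss(resources, "Statistics sources"))
```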

I picked up over the weekend via the No Shelf Required blog that EBSCO Discovery usage data is now being added into Plum Analytics.  EBSCO’s press release talks about providing “researchers with a much more comprehensive view of the overall impact of a particular article”.  Plum Analytics was fairly recently taken over by EBSCO, so it’s not so surprising that they’d be looking at how EBSCO’s data could enhance the metrics available through Plum Analytics.

It’s interesting to see the different uses that activity data in this sphere can be put to.  There are examples of it being used to drive recommendations, such as hot articles, or Automated Contextual Research Assistance, and LAMP is talking of using activity data for benchmarking purposes.  So you’re starting to see a clutch of services being driven by activity data, just as the likes of Amazon drive so much of what appears on their sales site with data from customer activity.
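As a toy example of the sort of pattern behind ‘people who viewed this also viewed…’ features (purely illustrative – none of the services above publish their methods), you can get surprisingly far just by counting which items co-occur in the same users’ activity.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical activity data: which items each user accessed.
activity = {
    "user1": {"article-a", "article-b", "article-c"},
    "user2": {"article-a", "article-c"},
    "user3": {"article-b", "article-c"},
}

# Count how often each pair of items is used by the same person.
co_counts = defaultdict(int)
for items in activity.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1

def recommend(item, top_n=3):
    """Return the items most often seen alongside the given item."""
    scores = {
        (b if a == item else a): n
        for (a, b), n in co_counts.items()
        if item in (a, b)
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("article-a"))   # e.g. ['article-c', 'article-b']
```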

Beadnell waders

For a few months now we’ve been running a project to look at student needs from library search.  The idea behind the research is that we know students find library search tools difficult compared with Google – we know it’s a pain point.  But we don’t actually know in much detail what it is about those tools that students find difficult, what features they really want to see in a library search tool, and what they don’t want.  So we’ve set about trying to understand more about their needs.  In this blog post I’m going to run through the approach that we are taking.  (In a later blog post I hope to cover some detail of the things that we are learning.)

Approach
Our overall approach is that we want to work alongside students (something that we’ve done before in our personalisation research) in a model that draws a lot of inspiration from a co-design approach. Instead of building something and then usability testing it with students at the end, we want to involve students at a much earlier stage in the process so that, for example, they can help to draw up the functional specification.

We’re fortunate in having a pool of 350 or so students who agreed to work with us for a few months on a student panel.  That means we can invite students from the panel to take part in research or give us feedback on a small number of different activities.  Students don’t have to take part in a particular activity, but being part of the panel means that they are generally predisposed to working with us.  So we’re getting a really good take-up of our invitations – I think that so far we’ve had more than 30 students involved at various stages, which gives us a good breadth of opinions from students studying different subjects, at different study levels and with different skills and knowledge.

We’ve split the research into three different stages: an initial stage that looked at different search scenarios and different tools; a second stage that drew some general features out of the first phase and tried them on students; then a third phase that creates a new search tool and undertakes an iterative cycle of develop, test, develop, test and so on.  The diagram shows the sequence of the process.

Discovery research stages

The overall direction of the project is that we should have a better idea of student needs to inform the decisions we make about Discovery, about the search tools we might build or how we might set up the tools we use.

As with any research activities with students we worked with our student ethics panel to design the testing sessions and get approval for the research to take place.

Phase One
We identified six typical scenarios: finding an article from a reference, finding a newspaper article from a reference, searching for information on a particular subject, searching for articles on a particular topic, finding an ebook from a reference, and finding the Oxford English Dictionary.  All the scenarios were drawn from activities that we ask students to do, so they used the actual subjects and references that students are asked to find.  We identified eight different search tools to use in the testing: our existing One stop search, the mobile search interface that we created during the MACON project, a beta search tool that we have on our library website, four different versions of search tools from other universities, and Google Scholar.  The tools had a mix of tabbed search, radio buttons and bento-box-style search results, chosen to introduce students to different approaches to search.

Because we are a distance learning institution, students aren’t on campus, so we set up a series of online interviews.  We were fortunate to be able to make use of the usability labs at our Institute of Educational Technology and used Teamviewer software for the online interviews.  In total we ran 18 separate sessions, with each one testing 3 scenarios in 3 different tools.  This gave us a good range of different students testing different scenarios on each of the tools.

Sessions were recorded and notes were taken so we were able to pick up on specific comments and feedback.  We also measured success rate and time taken to complete the task.  The features that students used were also recorded.  The research allowed us to see which tools students found easiest to use, which features they liked and used, and which tools didn’t work for certain scenarios.
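By way of illustration, once sessions are logged in a structured form (the records below are invented, not our actual data), pulling out success rates and median completion times per tool is straightforward.

```python
from statistics import median

# Hypothetical session records: (tool, scenario, completed, seconds taken).
sessions = [
    ("One stop search", "find article", True, 95),
    ("One stop search", "find ebook", False, 240),
    ("Google Scholar", "find article", True, 60),
    ("Beta search", "find article", True, 120),
]

def summarise(records):
    """Print success rate and median completion time per tool."""
    by_tool = {}
    for tool, _scenario, completed, seconds in records:
        by_tool.setdefault(tool, []).append((completed, seconds))
    for tool, results in by_tool.items():
        rate = sum(1 for ok, _ in results if ok) / len(results)
        med = median(s for ok, s in results if ok) if any(ok for ok, _ in results) else None
        print(f"{tool}: {rate:.0%} success, median {med}s to complete")

summarise(sessions)
```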

Phase two
For the second phase we chose to concentrate on testing very specific elements of the search experience.  So, for example, we looked at radio buttons and drop-down lists, and whether they should be for Author/Title/Keyword or Article/Journal title/Library catalogue.  We also looked at the layout of results screens and the display of facets, asking students how they wanted to see date facets presented, for example.

Discovery search mockup

We wanted to carry out this research with some very plain wireframes, to test individual features without the distraction of website designs confusing the picture.  We tend to use a wireframing tool called Balsamiq to create our wireframes rapidly, and we ran through another sequence of testing, this time with a total of 9 students in a series of online interviews, again using Teamviewer.

By using wireframing you can quickly create several versions of a search box or results page and put them in front of users.  It’s a good way of narrowing down the features that are worth taking through to full-scale prototyping.  It’s much quicker than coding the feature, and once you’ve identified the features that you want your developer to build you have a ready-made wireframe to act as a guide for the layout and features that need to be created.

Phase three
The last phase is our prototype-building phase and involves taking all the research and distilling it into a set of functional requirements for our project developer to build against.  In some of our projects we’ve shared the specification with students so they could agree which features they wanted to see, but with this project we had a good idea from the first two phases what features they wanted in a baseline search tool, so we missed out that stage.  We did, however, split the functional requirements into two stages: a baseline set of requirements for the search box and the results; and then a section to capture the iterative requirements that would arise during the prototyping stage.  We aimed for a rolling cycle of build and test, although in practice we’ve set up sessions for when students are available and then gone with the latest version each time – getting students to test and refine the features and identify new features to build and test.  New features get identified and added to what is essentially a product backlog (in Scrum terminology).  A weekly team meeting prioritises the tasks for the developer to work on and we go through a rolling cycle of develop and test.

Reflections on the process
The process seems to have worked quite well.  We’ve had really good engagement from students and really good feedback that is helping us to tease out what features we need to have in any library search tool.  We’re about halfway through phase three and are aiming to finish the research by the end of July.  Our aim is then to put the search tool up as a beta on the library website, so a wider group of users can trial it.

Catching up this week with some of the things from last week’s UKSG conference, so I’ve been viewing some of the presentations that have been put up on YouTube at https://www.youtube.com/user/UKSGLIVE.  There were a few that were of particular interest, especially those covering the Discovery strand.

The one that really got my attention was from Simone Kortekaas of Utrecht University, talking about their decision to move away from discovery by shutting down their own in-house developed search system, and now looking at shutting down their WebOPAC.  The presentation is embedded below.

I found it interesting to work through the process that they went through: realising that most users were starting their search somewhere other than the library (mainly Google Scholar), and so deciding to focus on making it easier for users to access library content through that route rather than trying to get users to come to the library and a library search tool.  It recognises that other players (i.e. the big search engines) may do discovery better than libraries.

I think I’d agree with the principle that libraries need to be where their users are.  So providing holdings to Google Scholar so the ‘find it at your library’ feature works, and providing bookmarklet tools (e.g. http://www.open.ac.uk/library/new-tools/live-tools) to help users log in, are all important things to do.  But whilst Google and Bing now seem to be better at finding academic content, they still lack Google Scholar’s ‘Library links’ feature and the ability to upload your holdings that would allow you to offer the same form of ‘Find it at the…’ feature in those spaces.  And with Google Scholar you always worry about how ‘mainstream’ it is considered.

It is an interesting direction to take as a strategic decision and means that you need to carefully monitor (as Utrecht do) trends in user activity and in particular changes in those major search engines to make sure that your resources can be found through major search engines.   One consequence is that users are largely being taken to publisher websites to access the content and we know that the variations in these sites can cause users some difficulty/confusion.  But it’s an approach to think about as we see where the trend for discovery takes us.
