Photograph of office buildings at Holborn Circus – I was struck by the different angles of the buildings

Themes

For me, two big themes came to mind after this year’s Future of Technology in Education Conference (FOTE): the first around creativity, innovation and co-creation; the second about how fundamental data and analytics are becoming.

Creativity, innovation and co-creation

Several of the speakers talked about innovation and creativity.  Dave Coplin talked of the value of Minecraft and Project Spark and the need to create space for creativity, while Bethany Koby showed us examples of some of the maker kits ‘Technology Will Save Us’ are creating.

Others talked of ‘flipping the classroom’, learning from students and co-creation. It was interesting that in the tech start-up pitchfest a lot of the ideas were student-created tools, some working in the area of collaborative learning.

Data and analytics

The second big trend for me was analytics and data.  I was particularly interested to see how many of the tools and apps being pitched at the conference had an underlying layer of analytics.  Evaloop, working in the area of student feedback; Knodium, a space for student collaboration; Reframed.tv, offering interaction and sharing tools for video content; Unitu, an issue-tracking tool; and MyCQs, a learning tool, all seemed to make extensive use of data and analytics, while Fluency included teaching analytics skills.  It is interesting to see how many app developers have learnt the lesson of Amazon and Google about the value of the underlying data.

Final thoughts and what didn’t come up at the conference

I didn’t hear the acronym MOOC at all – slightly surprising, as it was certainly a big theme of last year’s conference.  Has the MOOC bubble passed, or is it just embedded into the mainstream of education?  Similarly Learning Analytics as a specific theme: analytics and data were certainly mentioned (as I’ve noted above), but Learning Analytics itself didn’t get a mention – maybe it’s embedded into HE practice now?

Final thoughts on FOTE.  A different focus to previous years but still with some really good sessions and the usual parallel social media back-channels full of interesting conversations. Given that most people arrived with at least one mobile device, power sockets to recharge them were in rather short supply.

Friday in early October, so it must be time for ULCC’s Future of Technology in Education (FOTE) at Senate House in London. I’ve been fortunate to be able to go several times, but it is always a scramble to get one of the scarce tickets when they are released on Eventbrite during August. They often seem to be released while I am away on holiday, so over the years I’ve booked my FOTE ticket from a variety of places.

The conference usually gives a good insight into the preoccupations of educational technologists at a particular time. In some ways I tend to use it as a bit of a checklist as much as a conference that surfaces completely new things. So it is a case of looking at the trends and thinking about how they are relevant to us: what are we doing in that area, and are there other things we need to be thinking about?

Current preoccupations in this area are certainly around the practicalities, ethics and so on of learning analytics. It is interesting to see that Arkivum are here with a stand, which reflects a current preoccupation with Research Data Management.

I know I haven’t been blogging much since the summer, mainly because of too many other things going on – primarily a new library management system and discovery system implementation. So I want to find a bit of time to reflect on FOTE and on our new LMS.


To Birmingham at the start of last week for the latest Jisc Library Analytics and Metrics Project (http://jisclamp.mimas.ac.uk/) Community Advisory and Planning group meeting.  This was a chance to catch up with both the latest progress and the latest thinking about how this library analytics and metrics work will develop.

At a time when learning analytics is a hot topic, it’s highly relevant for libraries to consider how they might respond to its challenges. The 2014 Horizon Report puts learning analytics in the category of one year or less to adoption and describes it as ‘data analysis to inform decisions made on every tier of the education system, leveraging student data to deliver personalized learning, enable adaptive pedagogies and practices, and identify learning issues in time for them to be solved.’

LAMP is looking at library usage data of the sort that libraries collect routinely (loans, gate counts, e-resource usage), but combines it with course, demographic and achievement data to allow libraries to start to analyse and identify trends and themes in the data.
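To make that idea concrete, here is a minimal sketch of the kind of join involved – this is not LAMP’s actual implementation, and the file names and column names are hypothetical:

```python
# A minimal sketch of combining routine library usage data with course and
# achievement data. File names and column names are hypothetical, not LAMP's schema.
import pandas as pd

# e.g. one row per loan: student_id, item_id, loan_date
loans = pd.read_csv("loans.csv")
# e.g. one row per student: student_id, course, final_grade
students = pd.read_csv("students.csv")

# Count loans per student, then attach course and achievement data
loans_per_student = (
    loans.groupby("student_id").size().rename("loan_count").reset_index()
)
combined = students.merge(loans_per_student, on="student_id", how="left")
combined["loan_count"] = combined["loan_count"].fillna(0)

# A simple trend to explore: average loans per student by course and grade band
summary = combined.groupby(["course", "final_grade"])["loan_count"].mean()
print(summary)
```

Even a toy summary like this shows why the demographic and achievement data matters: the usage figures on their own say very little until you can break them down by who the users are and how they are doing.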

LAMP will build a tool to store and analyse data and is already working with some pilot institutions to design and fine-tune the tool.  We got to see some of the work so far and input into some of the wireframes and concepts, as well as hear about some of the plans for the next few months.

The day was also the chance to hear from the developers of a reference management tool called RefMe (www.refme.com).  This referencing tool is aimed at students, who often struggle with the typically complex requirements of referencing styles and tools.  To hear about one-click referencing, with thousands of styles and with features to integrate with MS Word, or to scan a barcode and reference a book, was really good.  RefMe is available as an iOS or Android app and as a desktop version.  As someone who’s spent a fair amount of time wrestling with the complexities of referencing in projects that have tried to get simple referencing tools in front of students, it is really good to see a start-up tackling this area.

There seems to have been a flurry of activity around reading list systems in recent weeks.  There’s the regular series of announcements of new customers for Talis Aspire, which clearly seems to be the market leader in this class of systems, but there have also been two particular examples of the integration of reading list systems into Moodle.

Firstly, the University of Sussex have been talking about their integration of Aspire into Moodle.  Slides from their presentation at ALRG are available from their repository.  There is also a really good video that they’ve put together showing how the integration works in practice.  The video shows how easy it seems to be to add a section from a reading list directly into a Moodle course.  It looks like a great example of integration, and it seems mostly to have been done without using the Aspire API.   One question I’d have is whether the embedded list automatically updates if changes are made to the reading list, but it looks like a really neat development.

The other reading list development comes from EBSCO, with their Curriculum Builder LMS plugin for EBSCO Discovery.   There’s also a video for this showing an integration with Moodle.   This development makes use of the IMS Learning Tools Interoperability (LTI) standard to achieve the integration.   The approach mainly seems to be viewed from the discovery system side, with features to let you find content in EBSCO Discovery and then add it to a reading list, rather than being a separate reading list builder.  It’s interesting to see the tool being looked at from the perspective of a course creator developing a reading list, and useful to have features such as notes for each item on a list.  What looks to be different from the Sussex approach is that when you go to the reading list from within Moodle you are taken out of Moodle, and don’t see the list of resources in-line.
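For anyone unfamiliar with LTI, the following is a generic sketch of what an LTI 1.1 ‘basic launch’ involves – it is not EBSCO’s or Moodle’s actual code, it assumes the oauthlib library, and all URLs, keys and ids are made-up placeholders. In essence, the VLE POSTs an OAuth 1.0-signed form to the tool:

```python
# Sketch of an LTI 1.1 basic launch: the VLE POSTs an OAuth 1.0-signed form
# to the tool provider. URLs, key and secret below are placeholders.
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

launch_url = "https://tool.example.org/lti/launch"   # hypothetical tool endpoint
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "reading-list-week-3",        # identifies this placement
    "user_id": "student-123",
    "roles": "Learner",
    "context_id": "course-abc",                       # the course in the VLE
    "launch_presentation_return_url": "https://vle.example.ac.uk/return",
}

client = Client("consumer-key", client_secret="shared-secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    launch_url, http_method="POST", body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# `body` now holds the form fields plus the oauth_* signature parameters,
# ready to be auto-submitted to the tool in the user's browser.
print(body)
```

The appeal of the standard is that the VLE only needs the launch URL, key and secret; everything else about the reading list lives on the tool provider’s side.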

There’s a developing resource bank of information on Helibtech at http://helibtech.com/Reading_Resource+lists that is useful for keeping an eye on developments in this area.

Liblink admin screen

The approach we’ve been taking is with a system called Liblink (which incidentally was shortlisted this year in the Times Higher Education Leadership and Management Awards for Departmental ICT Initiative of the Year).  Liblink developed out of a system created to manage dynamic content for our main library website, for pages like http://www.open.ac.uk/library/library-resources/statistics-sources

The concept was to pull resources from a central database that was being updated regularly with data from systems such as SFX and the library catalogue.  This ensured that the links were managed and that there was a single record for each resource.  It then became obvious that the system, with some development, could replace a clutch of different resource list and linking systems that had been adopted over the years, and could be used as our primary tool to manage linking to resources.  The tool is designed to allow us to push out lists of resources using RSS so they can be consumed by our Moodle VLE, but it also offers other formats such as HTML, plain text and RIS.
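As an illustration of the consuming side, a resource-list RSS feed of this kind can be pulled into another system with a few lines of code. This is a sketch only – the feed URL and fields are invented placeholders, not the real Liblink endpoints:

```python
# Sketch of consuming a resource-list RSS feed of the kind Liblink publishes.
# The feed URL is a made-up placeholder.
import feedparser

feed_url = "https://library.example.ac.uk/liblink/lists/statistics-sources.rss"
feed = feedparser.parse(feed_url)

print(feed.feed.get("title", "Resource list"))
for entry in feed.entries:
    # Each item carries the managed, single-record link for the resource
    print(f"- {entry.title}: {entry.link}")
```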


I picked up over the weekend via the No Shelf Required blog that EBSCO Discovery usage data is now being added into Plum Analytics.    EBSCO’s press release talks about providing “researchers with a much more comprehensive view of the overall impact of a particular article”.   Plum Analytics was fairly recently taken over by EBSCO, so it’s not so surprising that they’d be looking at how EBSCO’s data could enhance the metrics available through Plum Analytics.

It’s interesting to see the different uses that activity data in this sphere can be put to.  There are examples of it being used to drive recommendations, such as hot articles, or Automated Contextual Research Assistance. LAMP is talking of using activity data for benchmarking purposes.  So you’re starting to see a clutch of services being driven by activity data, just as the likes of Amazon drive so much of what appears on their sales sites with data from customer activity.
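A toy sketch of the simplest version of that recommendation idea – ‘people who viewed this article also viewed’ built from raw usage events – looks something like this (the events here are invented sample data, not anything from a real service):

```python
# Toy "also viewed" recommender built from activity data: for a given article,
# find the articles most often viewed by the same users. Invented sample data.
from collections import defaultdict, Counter

events = [            # (user_id, article_id) view events
    ("u1", "a1"), ("u1", "a2"), ("u2", "a1"), ("u2", "a3"),
    ("u3", "a1"), ("u3", "a2"), ("u3", "a3"), ("u4", "a2"),
]

viewed_by = defaultdict(set)
for user, article in events:
    viewed_by[user].add(article)

def also_viewed(article, top_n=3):
    counts = Counter()
    for articles in viewed_by.values():
        if article in articles:
            counts.update(articles - {article})
    return counts.most_common(top_n)

print(also_viewed("a1"))   # e.g. [('a2', 2), ('a3', 2)]
```

The production services will be doing far more than this, of course, but the underlying principle is the same: the value is in the accumulated activity data, not in any one transaction.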

Beadnell waders

For a few months now we’ve been running a project to look at student needs from library search.  The idea behind the research is that we know students find library search tools difficult compared with Google – we know it’s a pain point.  But we don’t actually know in very much detail what it is about those tools that students find difficult, what features they really want to see in a library search tool, and what they don’t want.   So we’ve set about trying to understand more about their needs.  In this blog post I’m going to run through the approach that we are taking.  (In a later blog post I hope to cover some of the things that we are learning.)

Approach
Our overall approach is that we want to work alongside students (something that we’ve done before in our personalisation research) in a model that draws a lot of inspiration from co-design. Instead of building something and then usability testing it with students at the end, we want to involve students at a much earlier stage in the process so that, for example, they can help to draw up the functional specification.

We’re fortunate in having a pool of 350 or so students who agreed to work with us for a few months on a student panel.  That means we can invite students from the panel to take part in research or give us feedback on a small number of different activities.  Students don’t have to take part in a particular activity, but being part of the panel means they are generally predisposed to working with us.  So we’re getting a really good take-up of our invitations – I think that so far we’ve had more than 30 students involved at various stages, which gives us a good breadth of opinions from students studying different subjects, at different study levels and with different skills and knowledge.

We’ve split the research into three stages: an initial stage that looked at different search scenarios and different tools; a second stage that drew some general features out of the first phase and tried them on students; and a third phase that creates a new search tool and then undertakes an iterative cycle of develop, test, develop, test and so on.  The diagram shows the sequence of the process.

Discovery research stages

The overall direction of the project is that we should have a better idea of student needs to inform the decisions we make about Discovery – about the search tools we might build or how we might set up the tools we use.

As with any research activities with students we worked with our student ethics panel to design the testing sessions and get approval for the research to take place.

Phase One
We identified six typical scenarios: finding an article from a reference, finding a newspaper article from a reference, searching for information on a particular subject, searching for articles on a particular topic, finding an ebook from a reference, and finding the Oxford English Dictionary.   All the scenarios were drawn from activities that we ask students to do, so we used the actual subjects and references that they are asked to find.  We identified eight different search tools to use in the testing: our existing One stop search, the mobile search interface that we created during the MACON project, a beta search tool that we have on our library website, four different search tools from other universities, and Google Scholar.  The tools had a mix of tabbed search, radio buttons and bento-box-style search results, chosen to introduce students to different approaches to search.

Because we are a distance learning institution, students aren’t on campus, so we set up a series of online interviews.  We were fortunate to be able to make use of the usability labs at our Institute of Educational Technology, and used TeamViewer software for the online interviews.  In total we ran 18 separate sessions, with each one testing 3 scenarios in 3 different tools.  This gave us a good range of different students testing different scenarios on each of the tools.

Sessions were recorded and notes were taken so we were able to pick up on specific comments and feedback.  We also measured success rate and time taken to complete the task.  The features that students used were also recorded.  The research allowed us to see which tools students found easiest to use, which features they liked and used, and which tools didn’t work for certain scenarios.
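For anyone wanting to do something similar, the analysis side needs very little machinery. Here is a rough sketch – with invented example records rather than our real data – of the kind of per-tool summary we pull out of the session notes:

```python
# Sketch of summarising session results per tool: success rate and mean time
# on task. The records below are invented examples, not our data.
from statistics import mean
from collections import defaultdict

# One record per (session, scenario): which tool, did it succeed, seconds taken
results = [
    {"tool": "One stop search", "success": True,  "seconds": 95},
    {"tool": "One stop search", "success": False, "seconds": 240},
    {"tool": "Google Scholar",  "success": True,  "seconds": 60},
    {"tool": "Google Scholar",  "success": True,  "seconds": 80},
]

by_tool = defaultdict(list)
for r in results:
    by_tool[r["tool"]].append(r)

for tool, recs in by_tool.items():
    rate = sum(r["success"] for r in recs) / len(recs)
    avg_time = mean(r["seconds"] for r in recs)
    print(f"{tool}: {rate:.0%} success, {avg_time:.0f}s average")
```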

Phase two
For the second phase we chose to concentrate on testing very specific elements of the search experience.  For example, we looked at radio buttons and drop-down lists, and whether they should offer Author/Title/Keyword or Article/Journal title/Library catalogue.  We also looked at the layout of results screens and the display of facets, asking students, for example, how they wanted to see date facets presented.

Discovery search mockup

We wanted to carry out this research with some very plain wireframes, to test individual features without the distraction of website designs confusing the picture.  We tend to use a wireframing tool called Balsamiq to create our wireframes rapidly, and we ran through another sequence of testing, this time with a total of 9 students in a series of online interviews, again using TeamViewer.

By using wireframing you can quickly create several versions of a search box or results page and put them in front of users.  It’s a good way of narrowing down the features that are worth taking through to full-scale prototyping.  It’s much quicker than coding the feature, and once you’ve identified the features that you want your developer to build, you have a ready-made wireframe to act as a guide for the layout and features that need to be created.

Phase three
The last phase is our prototype building phase and involves taking all the research and distilling it into a set of functional requirements for our project developer to build.  In some of our projects we’ve shared the specification with students so they can agree which features they want to see, but with this project we had a good idea from the first two phases what features they wanted in a baseline search tool, so we missed out that stage.  We did, however, split the functional requirements into two stages: a baseline set of requirements for the search box and the results, and then a section to capture the iterative requirements that arise during the prototyping stage.  We aimed for a rolling cycle of build and test, although in practice we’ve set up sessions for when students are available and then gone with the latest version each time – getting students to test and refine the features and identify new features to build and test.  New features get identified and added to what is essentially a product backlog (in scrum methodology/terminology).  A weekly team meeting prioritises the tasks for the developer to work on, and we go through a rolling cycle of develop/test.

Reflections on the process
The process seems to have worked quite well.  We’ve had really good engagement from students and really good feedback that is helping us to tease out what features we need to have in any library search tool.  We’re about halfway through phase three and are aiming to finish the research by the end of July.  Our aim is to get the search tool up as a beta on the library website as the next step, so a wider group of users can trial it.

Catching up this week with some of the things from last week’s UKSG conference, I’ve been viewing some of the presentations that have been put up on YouTube at https://www.youtube.com/user/UKSGLIVE.   There were a few of particular interest, especially those covering the Discovery strand.

The one that really got my attention was from Simone Kortekaas of Utrecht University, talking about their decision to move away from providing discovery: shutting down their own in-house developed search system and now looking at shutting down their WebOPAC.  The presentation is embedded below.

I found it interesting to work through the process they went through: realising that most users were starting their search somewhere other than the library (mainly Google Scholar), and so deciding to focus on making it easier for users to access library content through that route, instead of trying to get users to come to the library and a library search tool.  It recognises that other players (i.e. the big search engines) may do discovery better than libraries.

I think I’d agree with the principle that libraries need to be where their users are.  So providing holdings to Google Scholar so the ‘find it at your library’ feature works, and providing bookmarklet tools (e.g. http://www.open.ac.uk/library/new-tools/live-tools) to help users log in, are all important things to do.  But whilst Google and Bing now seem to be better at finding academic content, they still lack Google Scholar’s ‘Library links’ feature and the ability to upload your holdings that would allow you to offer the same form of ‘Find it at the…’ feature in those spaces.  And with Google Scholar you always worry about how ‘mainstream’ it is considered to be.

It is an interesting strategic direction to take, and it means that you need to carefully monitor (as Utrecht do) trends in user activity, and in particular changes in those major search engines, to make sure that your resources can still be found through them.   One consequence is that users are largely being taken to publisher websites to access the content, and we know that the variations in these sites can cause users some difficulty and confusion.  But it’s an approach to think about as we see where the trend for discovery takes us.


For a little while I’ve been trying to find some ways of characterising the different generations or ages of library ‘search’ systems.  By library ‘search’ I’ve been thinking in terms of tools to find resources in libraries (search as a locating tool) as well as the more recent trend (although online databases have been with us for a while) of search as a tool to find information.

Library search ages

I wanted something that I could use as a comparison that picked up on some of the features of library search but compared them with some other domain that was reasonably well known.  Then I was listening to the radio the other day and there was some mention that it was the anniversary of the 45rpm single, and that made me wonder whether I could compare the generations of library search against the changes in formats in the music industry.

My attempt at mapping them across is illustrated here.  There are some connections – discovery systems and streaming music services like Spotify are both cloud hosted, and early printed music scores sit alongside printed library catalogues such as the original British Museum library catalogue.  I’m not so sure about some of the stages in between, though; certainly the direction for both has been to make library/music content more accessible.  But it seemed like a worthwhile thing to think about and try out. Maybe it works, maybe not.


We’ve been using Trello (http://trello.com) as a tool to help us manage the lists of tasks in the digital library/digital archive project that we’ve been running.  After looking at some of our existing tools (such as Mantis Bug Tracker), the team decided that they didn’t really want the detailed tracking features, and didn’t feel that our standard project management tools (MS Project and the One Page Project Manager, or Outlook tasks) were quite what we needed to keep track of what is essentially a ‘product backlog’ – a list of requirements that need to be developed for the digital archive system.

Trello desktop screenshot

Trello’s simplicity makes it easy to add and organise a list of tasks and break them down into categories, with colour-coding and the ability to drag tasks from one stream to another.  Being able to share the board across the team and assign members to tasks is good.  You can also set due dates and attach files, which we’ve found useful for attaching design and wireframe illustrations.  You can set up as many different boards as you need, so you can break down your tasks however you want.  The boards scroll left and right so you can go to as many columns as you need.

We’ve been using it to group together priority tasks into a list so the team know which tasks to concentrate on, and when the tasks are done the team member can update the task message so each task can be checked and cleared off the list.
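Although we mostly work in the Trello web interface, the same lists can also be driven from scripts through Trello’s REST API – for example, adding a backlog item as a card. The key, token and list id below are placeholders you would get from your own Trello account; this is just a sketch of the idea:

```python
# Sketch: adding a task to a Trello list through the REST API.
# TRELLO_KEY, TRELLO_TOKEN and the list id are placeholders for your own
# credentials and board.
import requests

TRELLO_KEY = "your-api-key"
TRELLO_TOKEN = "your-api-token"
BACKLOG_LIST_ID = "5f1e2d3c4b5a697887766554"   # id of the priority list

resp = requests.post(
    "https://api.trello.com/1/cards",
    params={
        "key": TRELLO_KEY,
        "token": TRELLO_TOKEN,
        "idList": BACKLOG_LIST_ID,
        "name": "Add date facet to search results",
        "desc": "Raised in this week's testing session",
    },
)
resp.raise_for_status()
print(resp.json()["id"])   # id of the newly created card
```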

Trello iPad screenshot

We’re mainly using Trello on the desktop straight from the website, although there is also an iPad app that seems to work well.  For a fairly small team with just a single developer, Trello works quite well: it’s simple and easy to use, doesn’t take a lot of effort to keep up to date, and is a practical and useful tool.   If you had a larger project you might want more sophisticated tools that can track progress and effort and produce burndown charts, for example, but as a simple way of tracking a list of tasks to be worked on it’s a useful project tool.


Highland cow and calf at Inversnaid

I’m always looking to find out about the tools and techniques that people are using to improve their websites, and particularly how they go about testing the user experience (UX) to make sure that they can make steady improvements to their site.

So I’m a regular follower of some of the work that is going on in academic libraries in the US (e.g. Scott Young talking about A/B testing and experiments at Montana, and Matt Reidsma talking about Holistic UX).    It was particularly useful to find out about the process that led to the three headings on the home page of the Montana State University Library website, and the stages that they went through before they settled on Find, Request and Services.  A step-by-step description showing the tools and techniques is a really valuable demonstration of how they went about the process and how they made their decision.  It is interesting to me how frequently libraries seem not to pick the right words to describe their services, and don’t pick words that make sense to their users.  But it’s really good to see an approach that largely gets users to decide what works by testing it, rather than by asking users what they prefer.

Something else I came across the other week was the term ‘guerilla testing’ applied to testing the usability of websites (I think that probably came from the session on ‘guerilla research’ that Martin Weller and Tony Hirst ran the other week, which I caught up with via their blog posts and recording).  That led on to ‘Guerilla testing’ in the Government Service Design Manual (there’s a slight sense of irony for me about guerilla testing being documented – in some detail – in a design manual).  But the way it talks through the process, its strengths and weaknesses, is really useful, and it made me think about the contrast between that approach and the fairly careful and deliberate approach that we’ve been taking with our work over the last couple of months.  Some things to think about.

Reflections on our approach
It’s good to get an illustration from Montana of the value of the A/B testing approach.  It’s a well-evidenced and standard approach to web usability, but it is something that we’ve found difficult to use in a live environment, as it makes our helpdesk people anxious that they can’t be sure which version of the website customers might be seeing.  So we’ve tended to use step-by-step iterations rather than straightforward A/B testing.  But it’s something to revisit, I think.
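For completeness, here is a rough sketch of the kind of check that sits behind an A/B comparison: given counts of task successes and failures for two versions of a page (made-up numbers here, not Montana’s or ours), a chi-squared test indicates whether the difference is likely to be more than noise.

```python
# Sketch of evaluating an A/B test: did version B's task-success rate differ
# from version A's more than chance would explain? Counts are illustrative.
from scipy.stats import chi2_contingency

#                 successes, failures
version_a = [42, 58]   # 42% success
version_b = [55, 45]   # 55% success

chi2, p_value, dof, expected = chi2_contingency([version_a, version_b])
print(f"p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference unlikely to be chance; prefer the better variant.")
else:
    print("No clear winner yet; keep testing.")
```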

The piece of work that we’re concentrating on at the moment is looking at student needs from library search.  We know it’s a pain point for students: we know it’s not ‘like Google’ and isn’t as intuitive as they feel it should be.  So we’re trying to gain a better understanding of what we could do to make it a better experience (and what we shouldn’t do), and we’re working with a panel of students who want to work with the library to help create better services.

The first round tested half a dozen typical library resource search scenarios against eight different library search tools (some from us and some from elsewhere) with around twenty different users.   We did all our testing as remote 1:1 sessions using TeamViewer software (although you could probably use Skype or a number of alternatives) and were able to record the sessions and have observers/note-takers.  We’ve assessed the success rate for each scenario against each tool and also measured the average time it took to complete each task with each tool (or the time before people gave up).  These are giving us a good idea of what works and what doesn’t.

For the second round, a series of wireframes was created using Balsamiq to test different variants of search boxes and results pages.  We ran a further set of tests, again with the panel and again remotely.  We’ve now got a pretty good idea of some of the things that look like they will work, so we have started to prototype a real search tool.  We’re now going to run a series of iterative development cycles, testing tools with students and then refining them.  That should greatly improve our understanding of what students want from library search and allow us to experiment with how we can build the features they want.
