
At the end of November I was at a different sort of conference from the ones I normally get to attend.  This one, Design4learning, was held at the OU in Milton Keynes but was a more general education conference.  Described as aiming “to advance the understanding and application of blended learning, design4learning and learning analytics”, Design4learning covered topics such as MOOCs, elearning, learning design and learning analytics.

There was a useful series of presentations at the conference and several of them are available from the conference website.  We’d put together a poster – entitled ‘Learning Analytics – exploring the value of library data’ – about the work we’ve started to do in the library on ‘library analytics’, and it was good to talk to a few non-library people about the wealth of data that libraries capture and how that can contribute to the institutional picture of learning analytics.

Our poster covered some of the exploration that we’ve been doing, mainly with online resource usage from our EZProxy logfiles.  In some cases we’ve been able to join that data with demographic and other data from surveys to start to look in a very small way at patterns of online library use.
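As a sketch of the sort of joining involved (the real EZProxy log layout depends on the LogFormat configured on the server, and the field names and sample records here are entirely made up for illustration), parsing a logfile and linking per-user usage counts to survey responses might look something like this in Python with pandas:

```python
import io
import re
import pandas as pd

# Made-up EZProxy log lines in an NCSA-style layout; the real field order
# depends on the LogFormat directive in the EZProxy configuration.
SAMPLE = """\
1.2.3.4 - alice [01/Nov/2014:10:00:01 +0000] "GET http://journal.example/a HTTP/1.1" 200 5120
1.2.3.4 - alice [01/Nov/2014:10:05:42 +0000] "GET http://journal.example/b HTTP/1.1" 200 2048
5.6.7.8 - bob [01/Nov/2014:11:12:13 +0000] "GET http://ebook.example/c HTTP/1.1" 200 9000
"""

LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<ts>[^\]]+)\] "(?P<request>[^"]*)"'
)

# Parse each line into a dict of named fields, skipping anything malformed
rows = [m.groupdict() for m in (LOG_RE.match(l) for l in io.StringIO(SAMPLE)) if m]
usage = pd.DataFrame(rows)

# Count accesses per user, then join with (hypothetical) survey data keyed
# on the same user identifier
counts = usage.groupby("user").size().rename("accesses").reset_index()
survey = pd.DataFrame({"user": ["alice", "bob"], "faculty": ["Arts", "Science"]})
joined = counts.merge(survey, on="user", how="inner")
print(joined)
```

In practice the user identifier would need to be pseudonymised before any joining, given the sensitivity of this data.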

Design4learning conference poster v3

The poster also highlighted the range of data that libraries capture and the sorts of questions that could be asked and potentially answered.  It also flagged up the leading-edge work by projects such as Huddersfield’s Library Impact Data Project and the work of the Jisc Lamp project.

An interesting conference and an opportunity to talk with a different group of people about the potential of library data.

Photograph of office buildings at Holborn Circus

Holborn Circus – I was struck by the different angles of the buildings


For me two big themes came to mind after this year’s Future of Technology in Education Conference (FOTE). Firstly, around creativity, innovation and co-creation; and secondly about how fundamental data and analytics is becoming.

Creativity, innovation and co-creation

Several of the speakers talked about innovation and creativity.  Dave Coplin talked of the value of Minecraft and Project Spark and the need to create space for creativity, while Bethany Koby showed us examples of some of the maker kits ‘Technology Will Save Us’ are creating.

Others talked of ‘flipping the classroom’ and learning from students as well as co-creation and it was interesting in the Tech start-up pitchfest that a lot of the ideas were student-created tools, some working in the area of collaborative learning.

Data and analytics

The second big trend for me was analytics and data.  I was particularly interested to see how many of the tools and apps being pitched at the conference had an underlying layer of analytics.  Evaloop (working in the area of student feedback), Knodium (a space for student collaboration), a tool offering interaction and sharing for video content, Unitu (an issues-tracking tool) and MyCQs (a learning tool) all seemed to make extensive use of data and analytics, while Fluency included teaching analytics skills.  It is interesting to see how many app developers have learnt the lesson of Amazon and Google about the value of the underlying data.

Final thoughts and what didn’t come up at the conference

I didn’t hear the acronym MOOC at all – slightly surprising as it was certainly a big theme of last year’s conference.  Has the MOOC bubble passed, or is it just embedded into the mainstream of education?  Similarly Learning Analytics as a specific theme: analytics and data were certainly mentioned (as I’ve noted above), but Learning Analytics itself got not a mention – maybe it’s embedded into HE practice now?

Final thoughts on FOTE.  A different focus to previous years but still with some really good sessions and the usual parallel social media back-channels full of interesting conversations. Given that most people arrived with at least one mobile device, power sockets to recharge them were in rather short supply.

To Birmingham at the start of last week for the latest Jisc Library Analytics and Metrics Project (LAMP) Community Advisory and Planning group meeting.  This was a chance to catch up with both the latest progress and the latest thinking about how this library analytics and metrics work will develop.

At a time when learning analytics is a hot topic it’s highly relevant for libraries to consider how they might respond to its challenges.  The 2014 Horizon report puts learning analytics in the ‘one year or less to adoption’ category and describes it as ‘data analysis to inform decisions made on every tier of the education system, leveraging student data to deliver personalized learning, enable adaptive pedagogies and practices, and identify learning issues in time for them to be solved.’

LAMP is looking at library usage data of the sort that libraries collect routinely (loans, gate counts, eresource usage) but combines it with course, demographic and achievement data to allow libraries to start to analyse and identify trends and themes in the data.

LAMP will build a tool to store and analyse data and is already working with some pilot institutions to design and fine-tune the tool.  We got to see some of the work so far and input into some of the wireframes and concepts, as well as hear about some of the plans for the next few months.

The day was also the chance to hear from the developers of a reference management tool called RefMe.  This referencing tool is aimed at students, who often struggle with the typically complex requirements of referencing styles and tools.  To hear about one-click referencing, with thousands of styles and with features to integrate with MS Word, or to scan a barcode and reference a book, was really good.  RefMe is available as an iOS or Android app and as a desktop version.  As someone who’s spent a fair amount of time wrestling with the complexities of referencing in projects that have tried to get simple referencing tools in front of students, it is great to see a start-up tackling this area.

We’ve been using Trello as a tool to help us manage the lists of tasks in the digital library/digital archive project that we’ve been running.  After looking at some of our existing tools (such as Mantis Bug Tracker) the team decided that they didn’t really want the detailed tracking features, and didn’t feel that our standard project management tools (MS Project and the One Page Project Manager, or Outlook tasks) were quite what we needed to keep track of what is essentially a ‘product backlog‘ – a list of requirements that need to be developed for the digital archive system.

Trello’s simplicity makes it easy to add and organise a list of tasks and break them down into categories, with colour-coding and the ability to drag tasks from one stream to another.  Being able to share the board across the team and assign members to a task is good.  You can also set due dates and attach files, which we’ve found useful for attaching design and wireframe illustrations.  You can set up as many different boards as you need, so you can break down your tasks however you want, and the boards scroll left and right so you can have as many columns as you need.

We’ve been using it to group together priority tasks into a list so the team know which tasks to concentrate on, and when the tasks are done the team member can update the task message so each task can be checked and cleared off the list.

We’re mainly using Trello on the desktop straight from the website, although there is also an iPad app that seems to work well.  For a fairly small team with just a single developer Trello works quite well: it’s simple and easy to use and doesn’t take a lot of effort to keep up to date – a practical and useful tool.  For a larger project you might want more sophisticated tools that can track progress and effort and produce burndown charts, for example, but as a simple way of tracking a list of tasks to be worked on it’s a useful project tool.




Great though touch-screens on tablets and smartphones are, one of the drawbacks I’ve found is that typing on them isn’t a particularly nice experience.  It’s all too easy to type the wrong character, and one of the things that is always frustrating about typing notes on an iPad is how much time you have to spend correcting what you’ve typed.  So I was really interested to see a tweet today about a technology that has been around for a little while that makes raised buttons appear from the surface of a touch screen when needed.  I checked out the article from Business Insider and then browsed around for some other information about the technology, including this Techcrunch blog post and the website for Tactus Technology, the company developing this idea.  It looks like a really interesting idea that could make typing on a tablet a much nicer experience and avoid having to cart around peripherals such as add-on keyboards.

Essentially the technology seems to consist of a fluid layer that can generate raised buttons as and when needed.  It’s quite intriguing to see buttons suddenly morph out of a flat screen.  What you get is a small raised button that looks like it will be easier to touch and will reduce the chance of mistaken keystrokes.  I’d be intrigued to find out what the buttons actually ‘feel’ like, but they look like a really useful feature.

Ideally this technology would be integrated into the design of the smartphone or tablet and driven by the software, although I see that they’ve also worked on an interim approach using a case.  It will be really interesting to see how they get on with getting this technology into mainstream devices and when we might see the first production examples.  It also makes me wonder whether the fine definition of the technology would let you develop a tablet that could display braille.

Photograph of Robin

iBeacon is an interesting piece of location-based Apple technology and I started wondering about how useful it might be in a library context.  Essentially (as this article from the Guardian describes) it is being sold as a micro-broadcast technology where transmitters can communicate with nearby smartphones.  So there have been proposed applications that allow shops to send you messages about special offers as you walk past – a sort of advertising sandwich-board, I suppose.

But that technology might be interesting in a library context.  You could see it directing you to where there is a public PC that is available for use, or telling you when you enter a library that something you have reserved is available for collection (or reminding you of things that are due for return).  You could envisage it flagging up new resources as you walk round different sections in a library, or maybe tell you about library events related to that section.  Browsing the fiction, maybe you might be interested in knowing about the ebooks that are available, or knowing about the book group that meets?
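As an illustration of how simple the library side of that messaging could be (the identifiers and messages here are invented, and on a real iOS device the detection itself would come from Core Location beacon-ranging callbacks rather than Python), a minimal lookup from beacon identifier to library message might be:

```python
# Hypothetical mapping from an iBeacon's (major, minor) identifier pair to a
# library message; each physical transmitter in the building would be
# configured with its own pair.
BEACON_MESSAGES = {
    (1, 1): "A public PC is free on this floor",
    (1, 2): "An item you reserved is ready for collection",
    (2, 5): "Browsing fiction? The book group meets on Thursdays",
}

def on_beacon_detected(major, minor):
    # Called when the phone ranges a beacon; returns the message to show,
    # or None if this beacon has no message configured.
    return BEACON_MESSAGES.get((major, minor))

print(on_beacon_detected(1, 2))
```

The interesting part, of course, is not the lookup but deciding which messages are helpful rather than intrusive for a particular user in a particular spot.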

I wonder about how it might relate to the RFID tags in many libraries now and whether you could combine the technologies to use your phone to direct you round the library to find the book you wanted, and maybe to borrow it without ever needing to go near the self-service machines or a library checkout desk?

Photograph of sparrows in a barn doorway

It was Lorcan Dempsey who I believe coined the term ‘full library discovery’ in a blog post last year. As a stage beyond ‘full collection discovery’, ‘full library discovery’ adds in results drawn from LibGuides or library websites, alongside resource material from collections.  So, for example, a search for psychology might include psychology resources, as well as help materials for those psychology resources and contact details for the subject librarian that covers psychology.  Stanford and Michigan are two examples of that approach, combining lists of relevant resources with website results.

Princeton’s new All search feature offers a similar approach, discussed in detail on their FAQ.  This combines results from their Books+, Articles+, Databases, Library Website and Library Guides into a ‘bento box’ style results display.  Princeton’s approach is similar to the search from North Carolina State University, who I think were among the first to come up with this style.

Although in most of these cases I suspect that the underlying systems are quite different the approach is very similar.  It has the advantage of being a ‘loosely-coupled’ approach where your search results page is drawn together in a ‘federated’ search method by pushing your search terms to several different systems, making use of APIs and then displaying the results in a dashboard-style layout.  It has the advantage that changes to any of the underlying systems can be accommodated relatively easily, yet the display to your users stays consistent.
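A minimal sketch of that loosely-coupled pattern, with stub functions standing in for the real silo searches (in practice each would be an HTTP call to the relevant API – discovery layer, LibGuides, website search – and the names here are made up):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-silo search functions; real ones would call each
# system's API over HTTP and normalise the responses.
def search_catalogue(q):
    return [{"title": f"{q} handbook", "source": "Books+"}]

def search_articles(q):
    return [{"title": f"{q} review article", "source": "Articles+"}]

def search_guides(q):
    return [{"title": f"{q} subject guide", "source": "Guides"}]

SILOS = {
    "Books+": search_catalogue,
    "Articles+": search_articles,
    "Guides": search_guides,
}

def bento_search(q):
    # Fan the query out to every silo in parallel and keep the results
    # grouped, one 'box' per silo, rather than interleaving them into a
    # single relevance-ranked list.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, q) for name, fn in SILOS.items()}
        return {name: f.result() for name, f in futures.items()}

results = bento_search("psychology")
```

Because each silo is queried independently, swapping out one backend only means replacing its search function – the results page layout doesn’t change, which is exactly the resilience to underlying-system change described above.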

For me the disadvantages of this are the lack of any overall relevancy ranking for the material, and that it perpetuates the ‘siloing’ of content to an extent (Books, Articles, Databases etc), driven largely by the underlying silos of systems that we rely on to manage that content.  I’ve never been entirely convinced that users understand the distinction of what a ‘database’ might be.  But the approach is probably as good as we can get until we have truly unified resource management and more control over relevancy ranking.

Going beyond ‘full library discovery’
But ‘full library discovery’ is still very much a ‘passive’ search tool, and by that I mean that it isn’t personalised or ‘active’.  At some stage to use those resources a student will be logging in to that system and that opens up an important question for me.  Once you know who the user is, ‘how far should you go to provide a personalised search experience?’.  You know who they are, so you could provide recommendations based on what other students studying their course have looked at (or borrowed), you might even stray into ‘learning analytics’ territory and know what the resources were that the highest achieving students looked at.

You might know what resources are on the reading list for the course that student is studying – so do you search those resources first and offer those up as they might be most relevant?  You might even know what stage a student has got to in their studies and know what assignment they have to do, and what resources they need to be looking at.  Do you ‘push’ those to a student?

How far do you go in assembling a profile of what might be ‘recommended’ for a course, module or assignment, or what other students in the cohort might be looking at, or looked at the last time the course ran?  Do you look at students’ previous search behaviour?  How much of this might you do to build, and then search, some form of ‘knowledge base’ with the aim of surfacing material that is likely to be of most relevance to a student?  Searching for psychology in NCSU’s Search All box gives you the top three articles out of 2,543,911 in Summon, and likely behaviour is not to look much beyond the first page of results.  So should we be making sure that those are likely to be the most relevant ones?
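To make the ‘what peers on your course looked at’ idea concrete, here is a toy sketch of a cohort-based recommender – the loan records, names and courses are entirely invented, and a real system would draw on the circulation and eresource data discussed earlier:

```python
from collections import Counter

# Invented loan records: (student, course, item)
LOANS = [
    ("s1", "psych101", "Intro to Psychology"),
    ("s2", "psych101", "Intro to Psychology"),
    ("s2", "psych101", "Research Methods"),
    ("s3", "psych101", "Research Methods"),
    ("s4", "hist200", "Medieval Europe"),
]

def recommend(student, course, n=3):
    # Items this student has already used - no point recommending those
    seen = {item for s, c, item in LOANS if s == student}
    # Count how often other students on the same course used each item
    peer_items = Counter(
        item for s, c, item in LOANS
        if c == course and s != student and item not in seen
    )
    return [item for item, _ in peer_items.most_common(n)]

print(recommend("s1", "psych101"))  # → ['Research Methods']
```

Even this toy version shows the circularity problem mentioned below: once ‘Research Methods’ is recommended and borrowed, its count only grows.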

But then there’s serendipity: finding the different things that you haven’t looked for, or read, before, because they are new or different.  One of the issues with recommendations is the tendency for them to be circular – ‘what gets recommended gets read’, to corrupt the performance indicator mantra.  So how far do you go?  ‘Mind-reading search’, anyone?

Picture of flower

OK, so it’s the time of year to reflect back on the last year and look forward to the new year.

I’ve definitely blogged less (24 posts in 2013 compared with 37 in 2012 and 50 in 2011) – mind you, the ‘death of blogging’ has been announced, and there seem to be fewer library bloggers than in the past, so maybe blogging less just reflects a general trend.  Comments about blogging suggest that tumblr, twitter or snapchat are perhaps taking people’s attention (both bloggers’ and readers’) away from blogs.  But I’m not ‘publishing’ through other channels particularly, other than occasional tweets, so that isn’t the reason for me to blog less.  There has been a lot going on, but that’s probably not greatly different from previous years.  I think I’ve probably been to fewer conferences and seminars, particularly internal seminars, so that has been one area where I’ve not had as much to blog about.

To blog about something or not to blog about it
I’ve been more conscious of not blogging about some things that in previous years I probably would have blogged about.  I don’t think I blogged about the Future of Technology in Education conference this year, although I have done in the past – not because it wasn’t interesting, because it was, but perhaps from a sense that I’ve blogged about it before and might just be repeating myself.  With the exception of posts about website search and activity data I’ve not blogged so much about some of the work that I’ve been doing.  So I’ve blogged very little about the digital library work, although it (and the STELLAR project) was a big part of some of the interesting stuff that has been going on.

Thinking about the year ahead
I’ve never been someone that sets out predictions or new year resolutions.  I’ve never been convinced that you can predict (and plan) too far ahead in detail without too many variables fundamentally changing those plans.  There’s a quote, attributed to various people, along the lines that ‘no plan survives contact with the enemy’, and I’d agree with that sentiment.  However much we plan, we are always working with an imperfect view of the world.  Circumstances change and priorities vary and you have to adapt to that.  Thinking back to FOTE 2013, it was certainly interesting to hear BT’s futurologist Nicola Millard describe her main interest as being the near future, and being more a ‘soon-ologist’ than a futurologist.

What interests (intrigues, perhaps) me more is less the planning and more the ‘shaping’ of a future – more change management than project management, I suppose.  But I think it is more than that: how do the people who carve out a new ‘reality’ go about making that change happen?  Maybe it is about realising a ‘vision’, but assembling a ‘vision’ is very much the easy part of the process.  Getting buy-in to a vision does seem to be something that we struggle with in a library setting.

On with 2014
Change management is high on the list for this year.  We’ve done a certain amount of the ‘visioning’ to get buy-in to funding a change project.  So this year we have work to do to procure a complete suite of new library systems (the first time here, I think, for 12 years or so), in a project called ‘Library Futures’ that also includes some research into student needs from library search and the construction of a ‘digital skills passport’.  I’ve also got continuing work on digital libraries/archives as we move that work from development to live, alongside work with activity data, our library website and particularly work on integrating library stuff much more into a better student experience.  So hopefully some interesting things to blog about – and hopefully a few new pictures to brighten up the blog (starting with a nice flower picture from Craster in the summer).

The challenge of the moment was to put together a video, and although I’ve played around with videos I haven’t had much experience of creating and editing one.  So I thought it would be interesting to blog about the tools I’ve been using, how I’ve found them and what some of the challenges were.  One of the principles was to see what I could do with the standard office-type tools I had, plus what was available free on the web or downloadable.

Assembling the stuff
The starting point was to pull together some content from a variety of different sources, mainly videos and images.  For the videos one of the challenges was to get hold of copies so they could be edited locally.  One of the useful tools was KeepVid, which lets you download streaming video from some locations such as YouTube.  The first few times I tried to use it I ended up downloading iLivid, until I worked out to ignore the big coloured buttons – adverts that presumably pay for the tool to be free.  Once I’d worked out that all I needed to do was paste in the URL and click the grey Download button it worked really well, and it gives you a choice of several video formats.  I chose MP4 and saved the video locally.  (I don’t know why, but Firefox always annoys me by not letting me choose where to save a downloaded file, just putting it in the download folder from where I have to move it.)

KeepVid supports quite a range of streaming sites.  The FAQ lists


Which websites does KeepVid support?
Dailymotion, 4shared, 5min, 9you, Aniboom, Break, Clipser, CollegeHumor, Cracked, Current, eHow, eBaumsWorld, Ensonhaber, Facebook, Flickr, Flukiest, FunnyJunk, FunnyOrDie, Metacafe, MySpace, Ning, Photobucket, RuTube, SoundCloud, Stagevu, TED, Tudou, TwitVid, VBOX7, videobb, Veoh, Vimeo

Some videos couldn’t be downloaded using KeepVid, so after looking at the options I went with Camtasia Recorder 8 from TechSmith.  This has a 30-day free trial, which is enough time to play around with it and test it out.  Camtasia is screen recording software designed for screen capture and quite commonly used for creating learning activities, so it was something I’d come across before.  One of the things Camtasia allows you to do is capture activities on a screen – typically you might record navigating around a website – but for my purposes I’ve used it to capture a video playing on the screen.  Camtasia lets you select an area of the screen and also adjust the sound levels.  [Note: the sound levels are really important when it comes to editing your final video.]

I’ve also made use of Powerpoint as a means of creating some images to use between videos and images to try to tell a story and set some context from one sequence to the next.

Having created the slides I’ve then used Jing (again from TechSmith) to screen-capture each slide and turn them into .PNG image files to use in the video.  I could have used any number of different tools, but Jing is one I use all the time.  It just sits at the top of my desktop and I use it whenever I want to grab part of an image and save it to use elsewhere – so I use it all the time for images for this blog, for example.  It’s simple to select an area of the screen and capture it as an image.  Often I’ll use Jing in combination with a simple image editor to select an image and then resize or crop it.  Pretty basic stuff, but it gives me enough flexibility to tweak things without going to a more sophisticated and complicated tool.

Editing the stuff
Camtasia Studio screenshot
Having assembled much of the raw material and worked out a rough storyboard based on the original idea for the video, I’ve then gone back to Camtasia – this time Camtasia Studio 8 (with my 30-day free trial) – to edit the video.  One of the features of Camtasia Studio is that you can use it to edit your video extracts down to just the clips you want.  It’s a pretty powerful tool and I’m only scratching the surface of its functionality.  In retrospect, looking at the features, I’m pretty sure I could have used Camtasia Studio for the video editing stage in its entirety.

But I’d already started playing with another tool, Windows Movie Maker, to build my video.  Windows Movie Maker is available as a free download as part of Windows Essentials 2012 from Microsoft for Windows 7 and Windows 8.  I’d not used it at all before, and I must admit it was pretty straightforward to assemble a collection of clips and images into a video that tells a story.  It lets you edit your clips together and shows them as a succession of elements, very much like a film.  It’s even got a little skeuomorphic trick of showing the top and bottom film guide holes at the start and end of each of your clips.  (Incidentally, I notice this week news that skeuomorphism is out for Apple’s new iOS 7.)  It’s interesting that Windows Movie Maker also uses a film icon for each of your projects.

It’s quite simple to use but a surprisingly powerful tool.  It’s easy to add videos and images, and you can add sound tracks or music and fine-tune the length that still images are shown on screen.  You can use a number of transition tools to manage the changeover from one screen to the next and achieve some almost cinematic effects (with the temptation to do far too much).  You can also adjust your music track to fit your video and fade in and out.  There are also scrolling titles and credits features where you can determine how your closing credits will appear (or replicate the sci-fi big block of text scrolling up the screen and disappearing off into the distance … I resisted the temptation!).

There is also quite a good set of tools for uploading to several video sharing sites or packaging your video for use, although if you just want to save a video file then MP4 and Windows Media seem to be the main format options.  Overall, though, I was really impressed with the ease of use of the tool.

It was easier than I thought it might be to get something that looks OK: a sequence of video and still images cut together with reasonable transitions, start and end titles, and overlaid music.  I was impressed at the range of tools out there that are essentially free (once you’ve invested in a reasonable-spec PC, Windows 7, MS Office and fast internet access – so there’s a barrier at that level).  Camtasia is something to follow up and learn more about.  You can go quite a long way with easily available tools without spending a lot of money on a high-specification one.

Saving the file to create your final output takes some time, even on a pretty good specification laptop.  And file sizes are large (150MB for something around 8 minutes) – but maybe 150MB isn’t a large file now that 1TB external drives are pretty cheap.  Editing the audio, and particularly getting the sound levels right, is quite challenging.  Where you have a mix of videos with soundtracks already on them it isn’t so straightforward to get everything at the right level, so a better sound-editing tool might have been good.  How easy it would be to extract all the sound and re-record it I’m not sure – something else to learn.

But a good learning opportunity and interesting to work through what you can do.

It was great to see this week that the latest opportunity on the Jisc Elevator website is one for students to pitch new technology ideas.  It’s really nice to see something that involves students in coming up with ideas, backed up with a small amount of money to kickstart things.

Using students as co-designers for library services, particularly in relation to websites and technology, is something that I’m finding more and more compelling.  A lot of the credit for that goes to Matthew Reidsma from Grand Valley State University in the US, whose blog ‘Good for whom?‘ is pretty much essential reading if you’re interested in usability and improving the user experience.  I’m starting to see that getting students involved in co-designing services is the next logical step on from usability testing.  So instead of a process where you design a system and then test it on users, you involve them from the start: asking them what they need, perhaps getting them to feed back on solution designs and specifications, and then getting them to look at every stage of the design process of prototyping, testing and iterating – something that an agile development methodology particularly lends itself to.  Examples are also starting to appear where people have employed students on the staff to help with getting that student ‘voice’.

There are some examples of fairly recent projects where Universities have been getting students (and others outside the institution) involved in designing services, so for example the Collaborate project at Exeter that looked at using students and employers to design ’employability focussed assessments’.  There is also Leeds Metropolitan with their PC3 project on the personalised curriculum and Manchester Metropolitan’s ‘Supporting Responsive Curricula’ project.    And you can add to that list of examples the Kritikos project at Liverpool that I blogged about recently.

For us, with our focus on websites and improving the user experience, we’ve been working with a group of students to help us design some tools for a more personalised library experience.  I blogged a bit about it earlier in the year.  We’re now well into that programme of work and have put together a guest blog post for Jisc’s LMS Change project blog, ‘Personalisation at the Open University’.  Thanks to Ben Showers from Jisc and Helen Harrop from the LMS Change project for getting that published.  Credit for the work on this (and the text for the blog post) should go to my colleagues Anne Gambles, Kirsty Baker and Keren Mills.  Having identified some key features to build, we are well into finalising the specification for the work and will start building the first few features soon.  It’s been an interesting first foray into working with students as co-designers and one I think has major potential for how we do things in the future.
