
At the end of November I was at a different sort of conference to the ones I normally get to attend.  This one, Design4learning, was held at the OU in Milton Keynes, but was a more general education conference.  Described as a conference that "aims to advance the understanding and application of blended learning, design4learning and learning analytics", Design4learning covered topics such as MOOCs, elearning, learning design and learning analytics.

There was a useful series of presentations at the conference and several of them are available from the conference website.  We'd put together a poster for the conference about the work we've started to do in the library on 'library analytics' – entitled 'Learning Analytics – exploring the value of library data' – and it was good to talk to a few non-library people about the wealth of data that libraries capture and how that can contribute to the institutional picture of learning analytics.

Our poster covered some of the exploration that we’ve been doing, mainly with online resource usage from our EZProxy logfiles.  In some cases we’ve been able to join that data with demographic and other data from surveys to start to look in a very small way at patterns of online library use.
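As a very rough sketch of the kind of joining involved – the column names and data here are invented for illustration, not our actual EZProxy fields or survey schema – assuming pandas:

```python
import pandas as pd

# Toy stand-ins for what would come from EZProxy logs and a survey
# export; field names and values are illustrative only.
logs = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3"],
    "resource": ["jstor", "scopus", "jstor", "scopus"],
})
survey = pd.DataFrame({
    "user_id": ["u1", "u2", "u3"],
    "faculty": ["Arts", "Science", "Science"],
})

# Join usage events with demographic data on an anonymised user id,
# then count resource accesses per faculty
usage = logs.merge(survey, on="user_id", how="left")
per_faculty = usage.groupby("faculty")["resource"].count()
print(per_faculty.to_dict())  # {'Arts': 2, 'Science': 2}
```

Even at this toy scale the pattern is the one we used: usage events on one side, demographic attributes on the other, joined on an anonymised identifier.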

Design4learning conference poster v3

The poster also highlighted the range of data that libraries capture and the sorts of questions that could be asked and potentially answered.  It also flagged up the leading-edge work by projects such as Huddersfield’s Library Impact Data Project and the work of the Jisc Lamp project.

An interesting conference and an opportunity to talk with a different group of people about the potential of library data.

Photograph of office buildings at Holborn Circus

Holborn Circus – I was struck by the different angles of the buildings


For me, two big themes came to mind after this year's Future of Technology in Education Conference (FOTE): firstly, creativity, innovation and co-creation; and secondly, how fundamental data and analytics are becoming.

Creativity, innovation and co-creation

Several of the speakers talked about innovation and creativity.  Dave Coplin talked of the value of Minecraft and Project Spark and the need to create space for creativity, while Bethany Koby showed us examples of some of the maker kits ‘Technology Will Save Us’ are creating.

Others talked of 'flipping the classroom' and learning from students, as well as co-creation, and it was interesting that in the tech start-up pitchfest a lot of the ideas were student-created tools, some working in the area of collaborative learning.

Data and analytics

The second big trend for me was analytics and data.  I was particularly interested to see how many of the tools and apps being pitched at the conference had an underlying layer of analytics.  Evaloop, which was working in the area of student feedback; Knodium – a space for student collaboration; another pitch offering interaction and sharing tools for video content; Unitu – an issues-tracking tool; and MyCQs – a learning tool – all seemed to make extensive use of data and analytics, while Fluency included teaching analytics skills.  It is interesting to see how many app developers have learnt the lesson of Amazon and Google about the value of the underlying data.

Final thoughts and what didn’t come up at the conference

I didn't hear the acronym MOOC at all – slightly surprising as it was certainly a big theme of last year's conference.  Has the MOOC bubble passed, or is it just embedded into the mainstream of education?  Similarly with Learning Analytics as a specific theme: analytics and data were certainly mentioned (as I've noted above), but Learning Analytics itself didn't get a mention – maybe it's embedded into HE practice now?

Final thoughts on FOTE: a different focus to previous years, but still with some really good sessions and the usual parallel social-media back-channels full of interesting conversations.  Given that most people arrived with at least one mobile device, power sockets to recharge them were in rather short supply.

FOTE 2012 app screenshot

"When did someone from Amazon last come round to your door and say sorry, we've changed the interface, would you like some training?" (Dave Coplin from Bing, at FOTE 2012).

I’ve blogged before about the idea that you shouldn’t have to give your users training for them to be able to use your website, so it was quite interesting to hear someone from a large IT company like Bing say pretty much the same thing at FOTE the other week.  And Dave Coplin’s presentation is worth catching up with on the FOTE mediasite (link at the bottom of this blog post).

It was my second time at FOTE, and last time one of my reflections was on the amount of effort they had put into producing android and iOS apps for the conference.  There was a similar set of apps this year, in green rather than yellow, and it was certainly good to have everything together in a nice neat app.  One thing I did notice, though, was that the attendance list in the app was a bit sparse with names – presumably people had to opt in to have their names included.  In some ways that was a shame, as it made it difficult to find out who was there – I only realised that someone who works in the same building as me was at the conference when they asked a question from the audience.  Although a lot of conference networking these days takes place on social networks, mainly twitter and Google Plus, while the conference is happening, it's still good to have access to a list of delegates.

Learning Analytics
The first presentation, by Cailean Hargrave from IBM, talked largely about their work in the area of Learning Analytics, using an example from FE. It was really interesting to see a fully worked-through example of the power and reach of learning analytics.  Seeing the tool being used to drive a portal for staff, students and employers throughout the student journey was fascinating.  The examples of how it could be used to make suggestions to students on what they might do to improve their grades were eye-opening, and touched on some of the potentially scary elements of Learning Analytics.  It goes a long way beyond recommendations into areas where you are trying to shape particular behaviours, and touches on some of the ethical issues that have been raised about learning analytics.

Research Data
I was also really interested to hear about Figshare, a cloud-based repository for researchers' data that plays into the whole open research data agenda, mentioning the recent Royal Society 'Science as an open enterprise' paper and the push by funders towards open access to research data.  The system seems to be supported through a tie-up with an academic publisher, and it will be really interesting to see whether this is a sustainable model.  It's certainly another option for researchers, and at a time when many institutions are still gearing themselves up to deliver research data management systems it's an interesting alternative solution.

For a short one-day conference FOTE packed in a wide range of content, from ipads in learning, through game-based learning, to ebooks and a debate on the hot topic of MOOCs (Massive Open Online Courses).  Some good things to take away from the day.

Presentations from FOTE are all available from:

Photograph of documents from ALT-C

A quick trip to Manchester yesterday to take part in a Symposium at ALT-C on 'Big Data and Learning Analytics' with colleagues from the OU (Simon Buckingham Shum, Rebecca Ferguson, Naomi Jeffrey and Kevin Mayles) and Sheila MacNeill from JISC CETIS (who has blogged about the session here).

It was the first time I'd been to ALT-C, and it was just a flying visit on the last morning of the conference, so I didn't get the full ALT-C experience.  But I got the impression of a really big conference, well organised and with lots of different types of sessions going on.  There were 10 sessions taking place at the time we were on, including talks from invited speakers, so lots of choice of what to see.

But we had a good attendance at the session, with a good mix of people and a good debate and questions during the symposium.  Trying both to summarise an area like Learning Analytics and to give people an idea of the range of activities going on is tricky in a one-hour symposium, but hopefully we gave enough of an idea of some of the work taking place and some of the issues and concerns that there are.

Cross-over with other areas
Sheila had a slide pointing out the overlaps between the Customer Relationship Management systems world, Business Intelligence and Learning Analytics, and it struck me that there's also another group in the Activity Data world that crosses over.  Much of the work I mentioned (RISE and Huddersfield's fantastic work on Library impact) came out of JISC's Activity Data funding stream, and some of the synthesis project work has been 'synthesised' into a website, 'Exploiting activity data in the academic environment'.  Many of the lessons learnt listed there, particularly around what you can do with the data, are equally relevant to Learning Analytics.  JISC are also producing an Activity Data report in the near future.

Interesting questions
A lot of the questions in the session were as much about the ethics as the practicality.  Particularly interesting was the idea that Learning Analytics risks encouraging a view that so much can be boiled down to a set of statistics – which sounded a bit like norms to me.  The sense-making element seems to be really key, as with so much data and statistics work.

I'd talked a bit about also being able to use the data to make recommendations, something we had experimented with in the RISE project. It was interesting to hear views about the danger of recommendations reducing rather than expanding choice: as people are encouraged to select from a list of recommendations, their selections reinforce those recommendations, leading to a loop.  If you are making recommendations based on what people on a course looked at then I'd agree that this is a risk, especially as there's a strong chance that people are often going to be looking at resources they have to look at for their course anyway.

When it comes to other types of recommendations (such as 'people looking at this article also viewed this other article', or 'people searching for this term looked at these items') there is still some chance of recommendations reinforcing a narrow range of content, but I'd suggest there is also some chance of serendipitous discovery of material you might not ordinarily have seen.  I'm aware that we've very much scratched the surface with recommendations, using simple algorithms designed around the idea that the more people who shared a viewing pattern, the better the recommendation.  But it may be that more complex algorithms that throw in some 'randomness' would be useful.
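To illustrate the sort of simple co-occurrence approach described above, with a small dose of randomness thrown in, here is a minimal sketch – the session data, item names and function are invented for illustration, not the RISE implementation:

```python
import random
from collections import Counter

# Each session is the set of articles one user viewed together;
# invented data for illustration.
sessions = [
    {"a1", "a2", "a3"},
    {"a1", "a2"},
    {"a2", "a4"},
    {"a1", "a5"},
]

def recommend(item, k=2, explore=0.0):
    # Count how often other items co-occur with `item` in a session:
    # the more sessions share the pattern, the stronger the recommendation.
    counts = Counter()
    for s in sessions:
        if item in s:
            counts.update(sorted(s - {item}))  # sorted for stable tie order
    ranked = [i for i, _ in counts.most_common()]
    picks = ranked[:k]
    # With probability `explore`, swap the last pick for a random
    # lower-ranked item, to loosen the reinforcement loop a little.
    if ranked[k:] and random.random() < explore:
        picks[-1] = random.choice(ranked[k:])
    return picks

print(recommend("a1"))  # 'a2' comes first: it co-occurs with a1 most often
```

Setting `explore` above zero is one crude way of injecting the 'randomness' mentioned above, trading a little recommendation strength for a chance of serendipity.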

One of the elements I think that is useful about the concept of recommendations is that people largely accept them (and perhaps expect them) as they are ubiquitous in sites like Amazon.  And I wonder if you could almost consider them as a personalisation feature that indicates that your service is modern and up-to-date and is engaging with users.  For many library systems that still look to be old-fashioned and ‘librarian’-orientated then perhaps it is equally important to be seen to have these types of features as standard.

Update: Slides from the introductory presentation are here

Latest project
From February I'm going to be involved in a new project, STELLAR (Semantic Technologies Enhancing the Lifecycle of LeArning Resources), funded by JISC.  The project connects with previous work I've been involved with: with the Lucero project, in that it will be employing linked data; and with learning materials, in that I've had some involvement with our production and presentation learning systems through the VLE.  But STELLAR will be dealing with a different area for me, in that we'll be looking at my institution's store of legacy learning materials.  So it's a good opportunity to learn more about curation, preservation and digital lifecycles.

STELLAR is particularly going to look at trying to understand the value of those legacy learning materials by talking to the academics who were involved in creating them.  There are quite a few reasons why older course materials may still have value: they might be reusable in new courses, on the basis that reusing old materials might be less costly than creating new ones; they might be transformable into Open Educational Resources; or they might be good historical examples of styles of teaching and learning.  So STELLAR will be exploring different types and models of expressing the value of those materials.

Finding out about the value that is placed on these materials can also be an important factor when trying to understand which materials to preserve as a priority, or where you should expend your resources, and we’d hope that STELLAR would help to inform HE policies as institutions build up increasing amounts of digital learning materials.

As part of STELLAR we will be taking some digital legacy learning material and transforming it into linked data (with some help from our friends in KMi). This gives us the opportunity to connect old course materials into the OU's ecosystem by linking to existing datasets on current courses and OER material in OpenLearn.  By transforming the content in this way we can then explore whether making it more discoverable changes the value proposition, makes the content more likely to be reused or opens up other possibilities.  It should be an interesting project and one that I'm looking forward to, as there are going to be a lot of opportunities to build up my understanding of these issues and aspects.

I was at the Future of Technology in Education conference on Friday.  Run by ULCC at Senate House in London, it was well attended with over 300 people there.  It was the first time I’d been to FOTE.  It had been recommended as a really good conference so I was interested to see what it was like (and it was a good chance to get out of the office and stop thinking about the new library website for a day).

Reflections on the day

It was a good conference and I'd hope to go again. I'll probably blog later about my reflections on the content of the conference itself, as there were a few things about the way FOTE was run that I found really interesting.  Firstly, it was pretty much paper-free, with the exception of the name badge, which unfolded to reveal the conference agenda and details of the conference hashtags.  That was a really neat approach: no A4 printed agendas or bulky bits of paperwork to carry around with you.  The only other bit of paper was a playing card for their ice-breaker game.

What was really novel was the creation of a set of FOTE mobile and web apps, for iphones, ipads, android and the web.

FOTE ipad app screenshot
These had the delegate lists, agenda and details of the speakers, the conference location and sponsors, as well as feeds for the FOTE blog, comments and twitter.  They even included the delegate survey for the conference and the voting for one of the sessions.  I'm really very impressed with the thought that went into this approach.  It's the first technology conference I've been to that seemed to have grasped that, as the audience was going to come armed with an array of ipads, laptops and smartphones, giving them bits of paper to carry around wasn't the right message. I wonder how the cost of creating the apps compares with the cost of printing the various bits of paperwork or conference packs that people traditionally give out, but it was a really impressive thing to do and it would be good to see it taken up by other tech conferences.

My second reflection was that when you got to the conference venue, the wifi access code and links to the various conference apps were up on posters and displayed with QR codes, making it really easy to link from a smartphone.  That was a good touch.  Conference wifi access was pretty good and reliable considering the number of devices in the room – I suspect there were more wifi-connected devices than delegates.

Final thought was about the twitterwall used for part of the conference sessions.  The transition from one tweet to another was eye-catching: the previous tweet would clear, often with its letters falling down the screen, leaving just the letters in common with the next tweet, which would then appear.  It was a good visual effect, although possibly a bit distracting from what was going on in the session.

I do find it fascinating the way that different universities approach the wifi access issue for conference delegates.  ULCC had a separate conference wifi SSID and what seemed to be a daily access code.  But there seem to be a few different approaches.  Maybe something to blog about another time.

When you hear about an educational technology project described as being inspired by Treasure Hunt, '… but without the helicopter', you know that it's probably something slightly out of the ordinary.  And 'Out There and In Here' is certainly an interesting experiment in using educational technology in some innovative ways.  This week's Coffee Morning session from Anne Adams and Tim Coughlan from the Institute of Educational Technology demonstrated a fascinating approach to carrying out geology field trips, covering the 'Out There and In Here' project, a collaboration between IET, KMi, the Pervasive Lab, the Science Faculty, Microsoft and OOKL.

Out There and In Here project

Essentially, the 'Out There and In Here' project looked at carrying out a geology field trip with two teams of postgrad students and instructors: one team located back at base (the 'In Here' team), the other out in the field ('Out There').  In part the project was aiming to look at alternatives to field trips, which can be expensive, logistically difficult and not suitable for all students.  But it was also looking at the way the teams interacted, how the dynamics of the learning experience were changed, and how technology can support the learning.

Using laptops, phones and video cameras, the teams tried to work together to establish and test several hypotheses.  The 'Out There' team used cameras and laptops to record images and data that could be accessed by the team back at base. The 'In Here' team used projectors, resource tables and an interactive tabletop to keep track of what was going on.  It was interesting to see how the groups worked together and the dynamics at play.  Both teams seemed to find the exercise challenging, and it led to a very different learning experience, particularly in the way it forced the participants to reflect on what they were doing.  There seemed to be considerable potential for misunderstanding and miscommunication, and the project team are looking at how other technologies could help support this type of exercise.

It was a fascinating approach, and it's interesting how mobile technology now allows this real-time interaction to take place.  I suppose the most obvious exponent of this type of real-time interaction is the military, where video surveillance, radio and global positioning systems are increasingly used to allow commanders to direct operations remotely.  While geology field trips aren't going to have the range of technology the military has at its disposal, I wonder if some of the military's experience with these systems might hold lessons for this type of project in the education sector.

I also started to think about how this type of technology might connect with the work that academic libraries do.  There are a couple of areas that come to mind: firstly, the management of the data being created by the exercise; and secondly, facilitating access to data or information that might be of use to either team.  Is it too much of a stretch to envisage this sort of exercise being supported by a remote librarian who can help with the stream of data coming from and going to the teams, ensuring that data is curated appropriately and connections are made with other data that may be of use?  All in all, a fascinating and thought-provoking session.
