
[Photograph: office buildings at Holborn Circus – I was struck by the different angles of the buildings]

Themes

For me, two big themes came to mind after this year’s Future of Technology in Education Conference (FOTE): firstly, creativity, innovation and co-creation; and secondly, how fundamental data and analytics are becoming.

Creativity, innovation and co-creation

Several of the speakers talked about innovation and creativity.  Dave Coplin talked of the value of Minecraft and Project Spark and the need to create space for creativity, while Bethany Koby showed us examples of some of the maker kits ‘Technology Will Save Us’ are creating.

Others talked of ‘flipping the classroom’, learning from students, and co-creation. It was interesting that in the tech start-up pitchfest a lot of the ideas were student-created tools, some working in the area of collaborative learning.

Data and analytics

The second big trend for me was analytics and data. I was particularly interested to see how many of the tools and apps being pitched at the conference had an underlying layer of analytics. Evaloop, working in the area of student feedback; Knodium, a space for student collaboration; Reframed.tv, offering interaction and sharing tools for video content; Unitu, an issues-tracking tool; and MyCQs, a learning tool, all seemed to make extensive use of data and analytics, while Fluency included teaching analytics skills. It is interesting to see how many app developers have learnt the lesson of Amazon and Google about the value of the underlying data.

Final thoughts and what didn’t come up at the conference

I didn’t hear the acronym MOOC at all – slightly surprising, as it was certainly a big theme of last year’s conference. Has the MOOC bubble passed, or is it just embedded into the mainstream of education? Similarly Learning Analytics (as a specific theme): certainly analytics and data were mentioned (as I’ve noted above), but Learning Analytics itself got not a mention – maybe it’s embedded into HE practice now?

Final thoughts on FOTE: a different focus to previous years, but still with some really good sessions and the usual parallel social media back-channels full of interesting conversations. Given that most people arrived with at least one mobile device, power sockets to recharge them were in rather short supply.

To Birmingham at the start of last week for the latest Jisc Library Analytics and Metrics Project (http://jisclamp.mimas.ac.uk/) Community Advisory and Planning group meeting. This was a chance to catch up with both the latest progress and the latest thinking about how this library analytics and metrics work will develop.

At a time when learning analytics is a hot topic, it’s highly relevant for libraries to consider how they might respond to its challenges. The 2014 Horizon report has learning analytics in the category of one year or less to adoption and describes it as ‘data analysis to inform decisions made on every tier of the education system, leveraging student data to deliver personalized learning, enable adaptive pedagogies and practices, and identify learning issues in time for them to be solved.’

LAMP is looking at library usage data of the sort that libraries collect routinely (loans, gate counts, e-resource usage), but combines it with course, demographic and achievement data to allow libraries to start analysing and identifying trends and themes in the data.
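
To make the idea concrete, here is a minimal sketch of the kind of join this implies – the field names and figures are entirely invented, not LAMP’s actual data model:

    # Toy join of routine library usage data with course/achievement data
    # (invented field names; LAMP's real schema may differ).
    usage = [
        {"user": "u1", "loans": 24, "eresource_hits": 310},
        {"user": "u2", "loans": 2, "eresource_hits": 15},
    ]
    students = {
        "u1": {"course": "History BA", "final_mark": 68},
        "u2": {"course": "History BA", "final_mark": 52},
    }

    combined = [dict(u, **students[u["user"]]) for u in usage if u["user"] in students]
    for row in combined:
        print(row["course"], row["loans"], row["final_mark"])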

LAMP will build a tool to store and analyse data and is already working with some pilot institutions to design and fine-tune it. We got to see some of the work so far, give input on some of the wireframes and concepts, and hear about some of the plans for the next few months.

The day was also the chance to hear from the developers of a reference management tool called RefMe (www.refme.com). This referencing tool is aimed at students, who often struggle with the typically complex requirements of referencing styles and tools. To hear about one-click referencing, with thousands of styles and with features to integrate with MS Word, or to scan a barcode and reference a book, was really good. RefMe is available as an iOS or Android app and as a desktop version. As someone who’s spent a fair amount of time wrestling with the complexities of referencing in projects that have tried to get simple referencing tools in front of students, it is really good to see a start-up tackling this area.

I picked up over the weekend via the No Shelf Required blog that EBSCO Discovery usage data is now being added into Plum Analytics.    EBSCO’s press release talks about providing “researchers with a much more comprehensive view of the overall impact of a particular article”.   Plum Analytics have fairly recently been taken over by EBSCO (and here) so it’s not so surprising that they’d be looking at how EBSCO’s data could enhance the metrics available through Plum Analytics.

It’s interesting to see the different uses that activity data in this sphere can be put to. There are examples of it being used to drive recommendations, such as hot articles or Automated Contextual Research Assistance, and LAMP is talking of using activity data for benchmarking purposes. So you’re starting to see a clutch of services being driven by activity data, just as the likes of Amazon drive so much of what appears on their sales site with data derived from customer activity.
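
As a rough illustration of the ‘people who viewed this also viewed’ style of service that such data makes possible (none of the services above publish their algorithms, so this is a generic toy example):

    from collections import Counter
    from itertools import combinations

    # Each session is the set of articles one user viewed (made-up IDs).
    sessions = [{"a1", "a2", "a3"}, {"a1", "a2"}, {"a2", "a4"}]

    co_views = Counter()
    for viewed in sessions:
        for x, y in combinations(sorted(viewed), 2):
            co_views[(x, y)] += 1
            co_views[(y, x)] += 1

    def also_viewed(article, n=3):
        scored = [(b, c) for (a, b), c in co_views.items() if a == article]
        return [item for item, _ in sorted(scored, key=lambda t: -t[1])[:n]]

    print(also_viewed("a1"))  # ['a2', 'a3']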

Infographics and data visualisations seem to be very popular at the moment, and for a while I’ve been keeping an eye on visual.ly as they have some great infographics and data visualisations. One of the good things about the visual.ly infographics is that there is some scope to customise them. So, for example, there is one about the ‘Life of a hashtag’ that you can customise, and several others around Facebook and Twitter that you can use.

I picked up on Twitter the other week that they had just brought out a Google Analytics infographic. That immediately got my interest, as we make a lot of use of GA. You just point it at your site through your Google Analytics account and then get a weekly email, ‘Your weekly insights’, created dynamically from your Google Analytics data.

It’s a very neat idea and quite a useful promotional tool to give people a quick snapshot of what is going on. So you get Pageviews over the past three weeks, the trends for New and Returning Visitors, and reports on Pages per visit and Time on site and how they have changed in the past week.

It’s quite useful for social media traffic, showing how Facebook and Twitter traffic has changed over the past week. As these are channels you often want quick feedback on, it is a nice visual way of showing what difference a particular activity might have had.

Obviously, as a free tool there’s a limit to the customisation you can do. It might be nice to have visits or unique visitors to measure change in use of the site, or your top referrals, or the particular pages that have been used most frequently. The time period possibly makes it less useful for me, in that I’m more likely to want to compare against the previous month (or even this month last year). But no doubt visual.ly would build a custom version for you if you wanted something particular.

But as a freely available tool it’s a useful thing to have. The infographic is nicely designed and gives a visually appealing view of analytics data that can otherwise be difficult to present to audiences who don’t understand the intricacies of web analytics.

The Google Analytics Visual.ly infographic is at https://create.visual.ly/graphic/google-analytics/

Encouraged by some thinking about what sort of prototype resource-usage tools we want to build to test with users in a forthcoming ‘New tools’ section, I’ve been starting to think about what sort of features you could offer library users to let them take advantage of library data.

Early steps
For a few months we’ve been offering users of our mobile search interface (which just does a search of our EBSCO discovery system) a list of their recently viewed items and their recent searches. The idea behind testing it on a mobile device was that giving people a link to their recent searches or items viewed would make it easier for them to get back to things they had accessed on their mobile device by clicking a single link, rather than having to bookmark them or type in fiddly links. At the moment the tool just lists the resources and searches you’ve accessed through the mobile interface.

But our next step is to make a similar tool available through our main library website as a prototype of ‘articles I’ve viewed’. And that’s where we start to wonder whether the mobile version of the searches/results should be kept separate from the rest of your activities, or whether user expectations would be that, like a Kindle ebook that you can sync across multiple devices, your searches and activity should be consistent across all platforms.

At the moment our desktop version has all your viewed articles, regardless of the platform you used. But users might in future want to know which device they used to access the material, perhaps because some material isn’t easily accessible through a mobile device. That opens up another question: the mobile version and the desktop version may be at different URLs, so you might want them pulled together as one resource, with automatic detection of your device when you go to access it.
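
One way to square that circle, sketched below with invented names, is to store a single per-user history and record the device as an attribute, so the same list can be shown everywhere or filtered per device:

    from itertools import count

    _seq = count()
    history = {}  # user_id -> list of view events, newest last

    def record_view(user_id, item_url, device):
        history.setdefault(user_id, []).append(
            {"url": item_url, "device": device, "order": next(_seq)}
        )

    def recent_items(user_id, device=None, limit=10):
        events = history.get(user_id, [])
        if device:
            events = [e for e in events if e["device"] == device]
        return sorted(events, key=lambda e: e["order"], reverse=True)[:limit]

    record_view("u1", "http://example.org/article/1", "mobile")
    record_view("u1", "http://example.org/article/2", "desktop")
    print([e["url"] for e in recent_items("u1")])            # all devices
    print([e["url"] for e in recent_items("u1", "mobile")])  # mobile only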

Next steps
With the data about what resources are being accessed and what library web pages are being accessed it starts to open up the possibility of some more user-centred use of library activity and analytics data.

So you could conceive of spotting that there is a spike of users accessing the Athens problems FAQ page and tying that to users trying to access Athens-authenticated resources. Being able to match activity with students being on a particular module could allow you to automatically push more targeted help material, maybe into the VLE website for relevant modules, as well as flag a potential issue to the technical and helpdesk teams.
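
A very crude version of that spike detection could look like the sketch below; the window, threshold and figures are arbitrary illustrative choices:

    from statistics import mean

    def is_spike(daily_hits, window=7, factor=3.0):
        # daily_hits: counts per day, most recent last.
        if len(daily_hits) <= window:
            return False
        baseline = mean(daily_hits[-window - 1:-1])
        return daily_hits[-1] > factor * max(baseline, 1)

    faq_hits = [12, 9, 14, 11, 10, 13, 12, 55]
    if is_spike(faq_hits):
        print("Possible Athens authentication problem - alert the helpdesk")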

You could also contemplate mining reading lists and course schedules to predict when particular activities are scheduled, and automatically push relevant help, support or online tutorials to students. Some of the most interesting areas seem to me to be around building skills, and using activity (or lack of activity) to trigger promotion of targeted skills-building activities. So, knowing that students on module X should be doing an activity that involves looking at a particular set of resources, you could detect the students who haven’t accessed those resources and offer them some specific help material, or even contact from a librarian. Realistically those sorts of interventions simply couldn’t be managed manually and would have to rely on some form of learning analytics-type trigger system.
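
A trigger of that sort could start as something as simple as comparing the resources a module expects against the ones each student has touched; the module codes and resource IDs here are invented:

    # Resources each module expects students to have looked at, versus
    # what each student has actually accessed (all identifiers invented).
    expected = {"moduleX": {"r1", "r2", "r3"}}
    accessed = {"student1": {"r1", "r2"}, "student2": set()}

    def needs_nudge(module, min_fraction=0.5):
        wanted = expected[module]
        return [s for s, seen in accessed.items()
                if len(seen & wanted) / len(wanted) < min_fraction]

    for student in needs_nudge("moduleX"):
        print(f"Offer {student} the skills material for moduleX")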

One of the areas that would be useful to look at would be some form of student dashboard for library engagement. This could give students some data about the engagement they have had with the library, e.g. resources accessed, library skills completed, library badges gained, library visits, books/ebooks borrowed etc. Maybe set that against averages for their course, and perhaps some metrics about what high-achieving students on the previous presentation of their course did. Add to that a bookmarking feature, lists of recent searches and resources used, and lists of loans/holds. Finish it off with useful library contacts and some suggested activities that might help them with their course, based on what is known about the level of library skills the course needs.
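
The data behind such a dashboard could begin as nothing more than a student’s figures set against their course averages, along these lines (all numbers invented):

    # One student's engagement set against course averages (numbers invented).
    course_avg = {"resources_accessed": 40, "library_visits": 12, "loans": 8}
    student = {"resources_accessed": 22, "library_visits": 15, "loans": 3}

    for metric, avg in course_avg.items():
        mine = student[metric]
        marker = "above" if mine >= avg else "below"
        print(f"{metric}: you {mine}, course average {avg} ({marker} average)")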

Before you can do some of the more sophisticated learning analytics-type activities, I suspect it would be necessary to have a better understanding of the impact that library activities/skills/resources have on student retention and achievement. And that seems to me to argue for some really detailed work to understand library impact at a ‘pedagogic’ level.

[Photograph of documents from ALT-C]

A quick trip to Manchester yesterday to take part in a symposium at ALT-C on ‘Big Data and Learning Analytics’ with colleagues from the OU (Simon Buckingham Shum, Rebecca Ferguson, Naomi Jeffrey and Kevin Mayles) and Sheila MacNeill from JISC CETIS (who has blogged about the session here).

It was the first time I’d been to ALT-C, and it was just a flying visit on the last morning of the conference, so I didn’t get the full ALT-C experience. But I got the impression of a really big conference, well organised and with lots of different types of sessions going on. There were 10 sessions taking place at the time we were on, including talks from invited speakers, so lots of choice of what to see.

But we had a good attendance at the session, with a good mix of people and good debate and questions during the symposium. Trying both to summarise an area like Learning Analytics and to give people an idea of the range of activities going on is tricky in a one-hour symposium, but hopefully we gave enough of an idea of some of the work taking place and some of the issues and concerns that there are.

Cross-over with other areas
Sheila had a slide pointing out the overlaps between the Customer Relationship Management world, Business Intelligence and Learning Analytics, and it struck me that there’s also another group in the Activity Data world that crosses over. Much of the work I mentioned (RISE and Huddersfield’s fantastic work on library impact) came out of JISC’s Activity Data funding stream, and some of the synthesis project work has been ‘synthesised’ into a website, ‘Exploiting activity data in the academic environment’ (http://www.activitydata.org/). Many of the lessons learnt listed there, particularly around what you can do with the data, are equally relevant to Learning Analytics. JISC are also producing an Activity Data report in the near future.

Interesting questions
A lot of the questions in the session were as much about ethics as practicality. Particularly interesting was the idea that Learning Analytics risks encouraging a view that so much can be boiled down to a set of statistics – which sounded a bit like norms to me. The sense-making element seems to be really key, as with so much data and statistics work.

I’d also talked a bit about being able to use the data to make recommendations, something we had experimented with in the RISE project. It was interesting to hear views about the danger of recommendations reducing rather than expanding choice: as people are encouraged to select from a list of recommendations, those selections reinforce the recommendations, leading to a loop. If you are making recommendations based on what people on a course looked at, then I’d agree that is a risk, especially as people are often going to be looking at resources that they have to look at for their course anyway.

When it comes to other types of recommendation (such as ‘people looking at this article also viewed this other article’, or ‘people searching for this term looked at these items’) there is still some chance of recommendations reinforcing a narrow range of content, but I’d suggest there is also some chance of serendipitous discovery of material that you might not ordinarily have seen. I’m aware that we’ve very much scratched the surface with recommendations, and used simple algorithms designed around the idea that the more people who followed a pattern, the better the recommendation. But it may be that more complex algorithms that throw in some ‘randomness’ might be useful.
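
One crude way to add that randomness – a sketch, not anything we have deployed – is to occasionally swap one recommendation slot for a random item from the wider pool:

    import random

    def recommend(top_items, all_items, n=3, serendipity=0.1):
        # Mostly return the top co-viewed items, but occasionally give
        # the last slot to a random item from the wider pool.
        picks = list(top_items[:n])
        if picks and random.random() < serendipity:
            pool = [i for i in all_items if i not in picks]
            if pool:
                picks[-1] = random.choice(pool)
        return picks

    print(recommend(["a2", "a3", "a7"], ["a1", "a2", "a3", "a7", "a9"]))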

One of the elements I think is useful about the concept of recommendations is that people largely accept them (and perhaps expect them), as they are ubiquitous on sites like Amazon. And I wonder if you could almost consider them a personalisation feature that indicates your service is modern, up to date and engaging with users. For many library systems that still look old-fashioned and ‘librarian’-orientated, perhaps it is equally important to be seen to have these types of features as standard.

Update: Slides from the introductory presentation are here

Having read Matthew Reidsma’s recent blog post on how the fold metaphor in web design doesn’t really exist, I was intrigued to see that the latest version of Google’s In-Page Analytics has introduced a ‘fold’ feature to show how much web page activity takes place below a certain point on the page. The ‘fold’ idea is connected to a design concept that essentially says people only look at what they see immediately in front of them on a web page, and don’t scroll up and down the screen.

In the latest version of Google Analytics In-Page Analytics you get an orange line that slides up and down the page to show how much activity takes place below that line. Because of the way analytics handles traffic to external links, by adding the traffic figures together, it isn’t all that accurate a tool, but I find it interesting that Google saw the need to introduce this sort of feature. Making the line slide up and down suggests the thought was that you could use it to plan where to put your most important content. But I’m not convinced it is all that useful, as the line only moves vertically, not from left to right. And critically for me, it doesn’t really represent how your users viewed your content. To make the tool work I think I’d want to segment users by screen resolution and then look at the In-Page Analytics for that segment only. I need to do some investigation to see if segmenting people by screen resolution is feasible.
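
If segmenting within Google Analytics itself proves awkward, the same idea could be tried on exported data; a sketch with hypothetical column names:

    # Visits exported from analytics (hypothetical columns): keep one
    # resolution before looking at on-page behaviour.
    visits = [
        {"resolution": "1366x768", "page": "/library", "scroll_depth": 0.4},
        {"resolution": "1024x768", "page": "/library", "scroll_depth": 0.9},
        {"resolution": "1366x768", "page": "/library", "scroll_depth": 0.6},
    ]

    segment = [v for v in visits if v["resolution"] == "1366x768"]
    avg_depth = sum(v["scroll_depth"] for v in segment) / len(segment)
    print(f"{len(segment)} visits at 1366x768, mean scroll depth {avg_depth:.0%}")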

Thinking about screen resolutions made me check back to the Google Analytics data to see what screen resolutions people use to access one of our sites. While nearly 60% are using just four different resolutions from 1024 pixels wide upwards, there have been a total of 1,326 different screen resolutions in just three months. That seems to me an astonishing number, but it’s probably a reflection of two things. Firstly, we are getting more people using mobile devices, both phones and tablets. Secondly, I think it reflects the fact that our latest site has been designed to cope with a wide variety of screen resolutions (largely to allow it to work on phones and tablets), and as a consequence, if users resize their window to pretty much any resolution they want, the content should reflow reasonably well.

Harvard Library Innovation Laboratory
http://librarylab.law.harvard.edu/

The second aspect of data that caught my interest today was Harvard’s Library Innovation Laboratory. I must admit that when I saw the link I did wonder whether it was going to be a list of library tools aimed directly at users (I’m sure I’ve seen the name used elsewhere recently for just such a list). I know we are looking at redoing our library toolbox to update it, and ‘library lab’ or ‘labspace’ sounded like a good name for something like that. But the Library Innovation Laboratory is a much more interesting proof of concept for anyone with any interest in what you can do with library activity data.

Using library circulation data that has been contributed to LibraryCloud, there are some really imaginative prototype visualisations in the Stack View and Shelf Rank tools. Two values are shown instantly: the book width is determined by the number of pages in the book, and the book colour corresponds to the volume of loans, so the darker the blue, the greater the traffic. Titles are then shown stacked one on top of the other. It’s a really neat visualisation of the data, and I’m already wondering if that approach would work equally well for visualising library data that is entirely electronic resources. [It’s actually one of the big problems with anything to do with electronic resources: there isn’t really a universal icon or symbol that everyone recognises as relating to stuff that is online and in electronic form.]
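
As I read the encoding, page count drives the width and loan volume drives the shade; a sketch of that mapping, with invented scaling constants rather than Harvard’s:

    def book_style(pages, loans, max_loans):
        width_px = max(40, min(300, pages))   # clamp width to sane bounds
        intensity = loans / max_loans if max_loans else 0
        shade = int(220 - 180 * intensity)    # lower value = darker blue
        return {"width": width_px, "colour": f"rgb({shade}, {shade}, 255)"}

    books = [("Title A", 250, 90), ("Title B", 800, 12)]
    heaviest = max(loans for _, _, loans in books)
    for title, pages, loans in books:
        print(title, book_style(pages, loans, heaviest))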

There’s quite a lot of interesting stuff on the site, and also on the LibraryCloud site at www.librarycloud.org. One of the things that particularly interested me (from experience with the RISE Activity Data project) was the section about data privacy and anonymisation: a key requirement always has to be that any dataset intended for open release is prepared in a way that ensures individual users cannot be identified.
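
A common approach to that requirement – a general technique, not LibraryCloud’s documented method – is to replace user IDs with salted one-way hashes and to suppress rows whose attribute combinations are too rare to hide in the crowd:

    import hashlib
    from collections import Counter

    SALT = b"keep-this-secret-and-never-release-it"

    def pseudonymise(user_id):
        # One-way, salted: the same user always gets the same token, but
        # the token can't be reversed to the original ID without the salt.
        return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

    def suppress_rare(rows, keys, k=5):
        # Drop rows whose combination of quasi-identifiers (course, year
        # etc.) occurs fewer than k times, since rare combinations can
        # identify individuals even without an ID.
        combos = Counter(tuple(r[key] for key in keys) for r in rows)
        return [r for r in rows if combos[tuple(r[key] for key in keys)] >= k]

    print(pseudonymise("u12345"))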

The checkout visualisation is also a neat way of showing that sort of data in a nice clear fashion. The feature that lets you sort the data by different schools is useful, and slightly brings to mind one of the MOSAIC competition entries that used a graph-type visualisation to let you navigate through library use data. It did amuse me, though, that ‘Headphones’ appears twice in the top ten with different numbers. The perils of libraries using their Library Management Systems to loan all sorts of other things!

LibraryCloud currently has data from Harvard and Northeastern Universities and Darien, San Francisco and San Jose public libraries.  A couple of sites to keep an eye on over the next few months.

Courtesy of a couple of tweets from @psychemedia and @simonjbains, two items about data and data visualisation caught my attention on Twitter today. The first was a great post by Pete Warden, ‘What the Sumerians can teach us about data’, on his PeteSearch blog; the second, Harvard’s Library Innovation Laboratory. Both cover particular aspects of data: one the history of data, the other a great set of examples of how to use and visualise data – in Harvard’s case library circulation data, using the LibraryCloud library metadata repository.

A history of data
I found the blog post on the Sumerians particularly interesting. The starting point is the contention that their greatest achievement was the invention of data, and there are some good examples of how the written language was used to record who owned what (or who owed what to whom). I like the comparison made between the ‘threats of supernatural retribution’ used to protect the integrity of the data and modern warnings over video copying, both being ‘ways of forcefully expressing society’s norms, rather than a credible threat of punishment’.

I find it interesting how often early examples of writing turn out to be lists – in other words data, rather than stories. Another example that comes to mind is the Vindolanda tablets, from the Roman period and found during excavations at a Roman fort in northern England.

“… for dining pair(s) of blankets … paenulae, white (?) … from an outfit: paenulae … and a laena and a (?) … for dining loose robe(s) … under-paenula(e) … vests … from Tranquillus
under-paenula(e) … [[from Tranquillus]]
from Brocchus tunics … half-belted (?) … tunics for dining (?) … (Back, 2nd hand?) … branches (?), number … a vase …
with a handle rings with stones (?) …”
Writing lists of things seems to have been a recurrent theme, and it strikes me that being able to list and count things must have been an early skill mastered by early farmers at least. To my mind there’s no reason to suppose that early peoples were any less intelligent than modern-day people. And as the archaeologists are fond of pointing out, ‘absence of evidence isn’t evidence of absence’, so there’s no reason to suppose that people weren’t collecting lists of data long before the Sumerians.

I also thought the comments comparing instructions for interpreting omens with predicting the future from data were really interesting. A great deal is often made of the importance of ‘facts and data’, but it has long seemed to me that the critical factor isn’t the data you have, but how you interpret it and what decisions you make. And the interpretation of data and decision-making often seem a much less scientific exercise.

Part two covering the Harvard Library Innovation Laboratory to follow in the next blog post.

I thought I’d cover two quite different things in this blog post, but thinking about them there is actually a connection: both are elements of how users value academic libraries, and how that value can be seen and measured. One was a library seminar presented by Carol Tenopir from the University of Tennessee on measuring library outcomes and value. The other was Huddersfield’s new Lemon Tree library learning game, where users get points for carrying out library activities.

Lemon Tree
This is a new game just launched by Huddersfield at https://library.hud.ac.uk/lemontree/. Created by developers Running in the Halls, the game gives users points for carrying out activities in the library and using library resources. In their words:

‘You get points for doing all sorts of things in and around the library like; visiting it, borrowing items, doing things at specific hours, returning items in certain combinations and much more…’
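
Mechanically, that sort of game can start out very simple; a toy version with invented point values:

    # Invented point values for the kinds of activities Lemon Tree rewards.
    POINTS = {"visit": 5, "loan": 10, "return": 5, "night_owl_bonus": 15}

    def score(events):
        return sum(POINTS.get(e, 0) for e in events)

    print(score(["visit", "loan", "loan", "night_owl_bonus"]))  # 40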

There’s also a project blog here with some useful technical details about how the system has been set up. The game lets users log in with a Facebook login, something that will be really familiar to students, and the site itself has a modern, informal look that is a world away from the usual formal, stuffy academic library sites.

It will be interesting to see how the game takes off with users.  Huddersfield’s LIDP work seems to have established a connection between library use and student achievement, so it will be really fascinating to see how Lemon Tree might encourage more student use of the library and how it may affect student behaviour.

I do wonder whether something like this would work in every academic environment; it might appeal most to students who are particularly engaged with social networking. With students becoming ever more focussed on doing what they need to do to get the best out of their investment, they might want to know what they get in return for playing the game. I’m looking forward to hearing more about Lemon Tree.

Carol Tenopir library seminar on the value of academic libraries
It’s always useful to hear about ways of measuring the value of libraries in ways that go beyond usage statistics.  So it was really good to hear Carol Tenopir talking about some of the work to come out of her recent projects and particularly from the Lib-Value project.

Carol Tenopir is a Chancellor’s Professor at the School of Information Sciences at the University of Tennessee, Knoxville and the Director of Research for the College of Communication and Information, and Director of the Center for Information and Communication Studies.

Ranging from Fritz Machlup’s definitions of value in terms of purchase or exchange value (what you are willing to pay) and use value (described as ‘favourable consequences derived from reading and using information’), through economic, social and environmental values as used by Bruce Kingma, and on to implicit values (usage), explicit values (outcomes) and derived values (return on investment), we had a thorough introduction to some of the work going on in this area.

What was particularly useful was hearing about the Critical Incident technique used in reading and scholarship surveys. In this case academics are asked in detail about the last article they read (the ‘critical incident’). These surveys have shown that the majority of articles are supplied by the library, but not read in the library. Over half of the academics surveyed said that the outcome of reading the article was ‘new thinking’.

Carol also talked about return on investment, and particularly contingent valuation, an economic model that tries to calculate how much it would cost to do the things the library does if you had to do them yourself. So instead of the library buying a subscription to an electronic journal, how much would it cost you on a pay-per-view basis? It was particularly useful to find out about the National Network of Libraries of Medicine value calculator (and here).
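
The arithmetic behind contingent valuation is straightforward; a toy example with invented figures:

    subscription_cost = 4000.00   # annual journal subscription (invented)
    downloads_per_year = 1500
    pay_per_view_price = 25.00    # typical per-article charge (invented)

    pay_per_view_total = downloads_per_year * pay_per_view_price
    print(f"Pay-per-view equivalent: £{pay_per_view_total:,.2f}")   # £37,500.00
    print(f"Saving from subscribing: £{pay_per_view_total - subscription_cost:,.2f}")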

All in all, a really good hour with lots of useful techniques and information about different ways of thinking about the value of academic libraries. (A video of the seminar is now available here.) For me, what is interesting about these two items is that both cover value as expressed directly by what users say (the critical incident) or what they do (the Lemon Tree library activities).
