Infographics and data visualisations seem to be very popular at the moment, and for a while I've been keeping an eye on visual.ly, which has some great examples of both. One of the good things about the visual.ly infographics is that there is some scope to customise them: there is one about the 'Life of a hashtag' that you can customise, for example, and several others around Facebook and Twitter that you can use.
I picked up on Twitter the other week that they had just brought out a Google Analytics infographic. That immediately got my interest, as we make a lot of use of GA. You just point it at your site through your Google Analytics account and then get a weekly email, 'Your weekly insights', created dynamically from your Google Analytics data.
It's a very neat idea and quite a useful promotional tool for giving people a quick snapshot of what is going on. You get Pageviews over the past three weeks, the trends for New and Returning Visitors, and reports on Pages per visit and Time on site and how they have changed in the past week.
It's quite useful for social media traffic, showing how Facebook and Twitter traffic has changed over the past week. As these are channels you often want quick feedback on, it's a nice visual way of showing what difference a particular activity might have made.
Obviously, as a free tool there's a limit to the customisation you can do. It might be nice to have visits or unique visitors to measure change in use of the site, or your top referrals, or the pages that have been used most frequently. The fixed time period also makes it less useful for me, in that I'm more likely to want to compare against the previous month (or even this month last year). But no doubt visual.ly would build a custom version for you if you wanted something particular.
But as a freely available tool it's a useful thing to have. The infographic is nicely presented and gives a visually appealing view of analytics data that can otherwise be difficult to present to audiences who don't necessarily understand the intricacies of web analytics.
The Google Analytics Visual.ly infographic is at https://create.visual.ly/graphic/google-analytics/
Encouraged by some thinking about what sort of prototype resource-usage tools we want to build to test with users in a forthcoming 'New tools' section, I've been starting to think about what sort of features you could offer to library users to let them take advantage of library data.
For a few months we've been offering users of our mobile search interface (which just searches our EBSCO discovery system) a list of their recently viewed items and their recent searches. The idea behind testing it on a mobile device was that giving people links to their recent searches and viewed items would make it easier to get back to things with a single click, rather than having to bookmark them or type in fiddly links. At the moment the tool only lists the resources and searches you've used through the mobile interface.
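As a rough illustration of the sort of store that sits behind a feature like this, here is a minimal sketch; the class and method names are invented for illustration and aren't our actual implementation:

```python
from collections import deque
from datetime import datetime, timezone

class RecentActivity:
    """Per-user store of recent searches and viewed items (illustrative).

    Keeps only the most recent MAX_ITEMS entries per list so the
    'recently viewed' page stays short and fast to render.
    """
    MAX_ITEMS = 20

    def __init__(self):
        self.searches = deque(maxlen=self.MAX_ITEMS)
        self.items = deque(maxlen=self.MAX_ITEMS)

    def record_search(self, query: str) -> None:
        self.searches.appendleft((query, datetime.now(timezone.utc)))

    def record_item(self, item_id: str, title: str, url: str) -> None:
        self.items.appendleft((item_id, title, url, datetime.now(timezone.utc)))

# One store per user; in practice this would live in a database keyed
# by the authenticated user's ID, not in an in-memory dictionary.
stores: dict[str, RecentActivity] = {}
```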
But our next step is to make a similar tool available through our main library website as a prototype 'articles I've viewed' feature. And that's where we start to wonder whether the mobile searches and results should be kept separate from the rest of your activity, or whether users would expect that, like a Kindle ebook that syncs across multiple devices, your searches and activity should be consistent across all platforms.
At the moment our desktop version lists all your viewed articles, regardless of the platform you used. But users might in future want to know which device they used to access the material, perhaps because some material isn't easily accessible through a mobile device. That opens up another question: the mobile and desktop versions may live at different URLs, so you might want them pulled together as one resource, with automatic detection of your device when you go to access it.
With data about which resources are being accessed and which library web pages are being visited, you start to open up the possibility of some more user-centred uses of library activity and analytics data.
So you could conceive of spotting a spike in users accessing the Athens problems FAQ page and tying it to users trying to access Athens-authenticated resources. Matching activity with students being on a particular module could let you automatically push more targeted help material, maybe into the VLE site for the relevant modules, as well as flag a potential issue to the technical and helpdesk teams.
You could also contemplate mining reading lists and course schedules to predict when particular activities are scheduled, and automatically push relevant help, support or online tutorials to students. Some of the most interesting areas seem to me to be around building skills and using activity (or the lack of it) to trigger the promotion of targeted skills-building activities: knowing that students on module X should be doing an activity that involves a particular set of resources, detecting the students who haven't accessed those resources, and offering them specific help material, or even contact from a librarian. Realistically those sorts of interventions simply couldn't be managed manually and would have to rely on some form of learning analytics-type trigger system.
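To make that concrete, here is a minimal sketch of what such a trigger might look like. Everything here is an assumption for illustration: the module spec, the access log and the function name stand in for data that would really come from reading lists, the VLE and the library's usage logs.

```python
from datetime import date

# Illustrative data shapes; a real system would pull these from the
# VLE, reading lists and the library's usage logs.
expected = {
    "MOD-X": {"resources": {"db-econlit", "ej-economist"},
              "due": date(2012, 11, 1)},
}
accesses = {  # student_id -> set of resource ids they have opened
    "s001": {"db-econlit", "ej-economist"},
    "s002": {"db-econlit"},
    "s003": set(),
}

def students_to_nudge(module: str, today: date) -> list[str]:
    """Return students on `module` who haven't touched the expected
    resources as the deadline approaches - the trigger for pushing
    targeted help material or a contact from a librarian."""
    spec = expected[module]
    if today < spec["due"]:
        return [s for s, seen in accesses.items()
                if not spec["resources"] <= seen]
    return []

print(students_to_nudge("MOD-X", date(2012, 10, 20)))  # ['s002', 's003']
```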
One of the areas that would be useful to look at is some form of student dashboard for library engagement. This could give students some data about their engagement with the library, e.g. resources accessed, library skills completed, library badges gained, library visits, books/ebooks borrowed and so on. Maybe set that against averages for their course, and perhaps against some metrics for what high-achieving students on their course did last time. Add to that a bookmarking feature, lists of recent searches and resources used, and lists of loans/holds. Finish it off with useful library contacts and some suggested activities that might help them with their course, based on what is known about the level of library skills the course needs.
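And a sketch of the kind of record such a dashboard might render; all field names are illustrative, not an actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class LibraryEngagement:
    """One student's library-engagement snapshot, with a course
    average for comparison (illustrative field names only)."""
    student_id: str
    resources_accessed: int
    skills_modules_completed: int
    badges: list[str] = field(default_factory=list)
    library_visits: int = 0
    loans: int = 0
    course_avg_resources: float = 0.0

    def vs_course_average(self) -> float:
        """Resources accessed relative to the course average (1.0 = average)."""
        if self.course_avg_resources == 0:
            return 0.0
        return self.resources_accessed / self.course_avg_resources
```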
Before you can do some of the more sophisticated learning analytics-type activities, I suspect it would be necessary to have a better understanding of the impact that library activities/skills/resources have on student retention and achievement. And that seems to me to argue for some really detailed work to understand library impact at a 'pedagogic' level.
A quick trip to Manchester yesterday to take part in a Symposium at ALT-C on ‘Big Data and Learning Analytics’ with colleagues from the OU (Simon Buckingham Shum, Rebecca Ferguson, Naomi Jeffrey and Kevin Mayles) and Sheila MacNeill from JISC CETIS (who has blogged about the session here).
It was the first time I’d been to ALT-C and it was just a flying visit on the last morning of the conference so I didn’t get the full ALT-C experience. But I got the impression of a really big conference, well-organised and with lots of different types of sessions going on. There were 10 sessions taking place at the time we were on, including talks from invited speakers. So lots of choice of what to see.
But we had a good attendance at the session, with a good mix of people and plenty of debate and questions during the symposium. Trying both to summarise an area like Learning Analytics and to give people an idea of the range of activities going on is tricky in a one-hour symposium, but hopefully we gave enough of a flavour of some of the work taking place and some of the issues and concerns that there are.
Cross-over with other areas
Sheila had a slide pointing out the overlaps between the Customer Relationship Management world, Business Intelligence and Learning Analytics, and it struck me that there's also another group in the Activity Data world that crosses over. Much of the work I mentioned (RISE and Huddersfield's fantastic work on library impact) came out of JISC's Activity Data funding stream, and some of the synthesis project work has been 'synthesised' into a website, 'Exploiting activity data in the academic environment' (http://www.activitydata.org/). Many of the lessons learnt listed there, particularly around what you can do with the data, are equally relevant to Learning Analytics. JISC are also producing an Activity Data report in the near future.
A lot of the questions in the session were as much about the ethics as the practicality. Particularly interesting was the idea that Learning Analytics risks encouraging a view that so much can be boiled down to a set of statistics, which sounded a bit like norms to me. The sense-making element seems to be really key, as with so much data and statistics work.
I'd also talked a bit about using the data to make recommendations, something we had experimented with in the RISE project. It was interesting to hear views about the danger of recommendations reducing rather than expanding choice: as people are encouraged to select from a list of recommendations, their selections reinforce those same recommendations, leading to a loop. If you are making recommendations based on what people on a course looked at then I'd agree that it is a risk, especially as people are often going to be looking at resources they have to look at for their course anyway.
When it comes to other types of recommendation (people looking at this article also viewed this other article; people searching for this term looked at these items) there is still some chance of reinforcing a narrow range of content, but I'd suggest there is also some chance of serendipitous discovery of material you might not ordinarily have seen. I'm aware that we've very much scratched the surface with recommendations, using simple algorithms designed around the idea that the more people who shared a viewing pattern, the better the recommendation. It may be that more complex algorithms that throw in some 'randomness' would be useful.
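To illustrate, here is a minimal sketch of a co-view recommender that reserves a slot or two for random picks. This isn't the RISE algorithm, just one way 'randomness' could be mixed in:

```python
import random
from collections import Counter

def recommend(item: str, co_views: Counter, catalogue: list[str],
              k: int = 5, explore: float = 0.2) -> list[str]:
    """Simple 'people who viewed this also viewed' list, with a
    fraction of slots given to random items to keep some serendipity.
    `co_views` counts how often other items were viewed alongside `item`."""
    n_random = max(1, int(k * explore))
    top = [i for i, _ in co_views.most_common(k - n_random)]
    pool = [i for i in catalogue if i != item and i not in top]
    return top + random.sample(pool, min(n_random, len(pool)))

# Invented usage: four popularity-based picks plus one random one.
views = Counter({"art-2": 40, "art-3": 25, "art-4": 9, "art-5": 3, "art-6": 1})
print(recommend("art-1", views, [f"art-{i}" for i in range(2, 50)]))
```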
One of the things I think is useful about recommendations is that people largely accept them (and perhaps expect them), as they are ubiquitous on sites like Amazon. I wonder if you could almost consider them a personalisation feature that signals that your service is modern, up to date and engaging with users. For the many library systems that still look old-fashioned and 'librarian'-orientated, perhaps it is equally important to be seen to have these types of features as standard.
Update: Slides from the introductory presentation are here
Having read Matthew Reidsma's recent blog post on how the fold metaphor in web design doesn't really exist, I was intrigued to see that the latest version of Google's In-Page Analytics has introduced a 'fold' feature to show how much web page activity takes place below a certain point on the page. The 'fold' idea comes from a design concept that essentially says people only look at what they see immediately in front of them on a web page and don't scroll up and down.
In the latest version of In-Page Analytics you get an orange line that slides up and down the page to show how much activity takes place below that line. Because of the way Analytics handles traffic to external links, adding the traffic figures together, it isn't an especially accurate tool, but I find it interesting that Google saw the need to introduce this sort of feature. Making the line slide up and down suggests the idea was that you could use it to plan where to put your most important content. I'm not convinced it is all that useful, though: the line only moves vertically, not left to right, and critically for me it doesn't really represent how your users viewed your content. To make the tool work I think I'd want to segment users by screen resolution and then look at In-Page Analytics for that segment only. I need to investigate whether segmenting by screen resolution is feasible.
Thinking about screen resolutions made me check back in the Google Analytics data to see what resolutions people use to access one of our sites. While nearly 60% of visits come from just four resolutions of 1024 pixels wide and upwards, there have been a total of 1,326 different screen resolutions in just three months. That seems an astonishing number, but it's probably a reflection of two things: first, that we are getting more people using mobile devices, both phones and tablets; second, that our latest site has been designed to cope with a wide variety of screen resolutions (largely to allow it to work on phones and tablets), so if users resize their window to pretty much any size, the content should reflow reasonably well.
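With over a thousand distinct resolutions the raw report is unwieldy, so collapsing it into width bands is one way to make sense of it. A minimal sketch, assuming a CSV export with 'Screen Resolution' and 'Visits' columns:

```python
import csv
from collections import Counter

def width_buckets(path: str) -> Counter:
    """Collapse a Google Analytics screen-resolution export (assumed
    columns: 'Screen Resolution' and 'Visits') into width bands,
    which is usually what matters for layout decisions."""
    bands = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                width = int(row["Screen Resolution"].split("x")[0])
                visits = int(row["Visits"].replace(",", ""))
            except (KeyError, ValueError):
                continue  # skip '(not set)' and malformed rows
            if width < 768:
                bands["phone-ish (<768)"] += visits
            elif width < 1024:
                bands["narrow (768-1023)"] += visits
            else:
                bands["desktop (1024+)"] += visits
    return bands
```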
I thought I'd cover two quite different things in this blog post, but there is actually a connection between them: both are about how users value academic libraries and how that value can be seen and measured. One is a library seminar presented by Carol Tenopir from the University of Tennessee on measuring library outcomes and value. The other is Huddersfield's new Lemon Tree library learning game, where users get points for carrying out library activities.
Lemon Tree
Lemon Tree is a new game just launched by Huddersfield at https://library.hud.ac.uk/lemontree/. Created by the developers Running in the Halls, the game gives users points for carrying out activities in the library and using library resources.
There's also a project blog here with some useful technical details about how the system has been set up. The game lets users log in with a Facebook login, something that will be really familiar to students, and the site itself has a modern, informal look that is a world away from the usual formal, stuffy academic library site.
It will be interesting to see how the game takes off with users. Huddersfield’s LIDP work seems to have established a connection between library use and student achievement, so it will be really fascinating to see how Lemon Tree might encourage more student use of the library and how it may affect student behaviour.
I do wonder whether something like this would work in every academic environment; it might appeal most to students who are particularly engaged with social networking. And with students becoming ever more focussed on doing what they need to do to get the best out of their investment, they might want to know what they get in return for playing the game. I'm looking forward to hearing more about Lemon Tree.
Carol Tenopir library seminar on the value of academic libraries
It's always useful to hear about ways of measuring the value of libraries that go beyond usage statistics. So it was really good to hear Carol Tenopir talking about some of the work coming out of her recent projects, and particularly the Lib-Value project.
Carol Tenopir is a Chancellor’s Professor at the School of Information Sciences at the University of Tennessee, Knoxville and the Director of Research for the College of Communication and Information, and Director of the Center for Information and Communication Studies.
We had a thorough introduction to some of the work going on in this area, ranging from Fritz Machlup's definitions of value in terms of purchase or exchange value (what you are willing to pay) and use value (described as 'favourable consequences derived from reading and using information'), through the Economic, Social and Environmental values used by Bruce Kingma, and on to Implicit values (usage), Explicit values (outcomes) and Derived values (Return on Investment).
What was particularly useful was to hear about the Critical Incident technique used in reading and scholarship surveys. In this case academics are asked in detail about the last article they read (the ‘Critical Incident’). These surveys have shown that the majority of the articles are being supplied by the library, but not read in the library. Over half of the academics surveyed said that the outcome of reading the article was ‘New thinking’.
Carol also talked about Return on Investment, and particularly contingent valuation, an economic model that tries to calculate how much it would cost to do the things the library does if you had to do them yourself. So instead of the library buying a subscription to an electronic journal, how much would it cost you on a pay-per-view basis? It was particularly useful to find out about the National Network of Libraries of Medicine value calculator (and here).
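As a back-of-the-envelope illustration of contingent valuation (all figures invented):

```python
# What would it cost to replace a journal subscription with
# pay-per-view purchases of the same articles? Numbers are invented
# purely to show the shape of the calculation.
subscription_cost = 4_000    # annual subscription cost, assumed
articles_downloaded = 1_500  # downloads in the same year, assumed
pay_per_view_price = 30      # typical per-article price, assumed

replacement_cost = articles_downloaded * pay_per_view_price  # 45,000
roi = replacement_cost / subscription_cost                   # ~11x
print(f"Replacement cost {replacement_cost}, ROI {roi:.1f}x")
```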
All in all, a really good hour with lots of useful techniques and information about different ways of thinking about the value of academic libraries. (A video of the seminar is now available here.) For me, what is interesting about these two items is that both treat value as something expressed directly by what users say (the critical incident) or by what they do (the Lemon Tree library activities).
As we've worked our way through the various stages of the library website project we've used a number of different tools and techniques: tools to find out what works on the current site and what users think, to plan how the content should be arranged, and to engage with users and staff. As we draw towards the launch of the site, it seems a good time to reflect on those tools, how we used them and what they told us.
Analytics
We've been using Google Analytics (www.google.com/analytics) for some time and in many ways it provides the foundation for our work around the website. It can be used to identify basic things such as which pages are being used on your current site (and which aren't) and the paths people take through it. It can tell you where people come from to visit your site: we know that a large number of our users arrive from our institutional VLE, which has informed some of the terminology in the new site – we're using 'Library resources' to describe our 'stuff', to be consistent with the VLE. Analytics gives us a vast amount of data, and interpreting that data is key to any redesign project.
Card sort exercises
It's a pretty well-established technique to use card-sorting exercises to help develop the information architecture of a site (see e.g. http://en.wikipedia.org/wiki/Card_sorting). As an early part of the design work we carried out this type of exercise with a group of library staff to try to get an idea of a sensible information architecture for the new site. We ended up using the result very much as a starting point rather than a finished article, as we were keen to validate it with users. On reflection, it does seem hard to get people to visualise how a website information architecture will translate into the navigation of a real website.
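One common way to analyse card-sort results is to count how often participants put two cards in the same pile. A minimal sketch with invented cards:

```python
from itertools import combinations
from collections import Counter

# Each participant sorted the same cards into whatever piles made
# sense to them (invented data).
sorts = [
    [{"opening hours", "contact us"}, {"e-journals", "databases"}],
    [{"opening hours"}, {"contact us", "e-journals", "databases"}],
]

# Count how often each pair of cards lands in the same pile; pairs
# with high counts are candidates to sit together in the IA.
together = Counter()
for piles in sorts:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            together[(a, b)] += 1

for pair, n in together.most_common():
    print(f"{pair}: grouped together by {n} of {len(sorts)} participants")
```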
Flipcharts
An almost obligatory component of workshops, often found in combination with post-it notes and card-sort exercises. Even in these digital times, flipcharts still seem an inevitable element when a group of people get together to plan something.
Navigation testing
Once we had come up with a prototype information architecture we wanted to test it with users to see if it made sense to them. There are a few tools out there that let you set up quick tests in which users navigate through a website structure to find a particular page, testing whether your information architecture makes sense to them. We went for a tool called Plainframe (http://plainframe.com). It costs a small amount of money, but had the advantage for us that the pricing was based on the number of tests you ran rather than being time-limited. We offered the test to a group of users, and it was certainly useful to see how they reacted to the site; it has led to some tweaks to the IA.
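Plainframe reports the results for you, but the underlying measure is simple: for each task, did the participant end up on the page you intended? A sketch of that calculation, with invented tasks and results:

```python
tasks = {"renew a book": "/help/borrowing/renewals"}
results = [  # (task, path the participant ended on) - invented data
    ("renew a book", "/help/borrowing/renewals"),
    ("renew a book", "/help/it/passwords"),
    ("renew a book", "/help/borrowing/renewals"),
]

# Success rate per task: the share of participants who ended on the
# intended target page.
for task, target in tasks.items():
    attempts = [r for r in results if r[0] == task]
    successes = sum(1 for _, dest in attempts if dest == target)
    print(f"{task}: {successes}/{len(attempts)} "
          f"({successes / len(attempts):.0%}) found the right page")
```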
Polls
We decided fairly early on that we wanted to find some different ways to engage with users. One of the techniques we used was to run a quick poll on the library website asking students about features they'd want to see, particularly around 'induction-type' content. Positioned prominently on the website homepage, it got a really good reaction, with several hundred responses that greatly helped to define the content for this section.
Specification
The key document for the project was a specification. This set out the audience for the site, the page layouts, the information architecture and the navigation, and was created as an output from the workshops, surveys and exercises. The intention was that it would be the focus for discussions about what went into the site, and end up as the tool we used to get agreement on how the site would be built. It probably didn't work quite as well as we'd expected: we found it difficult to get people to engage with the document and visualise what it would look like as a real website, and we ended up having to make changes to things like the IA once the site structure started to take shape.
In an ideal world, with a project of this nature you want a functional specification that is created and agreed before any development work starts. In reality that is difficult when you move to a new platform like Drupal, as you don't always know at the start exactly what is and isn't possible. Users often need to see some form of prototype to be able to decide what they want, and a paper prototype (whilst useful) isn't always enough.
Surveys
One of our starting points was the results (and particularly the detail) from the Library Survey carried out in 2010. Although the results were good overall, the survey did identify some particular problems with search and with accessing library resources.
We've also been conscious throughout the project that there is a big issue around terminology (something libraries seem to have a particular problem with). Users struggle with library terminology, so we ran further surveys, built with SurveyMonkey (www.surveymonkey.com), to find out users' preferences on the information architecture and terminology of the site. We will also use SurveyMonkey to capture structured feedback from users once the site goes into the beta and launch stages. We find it really useful for surveys: it lets you design a series of questions and then collect, download and analyse the results easily.
Wireframes
Wireframes are one of the main techniques for planning what your website will look like when it is built, and we've made extensive use of them for the home page and sub-pages of the new site. I think they are essential for visualising the site, but I'm aware that some people find it hard to picture the finished website from a wireframe and want to see something that looks much more like a prototype.
W is for … Workshops
We used workshops extensively in the project: in the initial stages to help with user requirements and information architecture, and later to help with arranging the subject categories and subject resources. They can be quite time-consuming to set up, run and (particularly) analyse, but they have the distinct advantage of being a great way of getting people engaged with the project and generating new ideas.
A comment from someone in a meeting last week, that on one of their websites mobile traffic was now 10% of all traffic, sent me off to Google Analytics to check the latest position on our main website. We've certainly seen big year-on-year growth in mobile use: 2010 saw eight times the number of visits from mobile devices that we had in 2009, and this year it looks like doubling again. But mobile use is still around 2% of visits, not 10%.
Although we do have a mobile version of the website, it hasn't been promoted heavily. And while it automatically detects mobile devices and redirects them to the mobile interface, it treats iPhones and iPads as sufficiently internet-capable to be sent to the standard website rather than the cut-down version.
Digging a bit deeper into the analytics shows that iPad usage is now 50% of what Google Analytics classes as 'mobile' use (up from 38% last year). Based on the first two months of this year, iPad usage looks to have tripled, while non-iPad mobile use looks to be increasing by about 20%. Whilst we are working on a new mobile version of the Drupal website, we aren't planning an iPad app version.
What intrigues me is whether iPads really are mobile devices as far as websites are concerned. The Safari browser is perfectly functional (Flash inabilities notwithstanding), and although some sites direct you to mobile versions (or, like Google Docs, give you the option), the iPad is a purpose-built internet browsing machine. This year dozens of tablet-type devices are being launched with a variety of operating systems, and iPads already seem to be the 'mobile' device most likely to be using our website (internal use plays a part in that). It implies to me that we need to be a bit more selective in how we define mobile use (and maybe so should Google Analytics), splitting the mobile category into tablet use of the full website and phone use of the mobile version.
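As a rough illustration of that split, here is a sketch of classifying user agents into phones, tablets and desktops. Real user-agent parsing is messier than this, and in practice a dedicated parsing library would be safer:

```python
def device_class(user_agent: str) -> str:
    """Rough split of 'mobile' into tablet vs phone, as the post
    suggests analytics tools should do (illustrative rules only)."""
    ua = user_agent.lower()
    if "ipad" in ua:
        return "tablet"
    if "android" in ua and "mobile" not in ua:
        return "tablet"  # Android tablets omit 'Mobile' from the UA
    if "iphone" in ua or "mobile" in ua:
        return "phone"
    return "desktop"

print(device_class("Mozilla/5.0 (iPad; CPU OS 5_0 like Mac OS X)"))  # tablet
```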
In the middle of last year we changed the terminology on one of the main navigation sections of the website. The Help and Support section contains a large amount of the site's content, but the tab was getting quite low levels of usage, and informal feedback suggested users were a little confused about the purpose of the section. So we changed it to 'Help', which benchmarking against other similar sites suggested was the most common term. Ideally we would have A/B tested the two versions, but we tend to stay away from running different versions as they can cause their own support headaches. So our question is: what difference has changing the terminology made to usage of this section?
We settled on using four sets of data from Google Analytics and compared the last six months of 2010 with the equivalent period of 2009. The four pieces of data we decided to use were (the two page-view ratios are sketched in code after the list):
- the percentage of clicks on the Help and Support/Help tab on the home page – using the beta In-Page Analytics tool against the home page;
- the percentage of total site page views represented by the page views of the Help/Help and Support home page – by comparing the site page views with a filtered version of Top Content to look at the Help home page;
- the page views of the whole Help/Help and Support section as a percentage of the whole site page views – using a similar filter that includes all the help content; and finally
- the percentage of users of the Help home page who come from the home page – using the Navigation Summary in the Content section.
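Google Analytics reports the first and last of these directly (via In-Page Analytics and the Navigation Summary); the two page-view shares are simple ratios over exported data. A minimal sketch with invented figures, assuming a mapping of page paths to page views for one period:

```python
# Invented figures standing in for one period's Top Content export.
pageviews = {"/": 50_000, "/help/": 1_800,
             "/help/borrowing/": 900, "/help/it/": 600}

total = sum(pageviews.values())
help_home_share = pageviews["/help/"] / total
help_section_share = sum(v for path, v in pageviews.items()
                         if path.startswith("/help/")) / total

print(f"Help home page: {help_home_share:.1%} of all page views")
print(f"Whole Help section: {help_section_share:.1%} of all page views")
# Run the same calculation over the 2009 and 2010 exports and compare.
```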
Looking at those four pieces of data gave us results showing that, across all the measures, use was down by small amounts. The percentage change varied a bit between measures, but there was a distinct reduction in users accessing the section.
Oddly, there doesn't seem to be any evidence that people are finding it harder to get the help they need: we don't have more telephone enquiries, or people saying they can't find help on the website. So it's some more evidence to feed into our redevelopment of the website.
One of the problems with the data is that we can't be sure whether the change in behaviour is down to the terminology change or would have happened anyway. On reflection we should have A/B tested the change, so that the two sets of data were collected at the same time. And we need to think some more about how to apply what analytics is telling us.
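One way to be more confident that a small drop is real rather than noise would be a simple significance test on the before-and-after proportions. A minimal sketch with invented figures:

```python
from math import sqrt, erf

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented: 2009 saw 2,100 Help home views out of 95,000 total;
# 2010 saw 1,800 out of 98,000.
z, p = two_proportion_z(2_100, 95_000, 1_800, 98_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real change
```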