
A quick trip to Manchester yesterday to take part in a symposium at ALT-C on ‘Big Data and Learning Analytics’ with colleagues from the OU (Simon Buckingham Shum, Rebecca Ferguson, Naomi Jeffrey and Kevin Mayles) and Sheila MacNeill from JISC CETIS (who has blogged about the session here).

It was the first time I’d been to ALT-C and it was just a flying visit on the last morning of the conference, so I didn’t get the full ALT-C experience.  But I got the impression of a really big, well-organised conference with lots of different types of sessions going on.  There were 10 sessions taking place at the same time as ours, including talks from invited speakers, so plenty of choice of what to see.

But we had a good attendance at the session, with a good mix of people and plenty of debate and questions during the symposium.  Trying both to summarise an area like Learning Analytics and to give people an idea of the range of activities going on is tricky in a one-hour symposium, but hopefully we gave enough of a sense of some of the work taking place and of the issues and concerns around it.

Cross-over with other areas
Sheila had a slide pointing out the overlaps between the Customer Relationship Management systems world, Business Intelligence and Learning Analytics, and it struck me that there’s also another group in the Activity Data world that crosses over.  Much of the work I mentioned (RISE and Huddersfield’s fantastic work on library impact) came out of JISC’s Activity Data funding stream, and some of the synthesis project work has been ‘synthesised’ into a website, ‘Exploiting activity data in the academic environment’ http://www.activitydata.org/  Many of the lessons learnt listed there, particularly around what you can do with the data, are equally relevant to Learning Analytics.  JISC are also producing an Activity Data report in the near future.

Interesting questions
A lot of the questions in the session were as much about the ethics as the practicality.   Particularly interesting was the idea that Learning Analytics risks encouraging a view that so much can be boiled down to a set of statistics, which sounded a bit like norms to me.  The sense-making element seems to be really key, as with so much data and statistics work.

I’d also talked a bit about being able to use the data to make recommendations, something we had experimented with in the RISE project.  It was interesting to hear views about the danger of recommendations reducing rather than expanding choice: as people are encouraged to select from a list of recommendations, their selections reinforce those recommendations, leading to a loop.  If you are making recommendations based on what people on a course looked at then I’d agree that it is a risk, especially as I think there’s a high probability that people are often going to be looking at resources they have to look at for their course anyway.

When it comes to other types of recommendation (such as people looking at this article also viewed this other article, or people searching for this term looked at these items) there is still some chance of recommendations reinforcing a narrow range of content, but I’d suggest there is also some chance of serendipitous discovery of material that you might not ordinarily have seen.  I’m aware that we’ve very much scratched the surface with recommendations, and used simple algorithms designed around the idea that the more people who matched that pattern, the better the recommendation.  But it may be that more complex algorithms that throw in some ‘randomness’ would be useful.
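By way of illustration only (this isn’t the algorithm RISE used), a minimal sketch of a co-viewing recommender that occasionally swaps one slot for a long-tail item might look like this; the co_views structure and the explore_prob parameter are assumptions made up for the example.

```python
import random
from collections import Counter

def recommend(item_id, co_views, k=5, explore_prob=0.2):
    """Recommend up to k items for item_id from co-viewing counts.

    co_views maps an item id to a Counter of items viewed in the same
    sessions (illustrative data structure, not the RISE implementation).
    Mostly the top co-viewed items are returned; with probability
    explore_prob the last slot is given to a randomly chosen item from
    further down the list, to allow some serendipity.
    """
    counts = co_views.get(item_id, Counter())
    ranked = [item for item, _ in counts.most_common()]
    picks = ranked[:k]
    if ranked[k:] and random.random() < explore_prob:
        # Swap the final recommendation for a random long-tail item.
        picks[-1] = random.choice(ranked[k:])
    return picks

# Example usage with made-up co-viewing data
co_views = {
    "article-123": Counter({"article-456": 40, "article-789": 25,
                            "article-111": 9, "article-222": 3,
                            "article-333": 1}),
}
print(recommend("article-123", co_views, k=3))
```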

One of the useful things about recommendations as a concept is that people largely accept them (and perhaps expect them), as they are ubiquitous on sites like Amazon.  And I wonder if you could almost consider them a personalisation feature that signals your service is modern, up-to-date and engaging with users.  For many library systems, which can still look old-fashioned and ‘librarian’-orientated, perhaps it is equally important to be seen to have these types of features as standard.

Update: Slides from the introductory presentation are here

[Word cloud of sample search terms, generated with Voyant Tools]

Search radio buttons
I’ve been looking at search logs again to see what impact placing Keyword, Author and Title radio buttons beneath our One stop search box on the home page of the library website has on user search behaviour.  (One stop search is the name we’ve given to EBSCO Discovery Search.) [Screenshot: One stop search box with radio buttons]

The log file lets us see the search terms entered in the box and identify whether the Title or Author radio button was chosen.  For the sample file 12% of the searches were title searches (TI+) and 10% were author searches (AU+), leaving a large majority of 78% that just left the default setting of Keyword.
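As a rough sketch of how those percentages could be pulled out of a log, assuming a simple one-query-per-line file where field searches carry the TI+ or AU+ prefix (the format here is an assumption, not the actual EBSCO log layout):

```python
from collections import Counter

def field_breakdown(log_path):
    """Count keyword, title (TI+) and author (AU+) searches in a log file.

    Assumes one query per line, with a TI+ or AU+ prefix when the Title
    or Author radio button was used; the layout is illustrative only.
    """
    counts = Counter()
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            query = line.strip()
            if not query:
                continue
            if query.startswith("TI+"):
                counts["title"] += 1
            elif query.startswith("AU+"):
                counts["author"] += 1
            else:
                counts["keyword"] += 1
    total = sum(counts.values())
    return {field: round(100 * n / total, 1) for field, n in counts.items()} if total else {}

# e.g. field_breakdown("onestop-search.log") might return
# {'keyword': 78.0, 'title': 12.0, 'author': 10.0}
```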

There is at least one example where a keyword search for what is likely to be an author’s name is followed up by an author search for the same name.  Even though it isn’t a particularly common name, the two give very different results in One stop search, which implies to me that there is some real value in having the radio buttons present.

Amongst the search terms are a couple of examples where we need to think about how we could best help the user.  There are a couple of searches for university course codes, one of which is looking for a specific unit within the course.  It’s difficult to know what would be of most help here.  It probably isn’t useful for them to see that we might have a copy of the course book in the library.  Are they on that course?  Might they want a link to that course?  Or are they looking for resources relevant to that course or unit, in which case should we show them a list of relevant resources from a reading/resource list?

The other area is where the user looks to be trying to find a database or journal rather than an article.  Using the Title radio button is a definite advantage in getting the title shown fairly prominently in the results, but it can still be a bit hit and miss, particularly for titles that aren’t very distinctive.

Voyant tools
[Screenshot: Voyant Tools]
This time I’ve tried a different tool for analysing the text of the search terms.  There have been changes to the TAPoRware text analysis tool that I blogged about a while ago, and there are some new beta tools such as Voyant Tools and particularly Cirrus, its word cloud tool, which was used for the picture at the top of this blog post.  It includes an optional (and editable) stop words list so common words can be removed from the word cloud, and there is also a range of tools for things like analysing the frequency of words in the text.  To access the tools you click through the words in the word cloud, which is a neat approach.  It looks like a nice and useful set of tools.  Information about them can be found at http://hermeneuti.ca/
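Voyant and Cirrus do all of this in the browser, but the frequency-counting and stop-word step they perform is roughly equivalent to a small sketch like this (the tokenising and the stop word list are illustrative assumptions, not Voyant’s internals):

```python
import re
from collections import Counter

# Illustrative stop word list; Voyant's own list is longer and editable.
STOP_WORDS = {"the", "of", "and", "in", "a", "an", "for", "to", "on", "with"}

def term_frequencies(text, stop_words=STOP_WORDS, top_n=50):
    """Tokenise a block of search terms and return the most frequent
    words after stop word removal, roughly the data a word cloud plots."""
    words = re.findall(r"[a-z']+", text.lower())
    filtered = [w for w in words if w not in stop_words]
    return Counter(filtered).most_common(top_n)

# Example with made-up search terms
sample = "history of art; history of medicine; the art of war"
print(term_frequencies(sample, top_n=5))
```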

I was interested to see that Stanford University Library’s new website now has a Search Everything tabbed search box.  Using the Search Everything feature you get search results drawn from several sources: Books & media (top 3 results), Databases (again top 3), Articles & e-resources (using their own xsearch and Google Scholar) and website results (top 5 results).  All these results are shown together on a single page in two columns, and they’ve made a really neat presentation job of showing them.

Pulling in a library website search has the added benefit of including information about relevant subject librarians on appropriate search results pages.  If you want to see further results, links take you off to the other systems.  It will be interesting to see how users react to the tool and to the way the results are presented.

It’s nice to see this approach being tried out, and it parallels some of our thinking about drawing together results from several different library systems and showing them on the same results screen.  How you display those results in a way that makes sense to users is key.  Stanford’s approach is to show a small number of results, which in some ways is the opposite of the discovery system approach.  I’ve certainly heard comments from users that they find discovery systems overwhelming in terms of how many results they see.  That suggests to me that relevance ranking across our content may be the really critical factor here: showing a small number of results is fine, as long as there’s a good chance that the results users want are at the top of the list.
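As a thought experiment rather than a description of Stanford’s implementation, the aggregation a ‘Search Everything’ page implies might be sketched like this, with stand-in source functions and per-source limits (all names here are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def search_everything(query, sources, limits):
    """Query several search back-ends in parallel and keep only the top
    few results from each, for display together on one results page.

    `sources` maps a label to a function taking a query and returning a
    ranked list of results; `limits` says how many to keep per source.
    Both are stand-ins, not any real system's API.
    """
    with ThreadPoolExecutor() as pool:
        futures = {label: pool.submit(fn, query) for label, fn in sources.items()}
        return {label: future.result()[: limits.get(label, 3)]
                for label, future in futures.items()}

# Sketch of usage with made-up search functions
sources = {
    "Books & media": lambda q: [f"book:{q}:{i}" for i in range(10)],
    "Databases": lambda q: [f"db:{q}:{i}" for i in range(10)],
    "Articles": lambda q: [f"article:{q}:{i}" for i in range(10)],
    "Website": lambda q: [f"page:{q}:{i}" for i in range(10)],
}
limits = {"Books & media": 3, "Databases": 3, "Articles": 3, "Website": 5}
print(search_everything("learning analytics", sources, limits))
```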
