It was great to see this week that the latest opportunity on the Jisc Elevator website is one for students to pitch ideas about new technology. It's really nice to see something that involves students in coming up with ideas and backs that up with a small amount of money to kickstart things.
Using students as co-designers for library services, particularly in relation to websites and technology, is something that I'm finding more and more compelling. A lot of the credit for that goes to Matthew Reidsma from Grand Valley State University in the US, whose blog 'Good for whom?' is pretty much essential reading if you're interested in usability and improving the user experience. I'm starting to see that getting students involved in co-designing services is the next logical step on from usability testing. So instead of a process where you design a system and then test it on users, you involve them from the start: asking them what they need, maybe getting them to feed back on solution designs and specifications, and then going through the design process of prototyping, testing and iterating, with them looking at every stage. It's something that an agile development methodology particularly lends itself to. Examples where people have started to employ students on the staff to help with getting that student 'voice' are also starting to appear.
There are some examples of fairly recent projects where Universities have been getting students (and others outside the institution) involved in designing services, so for example the Collaborate project at Exeter that looked at using students and employers to design ‘employability focussed assessments’. There is also Leeds Metropolitan with their PC3 project on the personalised curriculum and Manchester Metropolitan’s ‘Supporting Responsive Curricula’ project. And you can add to that list of examples the Kritikos project at Liverpool that I blogged about recently.
For us, with our focus on websites and improving the user experience, we've been working with a group of students to help us design some tools for a more personalised library experience. I blogged a bit about it earlier in the year. We're now well into that programme of work and have put together a guest blog post for Jisc's LMS Change project blog, 'Personalisation at the Open University'. Thanks to Ben Showers from Jisc and Helen Harrop from the LMS Change project for getting that published. Credit for the work on this (and the text for the blog post) should go to my colleagues: Anne Gambles, Kirsty Baker and Keren Mills. Having identified some key features to build, we are well into getting the specification for the work finalised and will start building the first few features soon. It's been an interesting first foray into working with students as co-designers and one I think has major potential for how we do things in the future.
Reading through Lown, Sierra and Boyer's article from ACRL on 'How Users Search the Library from a Single Search Box', based on their work at NCSU, started me thinking about looking at some data on how people are using the single search box that we have been testing at http://www.open.ac.uk/libraryservices/beta/search/.
About three months or so ago we created a prototype tool that pulls together results from the discovery product we use (EBSCO Discovery) alongside results from the resources database that feeds the Library Resources pages on the library website, and including pages from the library website itself. Each result is shown in a box (à la 'bento box') and the boxes are just listed down the screen, with Exact Title Matches and Title Matches shown at the top, followed by a list of Databases, Library Pages, Ebooks, Ejournals and then articles from EBSCO Discovery. It was done in a deliberately simple way, without lots of extra options to manipulate or refine the lists, so that we could get some very early views about how useful the approach was.
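To give a flavour of the kind of grouping involved, here's a minimal sketch in Python. The function names, result fields and the matching logic are my own hypothetical simplification for illustration, not our actual code; only the box labels come from the prototype itself.

```python
# Hypothetical sketch: group mixed search results into ordered "bento boxes".
BOX_ORDER = ["Exact Title Matches", "Title Matches", "Databases",
             "Library Pages", "Ebooks", "Ejournals", "Articles"]

def classify(result, query):
    """Assign a result to a box: title matches float to the top,
    everything else falls back to its source type."""
    title = result["title"].strip().lower()
    q = query.strip().lower()
    if title == q:
        return "Exact Title Matches"
    if q in title:
        return "Title Matches"
    return result["type"]  # e.g. "Databases", "Ebooks", "Articles"

def bento(results, query):
    """Return (box name, results) pairs in display order, skipping empty boxes."""
    boxes = {name: [] for name in BOX_ORDER}
    for r in results:
        boxes[classify(r, query)].append(r)
    return [(name, boxes[name]) for name in BOX_ORDER if boxes[name]]
```

The point of keeping the classification separate from the display order is that the ordering of the boxes down the screen can be changed without touching how results are matched.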
Looking at the data from Google Analytics, we've had just over 2,000 page views over the three months. There's a spread of more than 800 different searches, with the vast majority being repeated fewer than 6 times (fewer than 10% recur more often than that). I'd suspect that most of those repeated terms are ones where people have been testing the tool.
The data also allows us to pick up when people do a search and then choose to look at more results from one of the 'bento boxes'. Effectively they do this by applying a filter to the search string, e.g. &Filter=EBOOK takes you to all the ebook resources that match your original search term. So 160 of the 2,000 page views were for ebooks (8%) and 113 for ejournals (6%), for example.
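A small sketch of how that kind of tallying might be done over page paths exported from Google Analytics. The function and the sample paths are hypothetical; only the `Filter` query parameter is taken from our actual URLs.

```python
from urllib.parse import urlparse, parse_qs
from collections import Counter

def count_filters(page_paths):
    """Tally which 'bento box' filters were clicked, from GA page paths."""
    counts = Counter()
    for path in page_paths:
        query = parse_qs(urlparse(path).query)
        for f in query.get("Filter", []):   # e.g. EBOOK, EJOURNAL
            counts[f] += 1
    return counts
```

Run over a full export, this gives the per-box click counts behind percentages like the 8% for ebooks above.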
When it comes to the actual search terms, they are overwhelmingly 'subject' type searches, with very few journal articles or author names in the search string. There are a few more journal or database names such as Medline or Web of Science. But otherwise there is a very wide variety of search terms being employed, and the frequency very quickly drops to single figures. The wordle word cloud at the top of the page shows the range of search terms used in the last three months.
We've more work to do to look in more detail at what people want to do, but being able to see the search terms that people use and how they filter their results is quite useful. Next steps are to do a bit more digging into Google Analytics to see what other useful data can be gleaned about what users are doing in the prototype.
Infographics and data visualisations seem to be very popular at the moment and for a while I've been keeping an eye on visual.ly as they have some great infographics and data visualisations. One of the good things about the visual.ly infographics is that there is some scope to customise them. So for example there is one about the 'Life of a hashtag' that you can customise and several others around Facebook and Twitter that you can use.
I picked up on Twitter the other week that they had just brought out a Google Analytics infographic. That immediately got my interest, as we make a lot of use of GA. You just point it at your site through your Google Analytics account and then get a weekly email, 'Your weekly insights', created dynamically from your Google Analytics data.
It’s a very neat idea and quite a useful promotional tool to give people a quick snapshot of what is going on. So you get Pageviews over the past three weeks, what the trends are for New and Returning Visitors and reports on Pages per visit and Time on site and how that has changed in the past week.
It's quite useful for social media, showing how Facebook and Twitter traffic has changed over the past week. As these are channels you often want quick feedback on, it's a nice visual way of showing what difference a particular activity might have made.
Obviously, as a free tool, there's a limit to the customisation you can do. It might be nice to have visits or unique visitors to measure change in use of the site, or your top referrals, or the particular pages that have been used most frequently. The time period is something that possibly makes it less useful for me, in that I'm more likely to want to compare against the previous month (or even this month last year). But no doubt visual.ly would build a custom version for you if you wanted something particular.
But as a freely available tool it’s a useful thing to have. The infographic is nicely presented and gives a visually appealing presentation of analytics data that can often be difficult to present to audiences who don’t necessarily understand the intricacies of web analytics.
The Google Analytics Visual.ly infographic is at https://create.visual.ly/graphic/google-analytics/
New tools concept
Earlier in the week we soft-launched a new section on our library website. The New Tools section is a space where we can put out new ideas with the aim of getting some feedback about whether users will find them useful. This parallels the work we're also doing with a group of students from our Student Panel to design some new features (blogged about earlier in the week).
Our idea is that we'd use the New Tools section to put up beta tools based on ideas that have come up in a number of ways. The ideas that come through the personalisation study work with students will go through a private 'alpha' stage, where the students help with defining the ideas and feeding back on paper prototypes and 'proof-of-concept' tools. Once the tools have been refined, the best ones get released as 'beta' versions through the New Tools section. We'd also look at releasing as beta tools some of the ideas that have come from other work we've done in the past, such as the RISE recommender project, and other ideas we've come up with.
The idea with the New tools section is that the tools aren’t fully supported but are there for people to try and let us know what they think about them. If they work then we can refine them and take them into service. If they aren’t useful then we’ll have a better idea of what people want and what they don’t.
First new tools – single search box
The first two tools that we've made available in beta are both around library resources. The first one is a single search box. (I've written before about the library quest for the Google-like search box. I'm starting to get more interested in the idea that the Google-like search box should actually be Google, and that libraries might be better off concentrating on helping users in Google find library resources that they are entitled to access – though Google's decision to 'retire' Google Reader certainly gives me pause about relying too much on something from Google.) Behind the search box is a search that passes your search string to our version of EBSCO Discovery (using their API) and also to the library resources database that powers the resource lists fed into the library website. The idea behind this is that it will bring together results from our various systems in one place, and in particular that it will be better at finding journal titles that are direct matches.
This single search box is designed to also test the feasibility of bringing together different search results into a single interface. It’s a bit federated-search-like in that the results are presented in separate boxes (sort of like a stacked bento-box approach inspired by Stanford and others – it’s interesting also to see the approach that Princeton have taken with their beta version of their library website). We also haven’t strayed too much into the area of adding some of the surrounding functionality (saving citations, sharing etc features) that a fully-fledged system would need. This is just about testing whether pulling together these results is a workable and useful thing to do.
First new tools – My recent resources
The second tool is about trying to see whether giving users access to a list of library resources they have recently accessed is useful to them. If you're not an OU user (or aren't signed in) you'll just see a demonstration list of resources. But if you are signed in you should see a list of the resources you've used, with the most recent ones first. These will include resources you've looked at directly from the library website, or articles that you've viewed through our One Stop search discovery system. For this prototype we have offered RSS and RIS formats to export your records so you can put them into your favourite reference management tool. We've also included a box on the right listing your most used resources, with the number of times in brackets.
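To give a flavour of what an RIS export involves, here's a simplified sketch. The field names on the resource records are hypothetical and this isn't our actual export code; the RIS tags themselves (TY, TI, AU, ER) are the standard ones that reference managers expect.

```python
def to_ris(resources):
    """Render a list of recently used resources as a minimal RIS export.

    Each record starts with a TY (type) line and ends with an ER line;
    reference management tools split the file on those markers.
    """
    lines = []
    for r in resources:
        lines.append("TY  - " + r.get("type", "GEN"))  # e.g. JOUR, BOOK, GEN
        lines.append("TI  - " + r["title"])
        for author in r.get("authors", []):
            lines.append("AU  - " + author)            # one AU line per author
        lines.append("ER  - ")
    return "\n".join(lines)
```

A real export would carry more tags (journal name, year, DOI and so on), but even this minimal form imports cleanly into most reference managers.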
The format and description of the entries just picks up the standard format we already use on the library website and we’ve started to add in book covers for ebooks (although that gets me thinking that I’ve never really worked out what the point is of a book cover for an ebook anyway – Kindles seem to take you to the start of the book, not to the cover, so maybe ebook covers aren’t that relevant anymore – but in any case it breaks up the blocks of text neatly).
The plan is to develop more prototypes and build up a pool of tools in this space that we can get people to look at and comment on. Hopefully it will be useful to people.
Although I'd picked up on sales of mobiles and tablets overtaking sales of desktop PCs and laptops, one thing that hadn't become obvious to me was that we now seem to be approaching the time when the number of tablets/smartphones in circulation outnumbers desktops/laptops. December's Internet Trends survey from Kleiner Perkins Caufield Byers shows, in the graph reproduced here, that they'd expect that stage to be reached globally sometime this year.
Although I’d probably have a couple of caveats about smartphone adoption in the developing world slightly skewing the figures, and whether people might ordinarily have more tablets/smartphones than desktops/laptops, it nonetheless emphasises the point that mobile internet access is now mainstream. For many people it may be their preferred means of accessing your services and their expectation is going to be that it should just work, and give an equivalent or better experience than the ‘traditional’ desktop browser experience.
But the number of devices doesn't yet map to the amount of usage of our websites. For us, traffic is still under 10% from mobiles/tablets, so even if the numbers of devices in circulation are reaching parity, we aren't yet at a stage where the majority of our use is coming from those devices. But looking at the trends, maybe that day is on the horizon.
One of the interesting concepts in KPCB's slideshow is the 'asset-light' idea: that more and more people, perhaps younger people especially, may be less inclined to own or acquire physical 'stuff' and have a more 'mobile' (as in being able to move more readily) lifestyle. It's characterised as having your music on Spotify or iTunes rather than on physical CDs, or renting rather than buying your textbooks. It also brings to mind for me a personal version of the concept of 'just-in-time', the production strategy based around reducing inventory in favour of delivering items when you need them. It's the concept of 'on-demand' rather than ownership 'just-in-case'.
Potentially, as characterised in this blogpost on Fail!Lab, it might mean major changes to our library websites, or even to the concept of websites. It's a good and interesting thought. For a while we've certainly been pushing content into places where students go, such as pushing library resources via RSS feeds into our VLE. But these spaces are still websites. Yet once you've got a stream or feed of data, you could push or pull it into numerous places, whether apps or webpages or systems.
The idea in the Fail!Lab blogpost around Artificially intelligent agents doing the ‘heavy-lifting’ of finding resources for users is something that Paul Walk raised as part of his Library Management Systems vision (slideshare and blog post) so it’s interesting to see someone else postulating a similar future. For me it starts to envisage a future where users choose their environment/tools/agents and we build systems that are capable of feeding data/content to those agents and are built to a set of data sharing standards. It suggests a time where users are able to write queries to interrogate your systems, whether for content or for help materials or skills development activity, and implies a world of profiles, entitlements and charging mechanisms that are a world away from the current model of – go to this website, signup and pass through the gateway into a ‘library’ of stuff.
“Benchmarking is the process of comparing one’s business processes and performance metrics to industry bests or best practices from other industries. Dimensions typically measured are quality, time and cost. In the process of benchmarking, management identifies the best firms in their industry, or in another industry where similar processes exist, and compare the results and processes of those studied (the “targets”) to one’s own results and processes. In this way, they learn how well the targets perform and, more importantly, the business processes that explain why these firms are successful.” http://en.wikipedia.org/wiki/Benchmarking
It's easy enough to describe what benchmarking is, but the critical question, it seems to me, is who you benchmark against, particularly when what you are benchmarking is a web-based experience. Is it enough to benchmark against organisations who are competing for your customers directly in the market in which you operate, or do you need to look more widely? For comparability you can argue that only organisations in the same business as you offer a way of directly benchmarking what you do against the best the competition can offer. And yes, I'd agree with that.
But I'd argue that you are also competing more generally with a wider group of comparators, in that you are competing for your customers' (or potential customers') time and attention, and I think that you are competing on reputation with the best examples operating in the channel (i.e. the web) that you are using. And I feel that that argues for a wider range of benchmarking comparators.
So what groups would I expect us to benchmark against?:
- libraries in distance learning institutions who might be offering a similar set of services to us – both direct competitors in our own market and those in other markets
- wider HE libraries – even campus-based libraries will be offering an online experience; it might be additional to their location-based service, and some of their services won't be relevant, but they may still have valuable lessons – and again these would not just be local competitors but libraries from across the world
- sector organisations and service providers – these could be the best of cultural organisations such as museums, or service providers such as discovery system providers or content providers or other organisations in the sector
- commercial service sector providers – online shopping and online supermarkets, concierge-style services, other online public services and commercial services – all are competing for attention and define what an online experience should be like
- social, communications and media systems and organisations – news organisations, for example. These types of websites are often good examples of best practice and also environments where our users spend a great deal of time, influencing their perception of what makes a great website experience.
So there's quite a range of different types of organisations and websites that I'd want to look at to see what we could learn about how to make our website better. In the same way as the hotel industry has influenced the concept of boutique libraries, there are lessons that we can learn from other sectors that will help. For example, concepts around using recommendations for library resources can draw on practices from websites such as Amazon.
There’s one final thought about benchmarking, in that the most important group to ‘benchmark’ against is your own customers. What are their expectations? You may have a list of good ideas that come out of your benchmarking exercise, but which of them would your customers prioritise?
Search radio buttons
I’ve been looking at search logs again to see what impact placing Keyword, Author and Title radio buttons beneath our One stop search box on the home page of the library website has on user search behaviour. (One stop search is the name we’ve given to EBSCO Discovery Search).
The search terms listed in the log file allow us to see the search terms entered in the box and identify whether the Title or Author radio button options were chosen. For the sample file 12% of the searches were title (TI+) and 10% were author (AU+). This leaves a large majority of 78% that just left the default setting of Keyword for their search.
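A quick sketch of how percentages like these could be derived from the logged search strings. The function is hypothetical, but the TI+ and AU+ prefixes are the ones that appear in our logs when the Title or Author radio button is chosen.

```python
from collections import Counter

def radio_button_breakdown(search_terms):
    """Classify logged search strings by the radio button used.

    Title searches are prefixed TI+, author searches AU+;
    anything else was left on the default Keyword setting.
    Returns whole-number percentages per category.
    """
    counts = Counter()
    for term in search_terms:
        if term.startswith("TI+"):
            counts["title"] += 1
        elif term.startswith("AU+"):
            counts["author"] += 1
        else:
            counts["keyword"] += 1
    total = sum(counts.values())
    return {k: round(100 * v / total) for k, v in counts.items()}
```

Running this over a sample log file gives the sort of split quoted above (12% title, 10% author, 78% keyword).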
There is at least one example where a keyword search for what is likely to be an author's name is followed up by an author search for that name. Even though it isn't a particularly common name, you get very different search results from One stop search, which implies to me that there is some real value in having the radio buttons present.
Amongst the search terms are a couple of examples of things where we need to think about how we could best help the user. There are a couple of examples of university course codes, one of which is looking for a specific unit in the course. It's difficult to know what would be of most help here. It probably isn't useful for them to see that we might have a copy of the course book in the library. Are they on that course? Might they want a link to it? Or are they looking for resources relevant to that course/unit, in which case we should show them a list of relevant resources from a reading/resource list?
The other area is where the user looks to be trying to find a database or journal rather than an article. Using the title radio button seems to give a definite advantage in getting the title shown fairly prominently in the results, but it can still be a bit hit and miss, particularly for titles that aren't very distinctive.
This time I've tried a different tool for analysing the text of the search terms. There have been changes to the TAPoRware text analysis tool that I blogged about a while ago, and there are some new beta tools such as Voyant Tools, particularly its Cirrus word cloud tool, which was used for the picture at the top of this blog post. It includes an optional (and editable) stop words list to remove common words from the word cloud. There are also a range of tools such as analysing the frequency of words in the text. To access the tools you click through the words in the word cloud, which is a neat approach. It looks like a nice and useful set of tools. Information about them can be found at http://hermeneuti.ca/
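The frequency-counting-with-stop-words step that tools like Cirrus perform before drawing the cloud can be sketched very simply. The stop word list here is just an illustrative fragment of the kind of editable list those tools use.

```python
import re
from collections import Counter

# Illustrative fragment of an editable stop word list, as in Cirrus
STOP_WORDS = {"the", "of", "and", "in", "a", "an", "to", "for"}

def term_frequencies(search_terms, top_n=10):
    """Count word frequencies across search terms, dropping stop words —
    the preparation a word cloud tool does before sizing the words."""
    words = []
    for term in search_terms:
        words.extend(w for w in re.findall(r"[a-z]+", term.lower())
                     if w not in STOP_WORDS)
    return Counter(words).most_common(top_n)
```

In a word cloud, the counts this returns would then be mapped to font sizes, with the most frequent terms drawn largest.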
I was interested to see that Stanford University Library’s new website now has a Search Everything tabbed search box. Using the Search Everything feature you get search results drawn from several sources: Books & media (top 3 results), Databases (again top 3), Articles & e-resources (using their own xsearch and Google Scholar) and website results (top 5 results). All these results are shown together on a single page in two columns and they’ve made a really neat presentation job of showing those results.
Pulling in a library website search has the added benefit of including information about relevant subject librarians on appropriate search results pages. If you want to see further results, links take you off to the other systems. It will be interesting to see what the user reaction is to the tool and the way results are presented.
It's nice to see this approach being tried out, and it parallels some of our thinking in trying to draw together results from several different library systems and show them on the same results screen. How you display those results in a way that makes sense to users is a key thing. Stanford's approach is to show a small number of results, which in some ways is the opposite of the discovery system approach. I've certainly heard comments from users that they can find discovery systems overwhelming in terms of how many results they see. And that seems to me to suggest that relevance ranking across our content may be a really critical factor here. Showing a small number of results is fine, if there's a good chance that the results users want are going to be at the top of the list.
Top ten trends in academic libraries
Catching up with reading after a few days away led me (via a RT from @benshowers) to ACRL's latest article on '2012 top ten trends in academic libraries'. (ACRL is the US Association of College and Research Libraries, part of the American Library Association). It's an interesting list:
Communicating value; Data curation; Digital preservation; Higher education; Information technology; Mobile environments; Patron driven e-book acquisition; Scholarly communication; Staffing; and, User behaviors and expectations.
Some are obvious: IT, mobiles, the changing nature of higher education. But I find it quite interesting that user behaviors and expectations are flagged up as a top ten trend. This is driven in part, maybe, by increased expectations as the cost of higher education to the student continues to rise, but also by our students being better informed consumers of online information. Their experience of library search (for example, in this blog post by @carolgauld) contrasts markedly with their experience of the web through online shopping and social networking. And it's a big challenge for libraries and publishers. All too often it seems that library systems are built with librarians or researchers in mind rather than users.
I also find it interesting that getting across library value is a top trend, and that seems again to be something that libraries always struggle with. It's timely that ACRL have their White Paper 'Connect, Collaborate and Communicate: A report from the Value of Academic Libraries Summits' out now. That includes material from Carol Tenopir's work that I was fortunate to hear about first hand last year. Top of the recommendations is ensuring that librarians understand how libraries contribute to student learning and success. Work such as Huddersfield's Library Impact Data Project is demonstrating that there is a connection between library usage and attainment, and it's important that libraries get involved within their institution to make sure that library data is contributed to 'data warehouses' and other management information systems, so that library use is taken account of when measuring student achievement.
Two further things in the list stand out for me: Data curation and Digital preservation. Mainly because it's an area I'm becoming more involved with as we plan and build our new Digital Library (www.open.ac.uk/blogs/OUDL/), but also because it seems to me that a lot of library time is being spent (and going to be spent) in this area of work. Although there's clearly a step from managing collections of physical items (books and documents) to managing collections of digital items, there's a sense to me that curation of stuff the library owns is a more 'comforting' space for libraries to operate in. Handling access to stuff we license (the subscribed resources world) starts to seem like a different type of activity maybe, a 'blip' on the landscape of libraries as curators of collections of stuff the library owns?