Infographics and data visualisations seem to be very popular at the moment, and for a while I’ve been keeping an eye on visual.ly as they have some great infographics and data visualisations. One of the good things about the visual.ly infographics is that there is some scope to customise them. So, for example, there is one about the ‘Life of a hashtag’ that you can customise, and several others around Facebook and Twitter that you can use.
I picked up on Twitter the other week that they had just brought out a Google Analytics infographic. That immediately got my interest, as we make a lot of use of GA. You just point it at your site through your Google Analytics account and then get a weekly email, ‘Your weekly insights’, created dynamically from your Google Analytics data.
It’s a very neat idea and quite a useful promotional tool for giving people a quick snapshot of what is going on. You get Pageviews over the past three weeks, trends for New and Returning Visitors, and reports on Pages per visit and Time on site and how those have changed in the past week.
It’s quite useful for social media traffic, showing how Facebook and Twitter traffic has changed over the past week. As these are channels that you often want quick feedback on, it’s a nice visual way of showing what difference a particular activity might have made.
Obviously, as a free tool, there’s a limit to the customisation you can do. It might be nice to have visits or unique visitors to measure change in use of the site, or your top referrals, or the pages that have been used most frequently. The time period is something that possibly makes it less useful for me, in that I’m more likely to want to compare against the previous month (or even this month last year). But no doubt visual.ly would build a custom version for you if you wanted something particular.
But as a freely available tool it’s a useful thing to have. The infographic is nicely presented, giving a visually appealing view of analytics data that can often be difficult to present to audiences who don’t necessarily understand the intricacies of web analytics.
The Google Analytics Visual.ly infographic is at https://create.visual.ly/graphic/google-analytics/
New tools concept
Earlier in the week we soft-launched a new section on our library website. The New Tools section is a space where we can put out new ideas with the aim of trying to get some feedback about whether users will find them useful. This parallels the work we’re also doing with a group of students from our Student Panel to work with them to design some new features (blogged about earlier in the week).
Our idea is that we’d use the New Tools section to put up beta tools based on ideas that have come up in a number of ways. So the ideas that come through the personalisation study work with students will go through a private ‘alpha’ stage, where the students help with defining the ideas and feeding back on paper prototypes and ‘proof-of-concept’ tools. Once the tools have been refined, the best ones get released as ‘beta’ versions through the New Tools section. We’d also look at releasing as beta tools some of the ideas that have come from other work we’ve done in the past, such as the RISE recommender project, and other ideas we’ve come up with.
The idea with the New tools section is that the tools aren’t fully supported but are there for people to try and let us know what they think about them. If they work then we can refine them and take them into service. If they aren’t useful then we’ll have a better idea of what people want and what they don’t.
First new tools – single search box
The first two tools that we’ve made available in beta are both around library resources. The first is a single search box. I’ve written before about the library quest for the Google-like search box, and I’m increasingly interested in the idea that the Google-like search box might actually be Google, and that libraries might be better off concentrating on helping users in Google find library resources they are entitled to access (although Google’s decision to ‘retire’ Google Reader certainly gives me pause about relying too much on something from Google). Behind the search box is a search that passes your search string to our version of EBSCO Discovery (using their API) and also to the library resources database that powers the resource lists fed into the library website. The idea is that it will bring together results from our various systems in one place, and in particular that it will be better at finding journal titles that are direct matches.
This single search box is also designed to test the feasibility of bringing together different search results into a single interface. It’s a bit federated-search-like in that the results are presented in separate boxes (sort of a stacked bento-box approach inspired by Stanford and others – it’s also interesting to see the approach that Princeton have taken with the beta version of their library website). We haven’t strayed too far into adding the surrounding functionality (saving citations, sharing and so on) that a fully-fledged system would need. This is just about testing whether pulling together these results is a workable and useful thing to do.
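As a rough sketch of how that fan-out might work, each backend can be queried in parallel and the results kept grouped by source, ready to render as stacked boxes. The function names and result shapes here are illustrative stand-ins, not our actual EBSCO Discovery API calls or database queries:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the two real backends: the EBSCO Discovery
# API and the library resources database. Each takes a query string and
# returns a list of result dicts.
def search_discovery(query):
    return [{"title": f"Discovery result for {query}", "source": "discovery"}]

def search_resources_db(query):
    return [{"title": f"Resource list match for {query}", "source": "resources"}]

def bento_search(query):
    """Fan the query out to both backends in parallel and return the
    results grouped by source, one group per display box."""
    backends = {"discovery": search_discovery, "resources": search_resources_db}
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in backends.items()}
        return {name: future.result() for name, future in futures.items()}

results = bento_search("climate change")
print(sorted(results.keys()))
```

Keeping each backend's results in its own group means a slow or failing backend only empties its own box rather than holding up the whole page.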
First new tools – My recent resources
The second tool is about trying to see whether giving users access to a list of library resources they have recently accessed is useful to them. If you’re not an OU user (or aren’t signed in) you’ll just see a demonstration list of resources. But if you are signed in you should see a list of the resources you’ve used, with the most recent ones first. These resources will include ones you’ve looked at directly from the library website, or articles that you’ve viewed through our One Stop search discovery system. For this prototype we have offered RSS and RIS formats to export your records so you can put them into your favourite reference management tool. We’ve also included a box on the right listing your most used resources, with the number of times in brackets.
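The RIS export side of this is essentially a matter of serialising each resource as a tagged record. A minimal sketch, with a hypothetical resource dict shape (the standard RIS tags themselves – TY, TI, AU, UR, ER – are real):

```python
def to_ris(resources):
    """Serialise a list of resource dicts as RIS records.
    TY = reference type, TI = title, AU = author, UR = URL,
    ER = end of record (one ER line closes each entry)."""
    lines = []
    for r in resources:
        lines.append(f"TY  - {r.get('type', 'GEN')}")
        lines.append(f"TI  - {r['title']}")
        for author in r.get("authors", []):
            lines.append(f"AU  - {author}")
        if "url" in r:
            lines.append(f"UR  - {r['url']}")
        lines.append("ER  - ")
    return "\n".join(lines) + "\n"

recent = [{"type": "JOUR", "title": "An article", "authors": ["Smith, J."]}]
print(to_ris(recent))
```

A file in this shape can be imported directly into most reference management tools, which is the point of offering the format at all.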
The format and description of the entries just picks up the standard format we already use on the library website and we’ve started to add in book covers for ebooks (although that gets me thinking that I’ve never really worked out what the point is of a book cover for an ebook anyway – Kindles seem to take you to the start of the book, not to the cover, so maybe ebook covers aren’t that relevant anymore – but in any case it breaks up the blocks of text neatly).
The plan is to develop more prototypes and build up a pool of tools in this space that we can get people to look at and comment on. Hopefully it will be useful to people.
Although I’d picked up on the growth of mobiles and tablets overtaking sales of desktop PCs and laptops, one thing that hadn’t become obvious to me was that we now seem to be approaching the point where the number of tablets and smartphones in circulation overtakes the number of desktops and laptops. December’s Internet Trends survey from Kleiner Perkins Caufield Byers shows, in the graph reproduced here, that they’d expect that stage to be reached globally sometime this year.
Although I’d probably have a couple of caveats about smartphone adoption in the developing world slightly skewing the figures, and whether people might ordinarily have more tablets/smartphones than desktops/laptops, it nonetheless emphasises the point that mobile internet access is now mainstream. For many people it may be their preferred means of accessing your services and their expectation is going to be that it should just work, and give an equivalent or better experience than the ‘traditional’ desktop browser experience.
But the number of devices doesn’t yet map to the amount of usage of our websites. For us, traffic from mobiles and tablets is still under 10%, so even if the number of devices in circulation is reaching parity, we aren’t yet at the stage where the majority of our use comes from those devices. But looking at the trends, that day may be on the horizon.
One of the interesting concepts in KPCB’s slideshow is the ‘asset-light’ idea: that more and more people, perhaps younger people especially, may be less inclined to own or acquire physical ‘stuff’ and have a more ‘mobile’ (as in being able to move more readily) lifestyle. It’s characterised as having your music on Spotify or iTunes rather than on physical CDs, or renting rather than buying your textbooks. It also brings to mind a personal version of the concept of ‘just-in-time’, the production strategy based on reducing inventory in favour of delivering items when you need them. It’s the concept of ‘on-demand’ rather than ownership ‘just-in-case’.
Potentially, as characterised in this blogpost on Fail!Lab, it might mean major changes to our library websites, or even to the concept of websites. It’s a good and interesting thought. For a while we’ve certainly been pushing content into places where students go, such as pushing library resources via RSS feeds into our VLE. But these spaces are still websites. Yet once you’ve got a stream or feed of data you could push or pull it into numerous places, whether apps or webpages or systems.
The idea in the Fail!Lab blogpost of artificially intelligent agents doing the ‘heavy lifting’ of finding resources for users is something that Paul Walk raised as part of his Library Management Systems vision (slideshare and blog post), so it’s interesting to see someone else postulating a similar future. For me it starts to envisage a future where users choose their environment/tools/agents and we build systems that are capable of feeding data/content to those agents and are built to a set of data-sharing standards. It suggests a time where users are able to write queries to interrogate your systems, whether for content or for help materials or skills development activity, and implies a world of profiles, entitlements and charging mechanisms that is a world away from the current model of: go to this website, sign up and pass through the gateway into a ‘library’ of stuff.
“Benchmarking is the process of comparing one’s business processes and performance metrics to industry bests or best practices from other industries. Dimensions typically measured are quality, time and cost. In the process of benchmarking, management identifies the best firms in their industry, or in another industry where similar processes exist, and compare the results and processes of those studied (the “targets”) to one’s own results and processes. In this way, they learn how well the targets perform and, more importantly, the business processes that explain why these firms are successful.” http://en.wikipedia.org/wiki/Benchmarking
It’s easy enough to describe what benchmarking is, but the critical question, it seems to me, is who you benchmark against, particularly when what you are benchmarking is a web-based experience. Is it enough to benchmark against organisations who are competing for your customers directly in the market in which you operate, or do you need to look more widely? For comparability you can argue that only those organisations who are in the same business as you offer a way of directly benchmarking what you do against the best the competition can offer. And yes, I’d agree with that.
But I’d argue that you are also competing more generally with a wider group of comparators, in that you are competing for your customers (or potential customers) time and attention, and I think that you are competing on reputation with the best examples who are operating in the channel (i.e. the web) that you are using. And I feel that that argues for a wider range of benchmarking comparators.
So what groups would I expect us to benchmark against?:
- libraries in distance learning institutions who might be offering a similar set of services to us, both direct competitors in our own market, but also those in other markets
- wider HE libraries – even campus-based libraries will all be offering an online experience. It might be additional to their location-based service, and some of their services won’t be relevant, but they may still offer valuable lessons – and again these would not just be local competitors but would come from across the world
- sector organisations and service providers – these could be the best of cultural organisations such as museums, or service providers such as discovery system providers or content providers or other organisations in the sector
- commercial service sector providers – online shopping and online supermarkets, concierge-style services, other online public services and commercial services – all are competing for attention and define what an online experience should be like
- social, communications, media systems and organisations – news organisations for example. But these types of websites are often good examples of best practice and also environments where our users will spend a great deal of time, influencing their perception of what makes a great website experience.
So there’s quite a range of different types of organisations and websites that I’d want to look at to see what we could learn about how to make our website better. In the same way that the hotel industry has influenced the concept of boutique libraries, there are lessons we can learn from other sectors that will help. So, for example, concepts around using recommendations for library resources can draw on practices from websites such as Amazon.
There’s one final thought about benchmarking, in that the most important group to ‘benchmark’ against is your own customers. What are their expectations? You may have a list of good ideas that come out of your benchmarking exercise, but which of them would your customers prioritise?
Search radio buttons
I’ve been looking at search logs again to see what impact placing Keyword, Author and Title radio buttons beneath our One stop search box on the home page of the library website has on user search behaviour. (One stop search is the name we’ve given to EBSCO Discovery Search).
The log file allows us to see the search terms entered in the box and to identify whether the Title or Author radio button was chosen. For the sample file, 12% of the searches were title searches (TI+) and 10% were author searches (AU+). That leaves a large majority of 78% that just left the default Keyword setting for their search.
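The counting itself is straightforward. This is a sketch of the sort of classification involved, assuming (as our logs do) that the raw search string carries a TI+ or AU+ prefix when one of those radio buttons was chosen:

```python
from collections import Counter

def classify_searches(search_terms):
    """Bucket raw logged search strings by radio-button choice:
    'TI+' prefix means a Title search, 'AU+' an Author search, and
    anything else was left on the default Keyword setting."""
    counts = Counter()
    for term in search_terms:
        if term.startswith("TI+"):
            counts["title"] += 1
        elif term.startswith("AU+"):
            counts["author"] += 1
        else:
            counts["keyword"] += 1
    return counts

sample = ["TI+nature", "AU+smith", "climate change", "TI+the lancet", "economics"]
counts = classify_searches(sample)
total = sum(counts.values())
for bucket, n in counts.items():
    print(f"{bucket}: {n / total:.0%}")
```

Running the same tally over a full month's log is what gives the 12% / 10% / 78% split quoted above.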
There is at least one example where a keyword search for what is likely to be an author’s name is followed up by an author search for that name. Even though it isn’t a particularly common name, you get very different search results from One Stop search, which implies to me that there is some real value in having the radio buttons present.
Amongst the search terms are a couple of examples of things where we need to think about how we could best help the user. There are a couple of examples of university course codes, one of which is looking for a specific unit in the course. It’s difficult to know what would be of most help here. It probably isn’t useful for them to see that we might have a copy of the course book in the library. Are they on that course? Might they want a link to it? Or are they looking for resources relevant to that course or unit, in which case should we show them a list of relevant resources from a reading/resource list?
The other area is where the user looks to be trying to find a database or journal rather than an article. Using the title radio button seems to be a definite advantage in getting the title shown fairly prominently in the results, but it can still be a bit hit and miss, especially for titles that aren’t particularly distinctive.
This time I’ve tried a different tool for analysing the text of the search terms. There have been changes to the TAPoRware text analysis tool that I blogged about a while ago, and there are some new beta tools such as Voyant Tools and particularly Cirrus, its word cloud tool, which was used for the picture at the top of this blog post. It includes an optional (and editable) stop words list to remove those words from the word cloud. There is also a range of tools, such as analysing the frequency of words in the text. To access the tools you click through the words in the word cloud, which is a neat approach. It looks like a nice and useful set of tools. Information about them can be found at http://hermeneuti.ca/
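The frequency counting that sits behind a word cloud is easy to reproduce yourself. A minimal sketch with an illustrative stop-word list (Cirrus's own tokenisation and default list will differ):

```python
import re
from collections import Counter

# A tiny illustrative stop-word list; a real one would be much longer.
STOP_WORDS = {"the", "a", "an", "and", "of", "in", "for", "to", "on"}

def word_frequencies(text, stop_words=STOP_WORDS):
    """Tokenise the text, drop stop words, and count what remains --
    the frequency table a word cloud is then drawn from."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stop_words)

log_text = "history of art history of science the science of art"
print(word_frequencies(log_text).most_common(3))
```

The editable stop-word list matters: without it, words like 'of' and 'the' would dominate the cloud and drown out the search terms you actually care about.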
I was interested to see that Stanford University Library’s new website now has a Search Everything tabbed search box. Using the Search Everything feature you get search results drawn from several sources: Books & media (top 3 results), Databases (again top 3), Articles & e-resources (using their own xsearch and Google Scholar) and website results (top 5 results). All these results are shown together on a single page in two columns, and they’ve done a really neat job of presenting those results.
Pulling in a library website search has the added benefit of including information about relevant subject librarians on appropriate search results pages. If you want to see further results, links take you off to the other systems. It will be interesting to see what the user reaction is to the tool and the way results are presented.
It’s nice to see this approach being tried out, and it parallels some of our thinking in trying to draw together results from several different library systems and show them on the same results screen. How you display those results in a way that makes sense to users is a key thing. Stanford’s approach is to show a small number of results, which in some ways is the opposite of the discovery system approach. I’ve certainly heard comments from users that they can find discovery systems overwhelming in terms of how many results they see. And that seems to me to suggest that relevance ranking across our content may be a really critical factor here. Showing a small number of results is fine if there’s a good chance that the results users want are going to be at the top of the list.
Top ten trends in academic libraries
Catching up with reading after a few days away led me (via a RT from @benshowers) to ACRL’s latest article on ‘2012 top ten trends in academic libraries’. (ACRL is the US Association of College and Research Libraries and part of the American Library Association.) It’s an interesting list:
Communicating value; Data curation; Digital preservation; Higher education; Information technology; Mobile environments; Patron-driven e-book acquisition; Scholarly communication; Staffing; and User behaviors and expectations.
Some are obvious: IT, mobiles, the changing nature of higher education. But I find it quite interesting that user behaviour and expectations is flagged up as a top ten trend. It is driven in part, maybe, by increased expectations as the cost of higher education to the student continues to rise, but also by our students being better-informed consumers of online information. Their experience of library search (for example in this blog post by @carolgauld) contrasts markedly with their experiences of the web through online shopping and social networking. And it’s a big challenge for libraries and publishers. All too often it seems that library systems are built with librarians or researchers in mind rather than users.
I also find it interesting that getting across library value is a top trend; that seems again to be something that libraries always struggle with. It’s timely that ACRL have their White Paper ‘Connect, Collaborate and Communicate: A report from the Value of Academic Libraries Summits’ out now. That includes material from Carol Tenopir’s work that I was fortunate to hear about first hand last year. Top of the recommendations is ensuring that librarians understand how libraries contribute to student learning and success. Work such as Huddersfield’s Library Impact Data Project is demonstrating that there is a connection between library usage and attainment, and it’s important that libraries get involved within their institution to make sure that library data is contributed to ‘data warehouses’ and other management information systems, so that library use is taken into account when measuring student achievement.
Two further things in the list stand out for me: Data curation and Digital preservation. Mainly because it’s an area I’m becoming more involved with as we plan and build our new Digital Library (www.open.ac.uk/blogs/OUDL/), but also because it seems to me that a lot of library time is being spent (and going to be spent) in this area of work. Although there’s clearly a step from managing collections of physical items (books and documents) to managing collections of digital items, there’s a sense for me that curation of stuff the library owns is a more ‘comforting’ space for libraries to operate in. Handling access to stuff we license (the subscribed resources world) starts to seem like a different type of activity, maybe a ‘blip’ on the landscape of libraries as curators of collections of stuff the library owns?
Having read Matthew Reidsma’s recent blog post on how the fold metaphor in web design doesn’t really exist, I was intrigued to see that the latest version of Google’s In-Page Analytics has introduced a ‘fold’ feature to show how much web page activity takes place below a certain point on the page. The ‘fold’ idea is connected to a design concept that essentially says people only look at what they see immediately in front of them on a web page and don’t scroll up and down the screen.
In the latest version of Google Analytics In-Page Analytics you get an orange line that slides up and down the page to show how much activity takes place below that line. Because of the way that analytics handles traffic to external links, adding the traffic figures together, it isn’t an entirely accurate tool, but I find it interesting that Google saw the need to introduce this sort of feature. Making the feature slide up and down suggests the thought was that you could use it as a tool to plan where you might put the most important content. But I’m not convinced that it is all that useful, as the tool only moves up and down vertically, not from left to right. And critically for me it doesn’t really represent how your users viewed your content. To make the tool work I think I’d want to segment users by screen resolution and then look at the In-Page Analytics for that segment only. I need to do some investigation to see if segmenting people by screen resolution is feasible.
Thinking about screen resolutions made me check back to the Google Analytics data to see what screen resolutions people use to access one of our sites. While nearly 60% are using just four different screen resolutions from 1024 upwards there have been a total of 1,326 different screen resolutions in just three months. That seems to me to be an astonishing number but it’s probably a reflection of two things. Firstly that we are getting more people using mobile devices, both phones and tablets. Secondly I think it reflects the fact that our latest site has been designed to cope with a wide variety of screen resolutions (largely as a design feature to allow it to work on phones and tablets) and as a consequence if users want to resize their screen to pretty much any resolution they want, the content should reflow reasonably well.
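As an illustration of the sort of tally involved (the visit data below is made up, not our actual figures), you can count the distinct resolutions in a log extract and the share of visits taken by the top few:

```python
from collections import Counter

def resolution_share(resolutions, top_n=4):
    """Count distinct screen resolutions and report what share of
    visits the top_n most common ones account for."""
    counts = Counter(resolutions)
    top = counts.most_common(top_n)
    share = sum(n for _, n in top) / len(resolutions)
    return len(counts), top, share

# Illustrative visit data: a handful of common desktop sizes plus a
# long tail of mobile sizes.
visits = (["1366x768"] * 30 + ["1024x768"] * 15 + ["1280x800"] * 10
          + ["1920x1080"] * 5 + ["320x480"] * 3 + ["768x1024"] * 2)
distinct, top, share = resolution_share(visits)
print(distinct, f"{share:.0%}")
```

The same pattern at full scale is what produces figures like "four resolutions cover nearly 60% of visits, but 1,326 distinct resolutions appear overall".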
The new version of the mobile search for our library website went live earlier in the week. This uses the EBSCO Discovery API to access licensed resources. There’s a screenshot on the left, as access is, I’m afraid, limited to OU students and staff. The new version owes a lot to the work of the developer on the MACON project and has been adapted by our library website developer (@beersoft). Access to the mobile version can be gained from a link on the bottom right of the desktop version, or by autodetection if you are already on a mobile device.
New features include showing the last five items that you have viewed as well as your last ten searches. These are features thought to be particularly useful for mobile users, as the less time spent fiddling around retyping URLs or search strings the better. There is also an advanced search screen that allows Keyword, Author, Title, Published after and Published before searches.
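The 'last five items, last ten searches' behaviour is essentially a pair of bounded, most-recent-first lists. A sketch of one way to model it (not the site's actual implementation):

```python
from collections import deque

class RecentActivity:
    """Keep the last few viewed items and searches for a user, newest
    first, mirroring the 'last five items / last ten searches' feature."""
    def __init__(self, max_items=5, max_searches=10):
        self.items = deque(maxlen=max_items)
        self.searches = deque(maxlen=max_searches)

    def view(self, item):
        if item in self.items:       # re-viewing moves it back to the front
            self.items.remove(item)
        self.items.appendleft(item)  # oldest entry falls off the far end

    def search(self, query):
        if query in self.searches:
            self.searches.remove(query)
        self.searches.appendleft(query)

activity = RecentActivity()
for title in ["a", "b", "c", "d", "e", "f"]:
    activity.view(title)
print(list(activity.items))  # ['f', 'e', 'd', 'c', 'b'] -- oldest dropped
```

A bounded deque keeps the lists cheap to store per user, and the newest-first ordering means the most likely tap target is always at the top of the mobile screen.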
Search results appear within the interface with the search words highlighted. You can choose to have 10, 25 or 50 results per page. Links to the item take you to the EBSCO interface, or, if there is a DOI, to the publisher website via an EZProxy link. It looks like a nice step forward with the search system and it’s good when work that is strongly influenced by work from projects like MACON and RISE gets through into service.