I think it was the quiet concentration that made the first impression on me. Going into a room where a group of library staff were busy carrying out a cognitive mapping exercise. Everyone was engrossed in the activity, heads down, trying out a technique that was new to most of them.
This was part of a different type of staff meeting – with @ and @d_r_jenkins running a great session on UX and ethnography and introducing people to three different UX techniques: cognitive mapping, directed storytelling and love-letters/break-up letters. With only an hour to introduce and try out the techniques, time was short, but it was enough to give people a flavour of the power of these approaches.
It’s been a bit of a journey to get to this point. About eighteen months ago we identified ethnographic techniques as being potentially immensely valuable and something we needed to know more about, experiment with and use as part of our UX practice. The UXLibs conferences and the presentations and blogs about the topic got us up to speed enough to see the potential and to start to talk to people here about it. Primarily we’ve been looking at the approaches from the perspective of how they can be used in our digital service development work around websites, but the wider potential is clear. The Futurelib initiative at Cambridge has been really useful in demonstrating the potential of the techniques. So when the chance came to send some people to a UX day organised by a neighbouring institution with Andy Priestner (@andytraining), it was a great opportunity to spread knowledge about the techniques across the library.
We’re already using these techniques in online sessions with students looking at the future direction of our library websites as part of our digital services work. Our Research Support team are using them with research students in face-to-face sessions. And the session with library staff quickly surfaced other areas where the techniques could be applied – in work with tutors, perhaps.
It was great to see such engagement and enthusiasm with the approach, and really interesting to see the different maps that people drew in the cognitive mapping exercise. Given that we are a group of staff using a standard set of equipment and tools (PCs and iPads, for example), it was remarkable how much variation there was in the maps. That gives a lot of food for thought for the digital capabilities project that is just getting started.
So, we’re at the start of a new project and I thought it was a useful time to reflect on the range of tools we’re using in the early stages of the project for collaboration and project management. These tools cover communication, project management, task management and bibliographic management.
For small projects we’re using the One Page Project Plan, an Excel template from www.oppmi.com. This uses a single Excel worksheet to cover tasks, progress, responsibility and accountability, and also some confidence measures about how the project is progressing. We’ve used this fairly consistently for two or three years for our small projects, and people are pretty familiar not only with how to use them for projects but also with how to read and interpret them. You can only really get about 25-30 tasks onto the OPPP, so it will be used to track activities at a relatively high level, although we can reflect both the work-package level and some tasks within each work-package. Tasks are generally described in the past tense using words such as ‘completed’ or ‘developed’, so although it does give a reasonable overview of when activities are due to be happening, there is less of an appreciation of the actual activities taking place in each time period. There’s a space on the page for a description of the status, and that can be used to flag up what has been completed, or any particular issues. For bigger projects several OPPPs might be used, maybe with a high-level overarching version.
To organise and track the tasks in the project we’re using Trello. This freely available tool lets you create a Board for your project and then arrange your tasks (each one termed a ‘card’) into groupings. So we’ve got several Phases for the project and then To Do, Doing and Done lists of tasks. You can add people to cards, send out emails, set deadlines and so on. You can easily drag cards from one list to another, create new cards and share with the project team. We’re only using the free version, not the Business Class version, and it seems to work fine for us. Trello worked pretty well for our digital library development project, particularly in terms of focusing on which developments went into which software release. So it will be interesting to see how well it works on a project that is a bit more exploratory and research-based.
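As a rough illustration of that workflow, a board is really just a set of named lists with cards moving between them as work progresses. Here’s a toy sketch in Python – the list names mirror our setup, but this is just a model of the idea, not the Trello API itself:

```python
# Toy model of a Trello-style board: named lists of cards,
# with cards dragged (moved) from one list to another.

class Board:
    def __init__(self, *list_names):
        self.lists = {name: [] for name in list_names}

    def add_card(self, list_name, card):
        self.lists[list_name].append(card)

    def move_card(self, card, src, dest):
        # e.g. drag a task from "To Do" into "Doing"
        self.lists[src].remove(card)
        self.lists[dest].append(card)

board = Board("To Do", "Doing", "Done")
board.add_card("To Do", "Draft functional spec")
board.move_card("Draft functional spec", "To Do", "Doing")
```

In Trello itself the drag-and-drop does all of this for you, of course; the point is just how little structure the tool imposes, which is part of why it’s so easy to adopt.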
Looking at what work has already been done in this area is an important part of the project. So at an early stage we’re doing a literature review. That’s partly to be able to understand the context that we’re working in and to give credit (through citations) to ideas that have come from other work, but especially to look at techniques people have been using to investigate the relationship between student success, retention and library use. We’re not expecting that there will be an exact study that matches up with our conditions (the lack of student book loans data for one thing), but the approaches other people have taken are important for us to understand. We’re also hoping to write up the work for publication, so keeping track of citations for other work is vital. To do that we’re using RefMe and have set up a shared folder for the members of the project team to add references they find. RefMe seems to be quite good at finding the full references from partial details, although there are a few we’re adding in manually. To help with retrieving the articles we’re adding in the local version of the URL so we can find the article again. The tool also allows you to add notes about the reference, which can be useful. RefMe has an enormous range of reference styles and can output in a range of formats to other tools such as Zotero, Mendeley, RefWorks or EndNote, for example.
To keep interested parties up-to-date with project activities we’re using a WordPress blog; for this project the blog is at www.open.ac.uk/blogs/LibraryData. We’re fortunate in that we’ve an institutional blog environment established using a locally hosted version of the WordPress software. Although it isn’t generally the latest version of WordPress, there’s little maintenance overhead, we can track usage through the Google Analytics plug-in, and it integrates with our authentication system, so it does the job quite well. We’ve used blogs fairly consistently through our projects and they have the advantage of allowing the project team to get messages and updates out quickly, encourage some commenting and interaction, and allow both short update-type newsy items as well as some more in-depth reflective or detailed pieces. They can be a relatively informal communication channel, are easy for people to edit and update, and there’s not much of an administration overhead. Getting a header sorted out for the blog is often the thing that takes up a bit of time.
Other tools, and tools for the next steps
The usual round of office tools and templates are being used for project documents, for project mandates and project initiation documents, through to documentation of Risks, Assumptions, Issues and Dependencies, Stakeholder plans and Communications plans. These are mainly in-house templates in MS Word or Excel. Having established the project with an initial set of tools, attention is now turning to approaches to manage the data and the statistics. How do we manage the large amount of data to be able to merge datasets, extract data, carry out analyses, develop and present visualisations? Where can we use technologies we’ve already got, or already have licences for, where might we need other tools?
We’ve started using BrowZine (browzine.com) as a different way of offering access to online journals. Up until recently there were iOS and Android app versions but they have now been joined by a desktop version.
BrowZine’s interesting as it tries to replicate the experience of browsing recent copies of journals in a physical library. It links into the library authentication system and is fed with a list of library holdings. There are also some open access materials in the system.
You can browse for journals by subject or search for specific journals and then view the table of contents for each journal issue and link straight through to the full-text of the articles in the journals. In the app versions you can add journal titles to your personal bookshelf (a feature promised for the desktop version later this year) and also see when new articles have been added to your chosen journals (shown with the standard red circle against the journal on the iOS version).
It’s a useful tool if there is a selection of journals that you need to keep up to date with. Certainly the ease with which you can connect with the full-text contrasts markedly with some of the hoops that we seem to expect users to cope with in some other library systems.
To Birmingham at the start of last week for the latest Jisc Library Analytics and Metrics Project (http://jisclamp.mimas.ac.uk/) Community Advisory and Planning group meeting. This was a chance to catch up with both the latest progress and the latest thinking about how this library analytics and metrics work will develop.
At a time when learning analytics is a hot topic it’s highly relevant to libraries to consider how they might respond to the challenges of learning analytics. The 2014 Horizon report has learning analytics in the category of one year or less to adoption and describes it as ‘data analysis to inform decisions made on every tier of the education system, leveraging student data to deliver personalized learning, enable adaptive pedagogies and practices, and identify learning issues in time for them to be solved.’
LAMP is looking at library usage data of the sort that libraries collect routinely (loans, gate counts, eresource usage) but combines it with course, demographic and achievement data to allow libraries to start to be able to analyse and identify trends and themes from the data.
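To make the idea concrete, here’s a hedged sketch in Python of the kind of combination involved: library usage records joined to course and achievement data on a shared (anonymised) student identifier, and then a first simple trend pulled out. All the field names and values here are invented for illustration – the real LAMP data model will differ:

```python
# Illustrative only: joining routine library usage data with
# course/achievement data on an anonymised student id.
from collections import defaultdict

usage = {"s1": {"loans": 12, "logins": 40},
         "s2": {"loans": 0, "logins": 3}}
profile = {"s1": {"course": "History", "result": "2:1"},
           "s2": {"course": "Physics", "result": "2:2"}}

# Inner join on the ids present in both datasets
merged = {sid: {**usage[sid], **profile[sid]}
          for sid in usage.keys() & profile.keys()}

# One example of the sort of theme you can then look for:
# average loans grouped by degree result.
loans_by_result = defaultdict(list)
for row in merged.values():
    loans_by_result[row["result"]].append(row["loans"])
avg_loans = {res: sum(v) / len(v) for res, v in loans_by_result.items()}
```

The analytical work, of course, is in deciding which groupings are meaningful and whether apparent trends are statistically significant – which is exactly what the LAMP tool is setting out to support.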
LAMP will build a tool to store and analyse data and is already working with some pilot institutions to design and fine-tune the tool. We got to see some of the work so far and input into some of the wireframes and concepts, as well as hear about some of the plans for the next few months.
The day was also the chance to hear from the developers of a reference management tool called RefMe (www.refme.com). This referencing tool is aimed at students, who often struggle with the typically complex requirements of referencing styles and tools. To hear about one-click referencing, with thousands of styles and with features to integrate with MS Word, or to scan in a barcode and reference a book, was really good. RefMe is available as an iOS or Android app and as a desktop version. As someone who’s spent a fair amount of time wrestling with the complexities of referencing in projects that have tried to get simple referencing tools in front of students, it is really good to see a start-up tackling this area.
For a few months now we’ve been running a project to look at student needs from library search. The idea behind the research is that we know that students find library search tools to be difficult compared with Google, we know it’s a pain point. But actually we don’t know in very much detail what it is about those tools that students find difficult, what features they really want to see in a library search tool, and what they don’t want. So we’ve set about trying to understand more about their needs. In this blog post I’m going to run through the approach that we are taking. (In a later blog post hopefully I can cover some detail of the things that we are learning.)
Our overall approach is that we want to work alongside students (something that we’ve done before in our personalisation research) in a model that draws a lot of inspiration from a co-design approach. Instead of building something and then usability testing it with students at the end we want to involve students at a much earlier stage in the process so for example they can help to draw up the functional specification.
We’re fortunate in having a pool of 350 or so students who agreed to work with us for a few months on a student panel. That means that we can invite students from the panel to take part in research or give us feedback on a small number of different activities. Students don’t have to take part in a particular activity, but being part of the panel means that they are generally pre-disposed to working with us. So we’re getting a really good take-up of our invitations – so far we’ve had more than 30 students involved at various stages, giving us a good breadth of opinions from students studying different subjects, at different study levels and with different skills and knowledge.
We’ve split the research into three different stages: an initial stage that looked at different search scenarios and different tools; a second stage that drew out of the first phase some general features and tried them on students, then a third phase that creates a new search tool and then undertakes an iterative cycle of develop, test, develop, test and so on. The diagram shows the sequence of the process.
The overall direction of the project is that we should have a better idea of student needs to inform the decisions we make about Discovery, about the search tools we might build or how we might set up the tools we use.
As with any research activities with students we worked with our student ethics panel to design the testing sessions and get approval for the research to take place.
We identified six typical scenarios: finding an article from a reference, finding a newspaper article from a reference, searching for information on a particular subject, searching for articles on a particular topic, finding an ebook from a reference and finding the Oxford English Dictionary. All the scenarios were drawn from activities that we ask students to do, so used the actual subjects and references that they are asked to find. We identified eight different search tools to use in the testing – our existing One stop search, the mobile search interface that we created during the MACON project, a beta search tool that we have on our library website, four different versions of search tools from other universities, and Google Scholar. The tools had a mix of tabbed search, radio buttons and bento-box-style search results, chosen to introduce students to different approaches to search.
Because we are a distance learning institution, students aren’t on campus, so we set up a series of online interviews. We were fortunate to be able to make use of the usability labs at our Institute of Educational Technology and used TeamViewer software for the online interviews. In total we ran 18 separate sessions, with each one testing 3 scenarios in 3 different tools. This gave us a good range of different students testing different scenarios on each of the tools.
Sessions were recorded and notes were taken so we were able to pick up on specific comments and feedback. We also measured success rate and time taken to complete the task. The features that students used were also recorded. The research allowed us to see which tools students found easiest to use, which features they liked and used, and which tools didn’t work for certain scenarios.
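The analysis behind those measures is simple enough to sketch: session records rolled up into a success rate and a mean completion time per tool. The records below are made up for illustration (they aren’t our actual results), but the shape of the calculation is the one described above:

```python
# Illustrative rollup of usability session records:
# (tool, scenario, task succeeded?, seconds taken)
sessions = [
    ("One stop search", "find article", True, 95),
    ("One stop search", "find ebook", False, 180),
    ("Google Scholar", "find article", True, 60),
]

def summarise(records):
    by_tool = {}
    for tool, _scenario, success, secs in records:
        stats = by_tool.setdefault(tool, {"n": 0, "ok": 0, "secs": 0})
        stats["n"] += 1
        stats["ok"] += success      # True counts as 1, False as 0
        stats["secs"] += secs
    return {tool: {"success_rate": s["ok"] / s["n"],
                   "mean_secs": s["secs"] / s["n"]}
            for tool, s in by_tool.items()}

summary = summarise(sessions)
```

Numbers like these are most useful alongside the recorded comments – a tool with a good success rate can still generate a lot of frustration on the way to the answer.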
For the second phase we chose to concentrate on testing very specific elements of the search experience. So for example, we looked at radio buttons and drop-down lists, and whether they should be for Author/Title/Keyword or Article/Journal title/library catalogue. We also looked at the layout of results screens, and the display of facets, to ask students how they wanted to see date facets presented for example.
We wanted to carry out this research with some very plain wireframes to test individual features without the distraction of website designs confusing the picture. We tend to use a wireframing tool called Balsamiq to create our wireframes rapidly, and we ran through another sequence of testing, this time with a total of 9 students in a series of online interviews, again using TeamViewer.
By using wireframing you can quickly create several versions of a search box or results page and put them in front of users. It’s a good way of being able to narrow down the features that it is worth taking through to full-scale prototyping. It’s much quicker than coding the feature and once you’ve identified the features that you want your developer to build you have a ready-made wireframe to act as a guide for the layout and features that need to be created.
The last phase is our prototype building phase and involves taking all the research and distilling it into a set of functional requirements for our project developer to create. In some of our projects we’ve shared the specification with students so they can agree which features they wanted to see, but with this project we had a good idea from the first two phases what features they wanted in a baseline search tool, so we missed out that stage. We did, however, split the functional requirements into two stages: a baseline set of requirements for the search box and the results; and then a section to capture the iterative requirements that would arise during the prototyping stage. We aimed for a rolling cycle of build and test, although in practice we’ve set up sessions for when students are available and then gone with the latest version each time – getting students to test and refine the features and identify new features to build and test. New features get identified and added to what is essentially a product backlog (in scrum methodology/terminology). A weekly team meeting prioritises the tasks for the developer to work on and we go through a rolling cycle of develop/test.
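That backlog loop can be sketched very simply: new features get appended as testing surfaces them, the weekly meeting assigns priorities, and the developer picks up the highest-priority open item. The feature names and priority scheme below are invented for illustration:

```python
# Minimal product-backlog sketch: lower number = higher priority.
backlog = []

def add_feature(name, priority=99):
    # priority 99 = "unprioritised until the weekly meeting"
    backlog.append({"name": name, "priority": priority, "done": False})

def next_task():
    # What the developer works on next: highest-priority open item.
    open_items = [f for f in backlog if not f["done"]]
    return min(open_items, key=lambda f: f["priority"]) if open_items else None

add_feature("Baseline search box", priority=1)
add_feature("Date facet display", priority=2)
add_feature("Spelling suggestions")   # surfaced in testing, not yet prioritised

task = next_task()
```

In practice the Trello board plays the role of `backlog` here, and the weekly meeting plays the role of `next_task` – but the logic of the rolling develop/test cycle is just this.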
Reflections on the process
The process seems to have worked quite well. We’ve had really good engagement from students and really good feedback that is helping us to tease out what features we need to have in any library search tool. We’re about half way through phase three and are aiming to finish off the research for the end of July. Our aim is to get the search tool up as a beta tool on the library website as the next step so a wider group of users can trial it.
We’ve been using Trello (http://trello.com) as a tool to help us manage the lists of tasks in the digital library/digital archive project that we’ve been running. After looking at some of our existing tools (such as Mantis Bug Tracker, for example) the team decided that they didn’t really want the detailed tracking features, and didn’t feel that our standard project management tools (MS Project and the One Page Project Manager, or Outlook tasks) were quite what we needed to keep track of what is essentially a ‘product backlog’, a list of requirements that need to be developed for the digital archive system.
Trello’s simplicity makes it easy to add and organise a list of tasks and break them down into categories, with colour-coding and the ability to drag tasks around from one stream to another. Being able to share the board across the team and assign members to tasks is good. You can also set due dates and attach files, which we’ve found useful for attaching design and wireframe illustrations. You can set up as many different boards as you need, so you can break down your tasks however you want. The boards scroll left and right, so you can have as many columns as you need.
We’ve been using it to group together priority tasks into a list so the team know which tasks to concentrate on, and when the tasks are done the team member can update the task message so each task can be checked and cleared off the list.
We’re mainly using Trello on the desktop straight from the website, although there is also an iPad app that seems to work well. For a fairly small team with just a single developer Trello seems to work quite well. It’s simple and easy to use and doesn’t take a lot of effort to keep up to date – it’s a practical and useful tool. If you had a larger project you might want to use more sophisticated tools that have some ability to track progress and effort and produce burndown charts, for example, but as a simple way of tracking a list of tasks to be worked on, it’s a useful project tool.
To Birmingham today for the second meeting of the Jisc LAMP (library analytics and metrics project) community advisory and planning group. This is a short Jisc-managed project that is working to build a prototype dashboard tool that should allow benchmarking and statistical significance tests on a range of library analytics data.
The LAMP project blog at http://jisclamp.mimas.ac.uk is a good place to start to get up to speed with the work that LAMP is doing and I’m sure that there will be an update on the blog soon to cover some of the things that we discussed during the day.
One of the things that I always find useful about these types of activity, beyond the specific discussions and knowledge sharing about the project and the opportunity to talk to other people working in the sector, is that there is invariably some tool or technique that gets used in the project or programme meetings that you can take away and use more widely. I think I’ve blogged before about the Harvard Elevator pitch from a previous Jisc programme meeting.
This time we were taken through an approach of carrying out a review of the project a couple of years hence, where you had to imagine that the project had failed totally. It hadn’t delivered anything that was useful, so no product, tool or learning came out of the project. It was a complete failure.
We were then asked to try to think about reasons why the project had failed to deliver. So we spent half an hour or so individually writing reasons onto post-it notes. At the end of that time we went round the room reading out the ideas and matching them with similar post-it notes, with Ben and Andy sticking them to a wall and arranging them in groups based on similarity.
It quickly shifted away from going round the room formally to a more collective sharing of ideas, but that worked well, and the technique seemed really effective at capturing challenges. So we had challenges grouped around technology and data, political and community aspects, and legal aspects, for example.
We then spent a bit of time reviewing and recategorising the post-it notes into categories that people were reasonably happy with. Then came the challenge of going through each of the groups of ideas and working out what, if anything, the project could or should do to minimise the risk of that possible outcome happening. That was a really interesting exercise, identifying actions the project could take, such as engagement activities to encourage take-up.
A really interesting demonstration of quite a powerful technique that’s going to be pretty useful for many project settings. It seemed to be a really good way of trying to think about potential hurdles for a project and went beyond what you might normally try to do when thinking about risks, issues and engagement.
It’s interesting to me how so many of the good project management techniques work on the basis of working backwards. Whether that is about writing tasks for a One Page Project Plan by describing the task as if it has been completed, e.g. ‘Site launch completed’, or about working backwards from an end state to plan out the steps and the timescale you will have to go through. These both envisage what a successful project looks like, while the pre-mortem thinks about what might go wrong. A useful technique.
The challenge of the moment was to put together a video, and although I’ve played around with videos I haven’t had much experience of creating and editing one. So I thought it would be interesting to blog about the tools I’ve been using, how I’ve found them and what some of the challenges were. One of the principles was to see what I could do with the standard office-type tools I had and what was available free on the web or as a download.
Assembling the stuff
The starting point was to pull together some content from a variety of different sources, mainly videos and images. For the videos, one of the challenges was to get hold of copies so they could be edited locally. One of the useful tools was KeepVid, which lets you download streaming video from some locations such as YouTube. The first few times I tried to use it I ended up downloading iLivid, until I worked out to ignore the big coloured buttons, which are adverts that presumably pay for the tool to be free. But once I’d worked out that all I needed to do was paste in the URL and click the grey Download button it worked really well, and it gives you a choice of several video formats. I chose MP4 and saved the video locally. (I don’t know why, but Firefox always annoys me by not letting me choose where to save a downloaded file, just putting it in the download folder where I have to move it.)
KeepVid supports quite a range of streaming sites. The FAQ lists
Which websites does KeepVid support?
Dailymotion, 4shared, 5min, 9you, AlterVideo.net, Aniboom, blip.tv, Break, Clipfish.de, Clipser, Clip.vn, CollegeHumor, Cracked, Current, dekhona.com, DivxStage.eu, eHow, eBaumsWorld, Ensonhaber, Facebook, Flickr, Flukiest, FunnyJunk, FunnyOrDie, FunnyPlace.org, Metacafe, MySpace, Ning, Photobucket, RuTube, SoundCloud, Stagevu, TED, Tudou, TwitVid, VBOX7, videobb, VideoWeed.es, Veoh, Vimeo, zShare.net
Some videos couldn’t be downloaded using KeepVid, so after looking around at options I went with Camtasia Recorder 8 from TechSmith. This has a 30-day free trial, which is enough time to play around with it and test it out. Camtasia is screen recording software designed for screen capture and quite commonly used for creating learning activities, so it was something I’d come across before. One of the things that Camtasia allows you to do is to capture activities on a screen; typically you might record navigating around a website. But for my purposes I’ve used it to capture a video playing on the screen. Camtasia lets you select an area of the screen and also adjust the sound levels. [Note: the sound levels are really important when it comes to editing your final video.]
I’ve also made use of PowerPoint as a means of creating some images to use between videos, images that try to tell a story and set some context from one sequence to the next.
Having created the slides I’ve then just used Jing (again from TechSmith) to screencapture each slide to turn them into .PNG image files to use in the video. I could have used any of a number of different tools, but Jing is one I use all the time. It just sits at the top of my desktop and I regularly use it whenever I want to grab bits of an image and just save it to use elsewhere. So I use it all the time for images for this blog for example. It’s simple to just select an area of the screen and capture that as an image. Often I’ll use Jing in combination with something like Paint.net to select an image and then resize or crop it. Pretty basic stuff but it just gives me enough flexibility to tweak things without going to a more sophisticated and complicated tool.
Editing the stuff
Having assembled much of the raw materials and worked out a rough storyboard based on the original idea for the video, I’ve then gone back to Camtasia, this time Camtasia Studio 8 (with my 30-day free trial), to edit the video. One of the features of Camtasia Studio is that you can use it to trim your video extracts down to just the clips that you want. It’s a pretty powerful tool and I’m only scratching the surface of its functionality. In retrospect, and looking at the features, I’m pretty sure that I could have used Camtasia Studio for the video editing stage in its entirety.
But I’d already started playing with another tool, Windows Movie Maker, to build the video. Windows Movie Maker is available as a free download as part of Windows Essentials 2012 from Microsoft for Windows 7 and Windows 8. I’d not used it at all before, and I must admit it was pretty straightforward to assemble a collection of clips and images into a video that tells a story. It lets you edit your clips together and shows them as a succession of elements, very much like a film. It’s even got a little skeuomorphic trick of showing the top and bottom film guide holes at the start and end of each of your clips. (Incidentally, I notice this week news that skeuomorphism is out for Apple’s new iOS 7.) It’s interesting that Windows Movie Maker also uses a film icon for each of your projects.
It’s quite simple to use but a surprisingly powerful tool. It’s easy to add videos and images, you can add sound tracks or music, and you can fine-tune the length that still images will be shown on screen. You can use a number of transition tools to manage the changeover from one screen to the next and achieve some almost cinematic effects (with the temptation to do far too much). You can also adjust your music track to fit your video and fade in and out. There are also scrolling titles and credits features where you can determine how your closing credits will appear (or replicate the sci-fi big block of text scrolling up the screen and disappearing off into the distance … I resisted the temptation!).
There is also quite a good set of tools for uploading to several video sharing sites or packaging your video for use, although if you just want to save as a video file then MP4 or Windows Media file formats seem to be the main options. But overall really impressed with the ease of use of the tool.
It was easier than I thought it might be to get something that looks OK: a sequence of video and still images cut together with reasonable transitions, start and end titles and a mix of overlaid music. I’m impressed at the range of tools out there that are essentially free (once you’ve invested in a reasonable-spec PC, Windows 7, MS Office and fast internet access – so there’s a barrier at that level). Camtasia is something to follow up and learn more about. You can go quite a long way with tools that are easily available without spending a lot of money on a high-specification tool.
Saving the file to create your final output takes some time, even on a pretty good specification laptop. And file sizes are large (150MB for something around 8 minutes). But maybe 150MB isn’t a large file now, when 1TB external drives are pretty cheap. Editing the audio, and particularly getting the sound levels right, is quite challenging. Where you’ve a mix of videos with sound tracks already on them it isn’t so straightforward to get everything at the right level, so a better sound-editing tool might have been good. But how easy it would be to extract all the sound and re-record it I’m not sure. Something else to learn.
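As a quick sanity check on that file size, 150MB over roughly 8 minutes works out at about 2.5 megabits per second, which is a fairly typical bitrate for standard-definition MP4 video, so the file isn’t unreasonably large for what it is:

```python
# Back-of-the-envelope bitrate check for the exported video.
size_megabits = 150 * 8        # 150 MB expressed in megabits
duration_secs = 8 * 60         # roughly 8 minutes
bitrate_mbps = size_megabits / duration_secs   # megabits per second
```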
But a good learning opportunity and interesting to work through what you can do.
It was great to see this week that the latest opportunity on the Jisc Elevator website is one for students to pitch ideas about new technology ideas. That’s really nice to see something that involves students in coming up with ideas and backing it up with a small amount of money to kickstart things.
Using students as co-designers for library services, particularly in relation to websites and technology, is something that I’m finding more and more compelling. A lot of the credit for that goes to Matthew Reidsma from Grand Valley State University in the US, whose blog ‘Good for whom?’ is pretty much essential reading if you’re interested in usability and improving the user experience. I’m starting to see that getting students involved in co-designing services is the next logical step on from usability testing. So instead of a process where you design a system and then test it on users, you involve them from the start: asking them what they need, maybe then getting them to feed back on solution designs and specifications, and then going through the design process of prototyping, testing and iterating, with them looking at every stage. It’s something that an agile development methodology particularly lends itself to. Examples where people have started to employ students on the staff to help with getting that student ‘voice’ are also starting to appear.
There are some examples of fairly recent projects where universities have been getting students (and others outside the institution) involved in designing services: for example, the Collaborate project at Exeter, which looked at using students and employers to design 'employability-focussed assessments'. There is also Leeds Metropolitan with their PC3 project on the personalised curriculum, and Manchester Metropolitan's 'Supporting Responsive Curricula' project. And you can add to that list the Kritikos project at Liverpool that I blogged about recently.
For us, with our focus on websites and improving the user experience, we've been working with a group of students to help us design some tools for a more personalised library experience. I blogged a bit about it earlier in the year. We're now well into that programme of work and have put together a guest blog post for Jisc's LMS Change project blog, 'Personalisation at the Open University'. Thanks to Ben Showers from Jisc and Helen Harrop from the LMS Change project for getting that published. Credit for the work on this (and the text for the blog post) should go to my colleagues: Anne Gambles, Kirsty Baker and Keren Mills. Having identified some key features to build, we are well into finalising the specification for the work and will start building the first few features soon. It's been an interesting first foray into working with students as co-designers, and one I think has major potential for how we do things in the future.
Infographics and data visualisations seem to be very popular at the moment, and for a while I've been keeping an eye on visual.ly as they have some great infographics and data visualisations. One of the good things about the visual.ly infographics is that there is some scope to customise them. For example, there is one about the 'Life of a hashtag' that you can customise, and several others around Facebook and Twitter that you can use.
I picked up on Twitter the other week that they had just brought out a Google Analytics infographic. That immediately got my interest, as we make a lot of use of GA. You just point it at your site through your Google Analytics account and then get a weekly email, 'Your weekly insights', created dynamically from your Google Analytics data.
It's a very neat idea and quite a useful promotional tool for giving people a quick snapshot of what is going on. You get Pageviews over the past three weeks, the trends for New and Returning Visitors, and reports on Pages per visit and Time on site and how they have changed in the past week.
It's quite useful for social media traffic, showing how Facebook and Twitter traffic has changed over the past week. As these are channels where you often want quick feedback, it's a nice visual way of showing what difference a particular activity might have had.
Obviously, as a free tool, there's a limit to the customisation you can do. It might be nice to have visits or unique visitors to measure change in use of the site, or your top referrals, or the particular pages used most frequently. The fixed time period is something that possibly makes it less useful for me, in that I'm more likely to want to compare against the previous month (or even the same month last year). But no doubt visual.ly would build a custom version for you if you wanted something particular.
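If you did want a month-on-month comparison like that, it's simple enough to work out yourself from figures exported from your GA account. A minimal sketch (the `percent_change` helper and the numbers are purely illustrative, not part of the infographic or of any GA API):

```python
# Illustrative sketch: period-over-period change from exported
# Google Analytics totals (e.g. pageviews for two months).

def percent_change(current, previous):
    """Percentage change between two period totals, to one decimal place."""
    if previous == 0:
        return None  # no baseline to compare against
    return round((current - previous) / previous * 100, 1)

# Hypothetical monthly pageview totals
this_month = 1450
last_month = 1200
print(percent_change(this_month, last_month))  # 20.8
```

The same calculation works for week-over-week or same-month-last-year comparisons; only the exported totals change.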
But as a freely available tool it's a useful thing to have. The infographic is nicely designed and gives a visually appealing presentation of analytics data that can often be difficult to present to audiences who don't necessarily understand the intricacies of web analytics.
The Google Analytics Visual.ly infographic is at https://create.visual.ly/graphic/google-analytics/