You are currently browsing the monthly archive for September 2011.
A couple of reflections on two aspects of data that came to mind this week: firstly, tools to manipulate data from the growing range of datasets; and secondly, some thoughts on data for decision-making.
Stanford University has released an alpha version of a nifty tool to manipulate data, the Data Wrangler. It's a tool that allows you to take some data, paste it into their tool and then play around with reformatting it. It seems to be a really powerful tool: you select data in a cell and it uses what it knows about that data to offer suggested transformations. They demonstrate selecting a US state name, and the tool highlighting all the state names and showing them in a separate column.
When you play around with the tool you see that when you select something within the data it offers a series of suggestions for extracting it into a separate column. You can then go through a sequence of steps to transform the data into something much more usable. It looks like a really good way to tidy up a dataset into something that can be used much more easily in a spreadsheet or turned into a visualisation.
There’s a really good introductory video here. At the moment you can only use the tool by going to their website and pasting in some text but the intention is to ultimately release the tool as open source. One definitely to keep an eye on.
Data and decision making
Just before the summer we had our annual library awayday (a sort of stayawayday, as it was on campus this year). We all split up into teams and spent the day playing a business management game: each team ran a series of leisure/fitness centres, made decisions on fees and charges, staffing, marketing and so on, and went through a series of six-monthly business cycles to see the effect of our decisions.
The key to the game was that we were all presented with a series of sheets of data covering balance sheets, market intelligence, profit and loss accounts and so on, and each round we had an updated set of data to use in the next round. The game was interesting, and at the end of several business cycles there was a wide variation in how successful each of the businesses had been: if I remember correctly, anything from a million-pound profit to a million-pound loss.
That made me think about data. There's a business mantra that points to the importance of 'facts and data', and yes, I agree that facts and data are important when managing any business or service. But that's only half the story. The data has to be relevant, accurate and meaningful. And even if the data is accurate, it has to be interpreted properly and the correct business decisions made. In our game everyone started with the same facts and data but made different decisions, leading to radically different outcomes. Which suggests to me that the critical thing is making sure the right decisions are made on the basis of the data.
Following a comment on this blog by Nick Lewis on my quick thought about the Kindle, I've been looking in more detail at how the pagination matches up in Kindle ebooks across three different devices: a PC, an iPad and an Android phone. The PC and iPad versions show both pages and locations; the Android app just shows the locations.
So far I've only looked at three fairly new novels, as most of the older material I've looked at didn't seem to show page numbers at all. In all three cases the location will take you to the same place within the text, or at least to the page that contains that location.
When it comes to pages, the PC and iPad versions have the same number of pages but the content on each page isn't exactly identical. In the samples I tried it varied from a few lines' difference to several paragraphs or more. Given the variations in screen size that probably isn't surprising (and I'll check it against the Kindle itself to see if there is more variation). But the variation means that to give an accurate reference you are not only going to have to specify that it is the Kindle version of the ebook, but also the device you are using to access it.
There are some other thoughts on the web about citing Kindle ebooks; Booksprung, for example, has an interesting discussion at http://booksprung.com/how-to-cite-a-kindle-ebook. Our institutional Harvard referencing guidance (OU Harvard Guide to Referencing, December 2010) has this to say:
4.4 ebooks on ebook readers
The correct format for referencing an ebook used on an ebook reader (such as a Kindle reader) is: Author, A. (year of ebook publication) Title of Book [ebook], place of publication, publisher.
Matthews, D. J. (2010) What Cats Can Teach Us [ebook], London, Penguin.
In-text citation: (Matthews, 2010) or Matthews (2010) notes that…
As page numbers are not available on ebook readers, use the chapters instead for indicating the location of a quoted section:
In-text quotation: Matthews notes that ‘kittens are often delightful’ (2010, Chapter 6)
This suggests referring to a chapter rather than trying to find specific page numbers. Although the Kindle at least now does include page numbers, the fact that they aren't absolute and vary depending on your reader may mean that this is the best solution.
Owen Stephens (@ostephens) has just pointed out that you could use the percentage to provide a citation for exactly where you are referring to in the ebook. The percentage is present in all the different types of Kindle apps I've tested.
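Amazon doesn't document how the percentage is worked out, but it appears to be simply the current location over the total number of locations in the book. A small, hypothetical helper for turning a location into a citable percentage (the function name and the figures here are my own invention) might look like:

```python
def location_to_percentage(location, total_locations):
    """Turn a Kindle location into the rounded percentage the apps
    display (assumption: percentage = location / total locations)."""
    if total_locations <= 0:
        raise ValueError("total_locations must be positive")
    return round(100 * location / total_locations)

# e.g. a quote at location 1850 in a book with 7400 locations
print(location_to_percentage(1850, 7400))  # 25
```

A citation could then read something like (Matthews, 2010, 25%), though as with locations you'd presumably still want to note that it's the Kindle version.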
We finished off and submitted our latest funding bid earlier in the week. It's something we've been putting together across the summer, and it has been a bit more challenging than at other times. I must admit that having a funding call we were very interested in come out on the Friday before I went on holiday wasn't the best timing for us. Fortunately it had been flagged up in the future funding calls for a while, so we'd already done some thinking about things we wanted to do.
Trying to do this sort of bid during the summer presents some different challenges. Bids are increasingly collaborative in nature, so we involve people from other units, on project boards for example, and get their feedback on bid documents. That means getting hold of all sorts of people at a time when many academics are away. So getting stuff looked at and commented on can be tricky and takes longer, and not everyone you want to talk to is available.
Because we had been thinking about what we would want to do, we were able to get straight on with pulling the bid into shape. We had a short summary document already and while I was away on leave colleagues started putting it into the right format for the funding document. Having the document in the right format made it much easier to concentrate on filling in the gaps and getting people to contribute to it. So it didn’t take us too long to get to a draft that could be refined and tidied up.
Despite the time of year we managed to get everything together in time. But I do wonder whether other people had the same problems writing bids during the summer, and whether it might affect the number of bids the funding body gets this time round.
As we’ve worked our way through the various stages of the library website project we’ve used a number of different tools and techniques. These have included tools to find out what works on the current site, what users think, to plan how the content should be arranged and to engage with users and staff. As we draw towards the launch of the site it seems like a good time to reflect on those tools, how we have used them and what they have told us.
Analytics

We've been using Google Analytics (www.google.com/analytics) for some time, and in many ways it provides the foundation for our work around the website. It can be used to identify basic things such as which pages are being used in your current site (and which pages aren't) and the paths that people take through your site. It can also tell you where people are coming from to visit your site. We know that a large number of our users come from our institutional VLE, which has informed our decisions about some of the terminology we use in the new site: we're using 'Library resources' to describe our 'stuff' to be consistent with the VLE. Analytics gives us a vast amount of data, and interpreting that data is key to any redesign project.
Card sort exercises
It's a pretty well-established technique to use card sorting exercises to help with developing the information architecture of a site (see e.g. http://en.wikipedia.org/wiki/Card_sorting). As an early part of the design work we carried out this type of exercise with a group of library staff to try to get an idea of a sensible information architecture for the new site. We ended up using it very much as a starting point rather than a finished article, as we were keen to test it with users to validate it. On reflection it does seem to be hard to get people to visualise how the website information architecture will translate into a navigation system on a real website.
Flipcharts

An almost obligatory component of workshops, often found in combination with post-it notes and card-sort exercises. Even in these digital times they still seem an inevitable element when a group of people get together to plan something.
Navigation testing

Once we had come up with a prototype information architecture we wanted to test it with users to see if it made sense to them. There are a few tools out there that let you set up quick tests for users to complete: essentially they ask users to navigate through a website structure to find a particular page, testing whether your information architecture makes sense to them. We went for a tool called Plainframe (http://plainframe.com). It costs a small amount of money, but had the advantage for us that the pricing was based on the number of tests you ran rather than being time-limited. We were able to offer the test to a group of users, and it was certainly useful to see how they reacted to the site; it has led to some tweaks in the IA.
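Underneath, the core metric these navigation-testing tools report is straightforward: for each task, what proportion of participants ended up on the right page. A sketch with made-up tasks and results (none of this is Plainframe's actual data or API):

```python
# Hypothetical tree-test results: for each task, the target page and
# the page each participant actually finished on.
results = {
    "Renew a book": {
        "target": "account/loans",
        "answers": ["account/loans", "account/loans", "help/renewals"],
    },
    "Find past exam papers": {
        "target": "resources/exams",
        "answers": ["resources/exams", "search", "resources/exams"],
    },
}

success_rates = {}
for task, data in results.items():
    # A "hit" is a participant who finished on the correct target page.
    hits = sum(1 for answer in data["answers"] if answer == data["target"])
    success_rates[task] = 100 * hits / len(data["answers"])
    print(f"{task}: {success_rates[task]:.0f}% success")
```

Tasks with low success rates are the ones flagging labels or structures that need a rethink, which is essentially where our IA tweaks came from.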
Polls

We decided fairly early on that we wanted to find some different ways to engage with users, so one of the techniques we used was to run a quick poll on the library website asking students about features they'd want to see, particularly around 'induction-type' content. Positioned prominently on the website homepage, it got a really good reaction, with several hundred responses that greatly helped with defining the content for this section.
Specification

The key document for the project was a specification. This set out the audience for the site, the page layouts, the information architecture and navigation, and was created as an output from the workshops, surveys and exercises. The intention was that it would be the focus for discussions about what went into the site and end up as a tool we would use to get agreement over how the site was to be created. It probably didn't work quite as well as we'd expected: we found it difficult to get people to engage with the document and visualise what it would look like when turned into a real website, and we ended up having to make changes to things like the IA once the site structure started to take shape.
In an ideal world with a project of this nature you want a functional specification that is created and agreed before any development work starts. In reality it is difficult to do that when you move to a new platform like Drupal, as you don't always know at the start exactly what is and isn't possible. Users often need to see some form of prototype to be able to decide what they want, and a paper prototype (whilst useful) isn't always enough.
Surveys

One of the starting points for us was the results (and particularly the detail) from the Library Survey carried out in 2010. Although the results were good, they did identify some particular problems with search and accessing library resources.
We've also been conscious throughout the project that there is a big issue around terminology (something that libraries seem to have a particular problem with). Users seem to struggle with library terminology, so we ran further surveys using a tool called SurveyMonkey (www.surveymonkey.com) to find out users' preferences on the information architecture and terminology of the site. We will also use SurveyMonkey to capture some structured feedback from users once the site goes into the beta and launch stages. We find SurveyMonkey really useful: it lets you design a series of questions and then collect, download and analyse the results easily, and we use it extensively to get feedback from users.
Wireframes

One of the main techniques used to plan out what your website is going to look like when it is built. We've made extensive use of wireframes for the home page and sub-pages within the new site. I think they are essential to help visualise what the site will look like, but I am aware that some people find it hard to picture the finished website from a wireframe and want to see something that looks much more like a prototype.
Workshops
We used workshops extensively in the project. In the initial stages it was to help with user requirements and information architecture; we've also made a lot of use of them for the work around arranging the subject categories and subject resources. They can be quite time-consuming to set up, run and, particularly, to analyse the results of, but they have the distinct advantage of being a great way of getting people engaged with the project and generating new ideas.
I got to the end of one of my Kindle books the other day and it suddenly dawned on me that I'd read some of the book on an iPad, some on a PC, some on a phone and the rest on the Kindle device itself. I had entirely taken it for granted that I could pick up on another device from the page I'd last read. I find it interesting that something like that, which would have been pretty much unthinkable a few years ago, is now commonplace.
I'm finding the range of Kindle reading applications really useful. I've got them on a couple of PCs, an iPad and a phone, so it's pretty easy to pick up something I'm reading, as there's rarely a time when I don't have some form of electronic device with me.
It's good that Amazon's marketing people worked out that making it easy for people to access their content on as many devices as possible was the way to go. It's great that there aren't limitations on which devices you can read on.
Not that there aren't a few tweaks to some of the apps and tools that it would be good to see. I've got my content on the Kindle arranged in themed folders, so it would be good to be able to pick that up in some way in the other reading apps. I also know that as my ebook library grows I'm going to want better tools to search that 'elibrary', so it would be good to be able to tag books and search for them (hmmm, sounds suspiciously like cataloguing!).
I spent a load of time the other day updating the iPad to iOS 4.3.5. The new OS version had some security updates and the iPad hadn't been updated since I first got it, so it seemed like a sensible thing to do. What I wasn't prepared for was quite what a dispiriting experience it turned out to be. I worked out that the upgrade had to be done with the iPad connected to a PC. OK, so perhaps not quite what you'd expect from an internet-connected device (and something that will apparently change with iOS 5 and its PC Free feature: http://www.apple.com/ios/ios5/features.html). But the device complained that it wasn't connected to the same PC it had originally been set up from, and took an age to download the update files. It seemed quite keen on giving you the option to reset it back to its original settings, and much less obvious about giving you the option to back up the device to the PC.
It seemed to me that there was an underlying assumption that you would want to synchronise what was on your PC with the iPad, rather than take stuff from the iPad to the PC. When I'm increasingly (and I doubt I'm alone) using the iPad as my main day-to-day device when on the move, I'm more likely to want to push stuff from the iPad to the PC. And using different PCs (work, home etc.) is going to be normal; I'm not necessarily going to want to sync everything across all the devices all the time.
Once the update had finally finished I discovered that it had messed up all the iPad apps I'd downloaded since I'd had the device: they would open for a second and then close. After a bit of browsing around various Apple forums I tried several different solutions, watching videos and reinstalling various apps. I've ended up removing and reinstalling some of them, and then frustratingly realising that the TweetDeck iPad app was no longer available to be reinstalled.
All in all it was not a particularly impressive experience, strangely reminiscent of early Windows updates that could be a bit hit and miss, and not something you really expect from an Apple device at all.