You are currently browsing the monthly archive for December 2010.

Ebooks and ebook readers
Although I’m pretty convinced about the value of ebooks (with the usual provisos about the annoyance of the variety of proprietary formats and rights restrictions), I’ve generally been a bit sceptical about ebook readers in the past and haven’t been all that impressed with some of the early versions.  But I’ve finally decided to commit to one, partly because I came to realise that the piles of already-read books in the backroom were starting to act as effective wifi insulation for my desk and masking the signal, and partly because it would be convenient to carry more than one book around to read on the bus or train.  I also wanted a device that could get internet access, with a screen larger than a mobile phone, and more portable (and cheaper) than an iPad or netbook.  So just before Christmas I ordered an Amazon Kindle.

First Impressions
I went for the 3G version rather than the wifi-only version.  The 3G version lets you continue to receive downloaded content even when you are away from a wifi connection.  It’s certainly been in my mind to see if I can use it as a device when I’m away from the office, but I’m not yet sold on the idea that the 3G version offers a lot more value, as I’m only likely to be sending content to it from a laptop or some other internet-connected device (whether I’m buying content online or directly connected to transfer content).

I also bought a cover with it to protect the device, and that seems to work really well.  A leather cover with a grey soft inner.  The Kindle clicks into the cover with a couple of hooks and there is an elasticated strap to secure the cover closed.  It’s about the size of a slim paperback, like an early Penguin and it weighs about as much as a hardback book.  It feels fairly solid and robust but is small and light enough to fit into a bag.

On the Kindle itself there are a few buttons and connections on the bottom (volume control, headphone socket, socket for USB connection and on/off slider).  Otherwise there are forward and back buttons repeated on each side and a qwerty keyboard taking up the bottom quarter of the device.

As well as the standard keys there are Shift and Alt keys, a key (Aa) that lets you change the text size and orientation, Home, Menu, Del and Back keys, and a five-way navigation tool.  The keyboard seems very small to me, but then again it is larger than a mobile phone keyboard or iPhone pop-up keypad, and it seems designed to be used in two hands with your thumbs pressing the keys.  That is probably easier if you have smaller hands, and as a first impression I think I’d like the five-way navigation button to be bigger.  As an observation, there’s a good 1cm between the top row of letters and the bottom of the screen, so maybe there’s space for slightly larger keys.

There’s also a SYM key that you press to get numbers and the variety of symbols essential for email and internet (although numbers can be typed in from the top row of letters and Alt).

Testing out the keyboard, I’m finding that I can type with it, although I wouldn’t want to use it for anything lengthy.  But if you are someone who is used to texting on a phone then I’m sure you’d find it pretty easy.

The start page for the device is accessed from the Home button and once you are on the home page the menu button gets you into the various settings of the device.

Getting started
I’ve been really impressed with how easy it was to get set up.  It came already registered against my Amazon account and it was quite straightforward to get the home wifi configured.  It’s pretty easy to set up collections to organise your content, and I’ve played around with a few ebooks now.  Ones you order from Amazon arrive pretty quickly electronically, and you can also plug the device into a laptop to transfer files (PDFs and some ebook formats, for example).  All those features seem to work well.  Reading ebooks on the device is fine: it’s easy to page through the books, and the screen is sharp and easy to read.  Obviously everything is in greyscale (although interestingly the ebooks themselves seem to have colour in them, judging by the samples I’ve looked at on the Kindle for PC reader).  It’s possible to add annotations to the text of ebooks.

Experimental features
One of the experimental features (accessed from the menu button when on the home page) is an internet browser.  This works when you have wifi connected (a version that works on 3G would be nice but maybe not very commercial).  It works better than I expected: most pages display OK, you can navigate around the page, and it seems to pick up on screenreader-type approaches by jumping from link to link.  It’s possible to use it to access general internet sites, and it’s usable for web-based email, Outlook Web Access and even editing blogs.  There are features such as zooming into sections and article displays to make it easier to use, and it looks like Amazon’s developers have put a fair bit of effort into thinking about how to make a browser work on the Kindle.  You do need to use features such as zooming and orientation to get about the web, but you can do so pretty well.

Tweeting from the Kindle
There’s even a Twitter version that is designed for the Kindle.  It lets you log in to your Twitter account and then uses letters to show the latest tweets (L) or direct messages (D).  This works pretty well, although there seem to be some limitations around the browser remembering your account details.  There is also some social networking built into the Kindle commenting features that I haven’t yet explored.

Overall first impressions and what next
In general I’m pretty impressed with the device: it’s easy to set up and get started, it’s easy to get new content, and even some of the experimental features seem to work well.  I still need to play around with the tools to convert ebooks and other content so they can be used on the Kindle.  I certainly want to try it out at work, maybe seeing how easy it is to take copies of papers for meetings and store and retrieve them from the Kindle.  Although there’s a Software Development Kit (SDK), it isn’t obvious that there are a large number of Kindle ‘apps’ out there, but if the Kindle really has been Amazon’s top-selling product over Christmas then maybe there is a big market.  So things like a Kindle calendar tool, a better note-making tool (i.e. one that isn’t linked to notes on a book), and password restrictions at a collection or feature level (you might want to allow your child to read children’s books on it but not browse the internet or read your thrillers, for example) would be useful features.  So I’m off to start building up my collection of ebooks and click that button on Amazon to prompt publishers of new books that I’d like to read them on the Kindle.

The Plan
As with any project one of the key steps is to plan how you are going to go about your task.  So with our library website migration we sat down to work through the stages we will need to go through between setting out the initial objectives, as described in an earlier blog post and the new site being launched.  So in this blog post I’m going to cover the stages we are planning to break our website migration project into.  Some will be unique to us as they reflect our particular circumstances, but others will be more generic.

We’ve arranged the stages as a series of workpackages covering different themes rather than as a straightforward set of tasks arranged by date.  It tends to be a bit easier to work out where you are by grouping the tasks in themes or workpackages.  To draw up the plan we’ve been using Excel as a high-level tool.  It’s a bit easier for people to look at an Excel version of the plan (using a great Excel 2007 template for Gantt charts) than having to use MS Project; for project management purposes we’ll use an MS Project version.  So the Excel version looks like this:

We’re planning to split the project into workpackages that cover the different themes of the project.  So, for example, there are workpackages covering the technical preparation of the site environment, activities around reviewing and migrating content, a workpackage covering usability and accessibility testing, and another one around the creation of a mobile version of the website.  [I’m still getting used to the idea that the mobile library isn’t a large bus full of books – too many years struggling to get reliable online technology onto them, I suppose!]

Workpackage 1 – Site setup
This is covering our site preparation activities, from agreeing the URL, through setting up the test environment, loading the design templates, configuring the site structure and setting up the standard website features.  In the main this work is being carried out by our website developer with help from other university colleagues.  The bulk of this work happens in the first few months while we are on the development site.

Workpackage 2 – Content
This covers a wide range of tasks from reviewing the Information Architecture, through looking at User requirements, to reviewing the content, documenting new processes and then training staff, and the actual content migration into the new site.   There’s a lot of preparation needed so it’s one of the earliest workpackages to start.  So we’ve begun with some workshops to look at the current Information Architecture and user requirements.  We’ve been looking at results from surveys, from user feedback and at analytics data to inform that work.  One of the next steps will be to survey users with some specific questions about the website.  This then needs to be pulled together into a prototype Information Architecture that can be tested with users later in the year.

Workpackage 3 – dynamic content and search
Our current website displays some content as lists taken from our library catalogue and other sources.  These include lists of databases, ejournals and FAQs.  Our analytics data show that these are the most popular elements of the site so we need to retain these in the new site.  Generating them dynamically from systems where they are already being updated is much more sensible than having to create a separate static list and then update it.  Search is also a very important element of our site, whether that is search of our electronic resources, library catalogue or the website itself.

Workpackage 4 – Help and Support
Analysing our site shows that a large proportion of our content is Help and Support materials such as FAQs, How To guides and other help materials.  Finding a way of helping users find relevant help materials more easily on the new site is a key objective for this workpackage.

Workpackage 5 – Mobiles
We’ve had a mobile version of the website for a few years now.  It gets a steady stream of users, mainly on iPad and iPhone but with an increasing number of Android phone accesses.  As part of this workpackage we are working with other university colleagues to create a new version of the mobile library website.

Workpackage 6 – Testing
Although the plan is very much to test as we go along, we’ve put our testing into a separate workpackage, as much as anything to make sure it gets the attention it needs.  We’re planning our accessibility testing using a range of tools we have, such as screenreader software.  We also plan usability testing with eye-tracker software to check the prototype site.  And we expect to make the site available in ‘beta’ to gain feedback and allow us to address any issues before the final launch.

Final thoughts
Alongside the workpackages are the usual activities around project management: reporting on progress, managing risks and issues, and communicating with stakeholders/users about progress.  As we go through the project I’m going to try to update this blog with thoughts and reflections on how the project is taking shape.

I’m finding it interesting to watch the reaction to the news that Delicious is going to be shut down by Yahoo (reported by TechCrunch here).  There was an initial expression of disbelief that a tool pretty much universally accepted as well-liked and valued was being abandoned, with no clear explanation of why.  And within a short period of time there were petitions such as this one.

I saw a few comments on Twitter along the lines that this was the risk with social media-type tools: they couldn’t be relied on.  But I’m not convinced that’s a problem unique to social media tools.  Any commercial company (and Yahoo falls into that category anyway, even if users haven’t generally paid for Delicious) could make the decision to pull a product.  While you might have a contractual arrangement with them, you could be left high and dry if they went out of business, for example.

But what happened during the day was that people started to find alternatives, Diigo for example.  And there’s an excellent list on Phil Bradley’s blog to start you off.  Delicious is still running, so there’s time to export your bookmarks to any of a number of alternatives.  There are also reference management tools such as Zotero that could be considered.  We’re going to look at the alternatives for our tag cloud and list of tools on our website, and we also have the option of static lists or even RSS feeds from blog posts.
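If you do export your bookmarks before moving on, the export is a single file of links that most alternatives can import directly.  As a minimal sketch of working with such a file programmatically, assuming it uses the common Netscape bookmark-file format of `<DT><A HREF="..." TAGS="...">` entries (check your own export before relying on this):

```python
# Sketch: pull (url, tags, title) triples out of a bookmark export file.
# Assumes the Netscape bookmark-file HTML format; the sample string below
# is illustrative, not a real Delicious export.
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.bookmarks = []   # list of (url, tags, title)
        self._current = None  # attributes of the <a> tag being read

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current = dict(attrs)  # attr names arrive lowercased

    def handle_data(self, data):
        if self._current is not None:
            tags = self._current.get("tags", "")
            self.bookmarks.append((
                self._current.get("href", ""),
                tags.split(",") if tags else [],
                data.strip(),
            ))
            self._current = None

export = '<DT><A HREF="http://example.org" TAGS="library,web">Example</A>'
parser = BookmarkParser()
parser.feed(export)
print(parser.bookmarks)
```

From a list like this it is straightforward to write out whatever import format the replacement service expects, or just a static list for a web page.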

What struck me about the story of Delicious was two things.  Firstly, you can’t rely entirely on these tools being persistent – fashions and business models (and the recession) can strike – so you need a backup plan.  Secondly, web 2.0 actually has the plan: as Phil’s blog points out, he’s found 28 alternatives.  Maybe none of them is an exact match, but amongst the list there’s bound to be something that does what you want.  You can export your bookmarks and move on to the next thing.  Sort of an interesting case study, really.

Well, apparently Delicious isn’t closing after all, according to their blog; instead Yahoo is looking for an ‘ideal home for Delicious outside of the company’.  Hope they find one.

Over the next few months we are planning to redevelop our library website.  It’s a long process, there are a lot of different stakeholders and a lot of stages to go through.  I’m going to try to blog about the process as we go along to try to keep a record of what we do, how it goes and what lessons we discover as we go through the whole website migration project.

I’m going to start by covering why we want to change the website.  What is driving the change?

  • Feedback from users – comments from surveys and questionnaires indicate that users don’t find the navigation easy, don’t like the design that much and don’t find it that easy to find relevant help information.  They particularly have difficulty with finding access to electronic resources.
  • Feedback from staff that they would like different things on the homepage from what students want, e.g. services such as book renewals and document delivery, which in our institution are mostly of interest to staff.
  • We have an increasing range of different types of users (students, academics and researchers for example) and partners who all have a slightly different need from the website.  The current website offers a ‘one-size-fits-all’ approach and doesn’t allow users to have a view of it that meets their specific needs or to allow them to customise or personalise it.
  • ‘To be where our users are’ – increasingly we are wanting users to access our services directly from where they are (in the VLE, or in Google Apps) rather than by having to come to our website.  So we want to facilitate that.
  • We want to make more use of dynamic content, which we do to a certain extent already by taking lists of resources from the catalogue and presenting in the website.
  • The Information Architecture of the site was designed in 2008 and we want to review it to reflect changes in the way our services have developed.
  • We want to update the design of the website to give it a more consistent University style.
  • The underlying website technology, ColdFusion, is no longer a preferred website technology for us, and we want to develop the site using technology that is more Web 2.0 friendly.
  • Library staff working with other University units has thrown up some different ideas about how we might present help and support on the website.  This has led us to think about trying to deliver it in a different way – in context, so users get relevant help and support materials presented where they are, rather than being sent to a help page and being expected to search through lists of pages, videos and podcasts.

Key objectives
So our key objectives for the website migration are:

  • improved user experience
  • easier to navigate
  • improved search – [being addressed partially by introducing a new third-party search system from Ebsco]
  • personalised – so students get things that are relevant to them, academics get services relevant to them etc
  • context-specific help and support – placed wherever you are on the website
  • easy to access information about how to contact us to get help
  • a cleaner, fresher design
  • in a more Web 2.0 friendly development environment that lets us develop new services to keep pace with user needs
  • ideally that we can develop services once that can then be deployed to wherever our users are – a type of ‘build once, use many times’ approach

Next time I want to start to look at the approach we are taking and the steps we are planning.

Website developments
Most websites go through a process of continued development and change.  I’d argue that they are never really finished; it’s just that every so often they get to the stage where you want to start again from a clean (or clean-ish) slate.  Often the motivation is as much a feeling of ‘we can’t do what we want to do with the site now’, mixed with ‘there are other sites out there that look better, or have better navigation or features’.  I’d also say that there’s an element of website ‘fashion’ that makes sites look out-of-date even when they are perfectly functional, and that this can drive user perceptions of a site.  So refreshing a site at regular intervals is something I’d want to do.  In between regular site redevelopments there will always be a myriad of minor or even major changes to the site, and one of the challenges of website management is to keep track of (and manage) these changes.  To do that we’ve developed a process around a Website Improvement Plan.

The Website Improvement Plan
In essence the Website Improvement Plan is a document that outlines the steps you are going to take to address issues with the site, and it works to shape the development of the site over a period of time.  We only use the plan to record significant changes to the site – so we wouldn’t record minor content changes, but we would record an activity like adding a completely new section of content with several pages.  There is quite a variation in the scope of the activities – they can range from implementing a new search system to adding a rolling news feature to the home page, for example.  We use the plan as a rolling document across each year, updating it as we go along.  We have an approval process to get new items into the plan, and we record the following information about each item:

  • a unique reference number for each item
  • a description of the item
  • the date it was added to the plan
  • details of the resources needed to deliver the item and a note if it forms part of a project
  • the owner of the item – the person responsible for the item
  • the date the change is required
  • a category – we split tasks into categories such as Content, Infrastructure, User Experience etc.
  • details of when the item was approved
  • a status column – updated monthly that is used to track progress of the task

We then colour-code each item to make it easy to check which items are at which stage: pending items are unhighlighted, approved but not yet started (yellow highlight), in progress (green), completed (grey), as seen below.
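The record structure and colour-coding above could be sketched roughly as follows.  The field names, status values and example item are illustrative, not taken from our actual spreadsheet:

```python
# Sketch of an improvement-plan item and its status colour-coding.
# All names and the example data are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Status -> highlight colour, mirroring the colour scheme described above.
STATUS_COLOURS = {
    "pending": None,        # unhighlighted
    "approved": "yellow",   # approved but not yet started
    "in progress": "green",
    "completed": "grey",
}

@dataclass
class PlanItem:
    ref: str                            # unique reference number
    description: str
    date_added: date
    owner: str                          # person responsible for the item
    date_required: date
    category: str                       # e.g. Content, Infrastructure, User Experience
    resources: str = ""                 # resources needed / project note
    date_approved: Optional[date] = None
    status: str = "pending"             # updated monthly to track progress

    def highlight(self):
        return STATUS_COLOURS[self.status]

item = PlanItem("WIP-042", "Add rolling news feature to home page",
                date(2010, 12, 1), "A. Librarian", date(2011, 3, 1), "Content")
item.status = "in progress"
print(item.ref, item.highlight())
```

Keeping the status-to-colour mapping in one place means the monthly review only ever updates the status column, and the highlighting follows automatically.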

Website Improvement Plan
Pros and Cons of this approach
The processes around approving items to go into the plan and reviewing them (done monthly at a Web Quality Improvement Group meeting) help to focus people’s attention on the work taking place around the website and let people identify where there are obvious gaps in work that is planned.  We feed in activities from various projects and strategic plans to take care of the higher-level activities.  It’s an open document so it is available to all library staff to view and it’s reported to various library management groups.  As a tool to bring together everything we are doing it seems to work.

On the negative side, there is some added bureaucracy around managing the plan, and some potential delays in making sure that people have signed off developments before they take place.  There is a danger that you can end up managing the plan rather than the service.

Other thoughts
We keep version control over the document so we can compare different versions of it.  This means that we can track the number of tasks that are completed or in progress and how that changes across the year as a form of fairly crude metric around website development.  Within the unit we also have activity data from staff that we can use to identify the cost of website services.  For us the process seems to work reasonably OK for day-to-day website management.

Starting with Google Analytics mapping
We had a request a couple of days ago to pull together a map showing which countries visitors to the library website come from.  On the face of it that’s reasonably straightforward, as you can use the Map Overlay feature in Google Analytics to show an Intensity Map of where visitors come from.

Google Analytics map overlay

But when we looked at it in more detail we realised that there isn’t all that much flexibility about what the map looks like.  Although it works fine online, if you are going to use the image as a JPEG then you lose the mouseover features.  In our case there’s a lot of UK usage, but usage outside the UK soon reduces to quite small numbers, which only show up as very light shading on the map.  There is an option to show cities on the map, which appear as dots, but there’s limited flexibility in customising the map – you don’t seem to be able to have markers for the countries, for example.  So we started to look for alternative ways of mapping the data to see if we could get a better format.

What else can you do?
Fortunately it is quite easy to extract the data from Google Analytics in a way that can be used by other tools.  The first step is to export the data in CSV format.  Once the data is in CSV format you can edit the file, so we took out some of the data that Google Analytics includes – such as Average Time on Site and Bounce Rate – that we didn’t need for this map.  Then we started looking at a few other tools for mapping the data – Google Fusion tables and Google Maps to start with – to see what they would look like and whether they allow you to do anything you can’t do with Google Analytics.
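Trimming those extra columns can also be scripted rather than done by hand.  A minimal sketch, assuming column headings like those in the Analytics export (the exact names, e.g. “Avg. Time on Site”, may differ in your export):

```python
# Sketch: drop unwanted columns from a Google Analytics CSV export
# before feeding it to a mapping tool. The column names in DROP and
# in the sample data are assumptions about the export format.
import csv
import io

DROP = {"Avg. Time on Site", "Bounce Rate"}

def trim_columns(csv_text, drop=DROP):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    keep = [c for c in rows[0] if c not in drop]  # preserve column order
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=keep, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)  # dropped columns are ignored on write
    return out.getvalue()

sample = ("Country/Territory,Visits,Avg. Time on Site,Bounce Rate\n"
          "United Kingdom,12000,00:03:10,45%\n"
          "France,150,00:02:01,50%\n")
print(trim_columns(sample))
```

The result is a two-column country/visits file, which is all the mapping tools below actually need.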

Google Fusion tables
This tool allows you to import data in a variety of file formats – .CSV, .XLS, .XLSX, .ODS and .KML, as well as Google spreadsheets.  .CSV and .KML allow file sizes of up to 100MB, the others only 1MB.  Using the tool is quite straightforward: you’ll need a Google account, and then from the homepage of Google Fusion tables click the New table – Import table option.  Then browse to find and upload the file; we uploaded the CSV file from Google Analytics.  Once you have your data imported you can use the visualize option and choose from a couple of map types (Map or Intensity Map).  On the Map option it’s possible to change the marker type to a more prominent marker or change the colour used.  The Intensity Map is essentially the same as the default you get in Google Analytics.  You can also zoom in and out, but if you zoom out you get multiple copies of the world repeated.  A useful feature is that you can export your data in KML format, which is used by Google mapping tools such as Google Maps.
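KML itself is just XML, so if the Fusion tables export doesn’t do quite what you want you can generate a file directly.  A minimal sketch of the kind of KML that Google Maps can import – one Placemark per country, with illustrative (not looked-up) coordinates:

```python
# Sketch: build a minimal KML file of country markers by hand.
# KML coordinates are longitude,latitude,altitude; the coordinates
# below are rough illustrative centroids, not real geocoded values.
from xml.sax.saxutils import escape

def to_kml(placemarks):
    """placemarks: iterable of (name, lon, lat) tuples."""
    body = "".join(
        "<Placemark><name>{}</name>"
        "<Point><coordinates>{},{},0</coordinates></Point>"
        "</Placemark>".format(escape(name), lon, lat)
        for name, lon, lat in placemarks
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            '<Document>{}</Document></kml>'.format(body))

kml = to_kml([("United Kingdom (12000 visits)", -2.0, 54.0),
              ("France (150 visits)", 2.0, 46.5)])
print(kml)
```

Putting the visit count into the Placemark name is a cheap way to get the figures into the mouseover labels that Google Maps shows for each marker.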

Google Maps
Taking the KML file created through Google Fusion tables, you can upload it into Google Maps.  Again you need a Google account, and there are very similar options to the other Google tools, but with slight variations.  For example, when you zoom out you get the continents repeated but the markers only appear once.  To create your map, click on the My Maps option and then upload the KML file.  The maps are optimised for viewing online, and if you want to produce a map that can be output as a JPEG file they aren’t ideal, as you can’t really get a full-screen-sized map with the countries displayed.

Producing an image for use in a document
What you end up with by taking a screenshot is something like the images here (on the left from Google Maps, on the right from Google Fusion tables).  In Google Fusion tables you can change the marker style, but you don’t seem to be able to do so in Google Maps.  Neither is ideal unless it is being used online.  Although they are good for an overview of how much of the globe is covered, you can’t easily see all the countries you have had visitors from.  I haven’t come across a full-screen version that would make it easy to take a screenshot.  In an ideal world you would be able to set the size of the map, define whether you had one copy of the globe or several, choose whether to display the country names, and configure exactly how the markers or colouring are shown.  It would be good if there was consistency between the various Google tools so you could do the same things with all the mapping tools.

Next steps
Although the Google tools don’t do everything you’d want they are pretty easy to use to provide a quick map of visitor locations.  It only takes a few minutes to extract the data from Google Analytics and load it into Google Fusion tables and then present it in a map.  There are other tools around, such as Zeemaps, which may bear investigation to see if they have a more suitable output.
