You are currently browsing the category archive for the ‘agile’ category.

I’m not sure how many people will be familiar with the work of Oliver Postgate, and specifically with his stop-motion animation series, The Clangers.  One of the characters in the series is Major Clanger, and he’s an inventor.

Image: Kieran Lamb, via https://flic.kr/p/dqthAU

The character always comes to mind when I’m thinking about development approaches, as an example of a typical approach to user engagement.  The scene opens with a problem presenting itself.  Major Clanger sees the problem and thinks he has an idea to solve it, so he disappears off and shuts himself away in his cave.  Cue lots of banging and noises as Major Clanger is busy inventing a device to solve ‘the problem’.  Then comes the great unveiling of the invention, often accompanied by some bemusement from the other Clangers about what the new invention actually is, how it works and what it is supposed to do.  Often the invention turns out to be not quite what was wanted, or to have unforeseen consequences.  And that approach seems to me to characterise how we often ‘do’ development.  We see a problem, we may ask users in a focus group or workshop to define their requirements, but then all too often we go and, like Major Clanger, build the product in complete isolation and then unveil it to users in what we describe as usability testing.  And all too often they say ‘yeah, that’s not quite what we had in mind’ or ‘well, that would have been good when we were doing X, but now we want something else’.

So how do we break that cycle and solve our users’ problems with a development style that builds products users can and will use?  That’s where I think a more co-operative model of user engagement comes in: one where users are involved throughout the requirements, development and testing stages.  It’s an approach we’ve started to call ‘co-design’, and one we have piloted during our discovery research.

It starts with a Student Panel: students who agree to work with us on activities to improve library services.  We recruit cohorts of a few dozen students with a commitment to carry out several activities with us during a defined period.  We outline the activity we are going to undertake and the approach we will take, and make sure we have the necessary research/ethical approvals for the work.

For the discovery research we went through three stages:

  1. Requirements gathering – in this case testing a range of library search tools with a series of exercises based on typical user search activities.  This helped to identify the features users wanted to see, or did not want to see.  For example, at this stage we were able to rule out the ‘bento box’ results approach that has been popular at some other libraries.
  2. Feature definition – a stage that allows you to investigate specific features in detail.  In our case we used wireframes of search box options and layouts and tested them with a number of Student Panel members – ruling out tabbed search approaches and directing us towards a very simple search box without tabs or drop-downs.  This stage lets you test a range of different features without the expense of code development, essentially letting you refine your requirements in more detail.
  3. Development cycles – this step took the form of a sequence of build and test cycles: creating a search interface from scratch using the requirements identified in stages one and two, then refining it, testing specific new features and discarding or retaining them depending on user reactions.  This involved working with a developer to build the site and then work through a series of development and test ‘sprints’, testing features identified either in the early research or arising from each of the cycles.

These steps took us to a viable search interface and built up a pool of evidence that we used to set up and customise Primo Library Search.  That work led to further engagement with users as we went through a fourth stage of usability testing the interface and making further tweaks and adjustments in the light of user reactions.  Importantly, it’s an ongoing process, with a regular cycle of testing with users to continually improve the search tool.  The latest testing is mainly around changes to introduce new corporate branding, but includes other updates that can be made to the setup or the CSS of the site in advance of the new branding being applied.

The ‘co-design’ model also fits with a more evolutionary or incremental approach to website development, a model that usability experts such as Nielsen Norman Group often recommend, as users generally want a familiar design rather than a radical redesign.  Continuous improvement systems typically favour incremental improvements.  Yet the ‘co-design’ model could equally be deployed for a complete site redesign: starting from scratch with more radical design and structural changes, then using the incremental approach to refine them into a design that meets user needs and overcomes the likely resistance from users familiar with the old site, by delivering an improved user experience that users can quickly get comfortable with.

For a few months now we’ve been running a project to look at student needs from library search.  The idea behind the research is that we know students find library search tools difficult compared with Google; we know it’s a pain point.  But we don’t know in much detail what it is about those tools that students find difficult, what features they really want to see in a library search tool, and what they don’t want.  So we’ve set about trying to understand more about their needs.  In this blog post I’m going to run through the approach we are taking.  (In a later blog post I hope to cover some of the things we are learning.)

Approach
Our overall approach is to work alongside students (something we’ve done before in our personalisation research) in a model that draws a lot of inspiration from co-design.  Instead of building something and then usability testing it with students at the end, we want to involve students at a much earlier stage in the process, so that, for example, they can help to draw up the functional specification.

We’re fortunate in having a pool of 350 or so students who agreed to work with us for a few months on a student panel.  That means we can invite students from the panel to take part in research or give us feedback on a small number of different activities.  Students don’t have to take part in a particular activity, but being part of the panel means they are generally predisposed to working with us.  So we’re getting a really good take-up of our invitations – so far more than 30 students have been involved at various stages, which gives us a good breadth of opinions from students studying different subjects, at different study levels and with different skills and knowledge.

We’ve split the research into three different stages: an initial stage that looked at different search scenarios and different tools; a second stage that drew some general features out of the first phase and tried them on students; and a third phase that creates a new search tool and then undertakes an iterative cycle of develop, test, develop, test and so on.  The diagram (‘Discovery research stages’) shows the sequence of the process.

The overall direction of the project is that we should have a better idea of student needs to inform the decisions we make about Discovery, about the search tools we might build or how we might set up the tools we use.

As with any research activities with students we worked with our student ethics panel to design the testing sessions and get approval for the research to take place.

Phase One
We identified six typical scenarios: finding an article from a reference, finding a newspaper article from a reference, searching for information on a particular subject, searching for articles on a particular topic, finding an ebook from a reference, and finding the Oxford English Dictionary.  All the scenarios were drawn from activities that we ask students to do, so we used the actual subjects and references they are asked to find.  We identified eight different search tools to use in the testing: our existing One stop search, the mobile search interface we created during the MACON project, a beta search tool on our library website, four different search tools from other universities, and Google Scholar.  The tools had a mix of tabbed search, radio buttons and bento-box-style search results, chosen to introduce students to different approaches to search.

Because we are a distance-learning institution, students aren’t on campus, so we set up a series of online interviews.  We were fortunate to be able to make use of the usability labs at our Institute of Educational Technology, and used TeamViewer software for the online interviews.  In total we ran 18 separate sessions, with each one testing three scenarios in three different tools.  This gave us a good range of different students testing different scenarios on each of the tools.

Sessions were recorded and notes were taken, so we were able to pick up on specific comments and feedback.  We also measured success rates and the time taken to complete each task, and recorded which features students used.  The research allowed us to see which tools students found easiest to use, which features they liked and used, and which tools didn’t work for certain scenarios.
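As an aside, once each session is reduced to a simple record, collating this sort of data is straightforward.  The sketch below is purely illustrative – the tool names, scenarios and numbers are invented, not our actual results – but it shows one way per-tool summaries (success rate and average time) could be computed from session notes.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical session records: (tool, scenario, succeeded, seconds taken).
# All names and values here are invented for illustration.
sessions = [
    ("One stop search", "article from reference", True, 95),
    ("Google Scholar", "article from reference", True, 60),
    ("One stop search", "ebook from reference", False, 240),
    ("Beta search", "articles on a topic", True, 130),
]

by_tool = defaultdict(list)
for tool, scenario, succeeded, seconds in sessions:
    by_tool[tool].append((succeeded, seconds))

for tool, results in sorted(by_tool.items()):
    success_rate = sum(ok for ok, _ in results) / len(results)
    avg_time = mean(secs for _, secs in results)
    print(f"{tool}: {success_rate:.0%} success, {avg_time:.0f}s on average")
```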

Phase Two
For the second phase we chose to concentrate on testing very specific elements of the search experience.  So, for example, we looked at radio buttons and drop-down lists, and whether they should offer Author/Title/Keyword or Article/Journal title/Library catalogue options.  We also looked at the layout of results screens and the display of facets, asking students, for example, how they wanted to see date facets presented.

We wanted to carry out this research with some very plain wireframes, to test individual features without the distraction of website designs confusing the picture.  We use a wireframing tool called Balsamiq to create our wireframes rapidly, and we ran through another sequence of testing, this time with a total of nine students in a series of online interviews, again using TeamViewer.

By using wireframing you can quickly create several versions of a search box or results page and put them in front of users.  It’s a good way of narrowing down the features that are worth taking through to full-scale prototyping.  It’s much quicker than coding the feature, and once you’ve identified the features you want your developer to build, you have a ready-made wireframe to act as a guide for the layout and features that need to be created.

Phase Three
The last phase is our prototype-building phase and involves taking all the research and distilling it into a set of functional requirements for our project developer to build.  In some of our projects we’ve shared the specification with students so they can agree which features they want to see, but with this project we had a good idea from the first two phases what features they wanted in a baseline search tool, so we missed out that stage.  We did, however, split the functional requirements into two parts: a baseline set of requirements for the search box and the results; and a section to capture the iterative requirements that would arise during the prototyping stage.  We aimed for a rolling cycle of build and test, although in practice we’ve set up sessions for when students are available and then gone with the latest version each time – getting students to test and refine the features and identify new features to build and test.  New features get identified and added to what is essentially a product backlog (in Scrum methodology/terminology).  A weekly team meeting prioritises the tasks for the developer to work on, and we go through a rolling cycle of develop/test.
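To make the backlog idea concrete, here is a minimal sketch of how such a list might be represented and prioritised.  The item titles, fields and priorities are invented for illustration – in practice our backlog lives in a shared document, not code – but the shape is the same: a flat list of features, each with a priority and a status, from which the weekly meeting picks the next task.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    """One feature on the product backlog (fields are illustrative)."""
    title: str
    priority: int         # 1 = highest, set at the weekly meeting
    source: str           # where the requirement came from
    status: str = "todo"  # todo -> in progress -> done

backlog = [
    BacklogItem("Single search box, no tabs", 1, "phase two wireframes"),
    BacklogItem("Date facet on results page", 2, "phase one research"),
    BacklogItem("Spelling suggestions", 3, "test session feedback"),
]

def next_task(items):
    """Return the highest-priority item not yet started."""
    todo = [item for item in items if item.status == "todo"]
    return min(todo, key=lambda item: item.priority) if todo else None

print(next_task(backlog).title)  # -> Single search box, no tabs
```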

Reflections on the process
The process seems to have worked quite well.  We’ve had really good engagement from students and really good feedback that is helping us to tease out what features we need in any library search tool.  We’re about halfway through phase three and are aiming to finish the research by the end of July.  Our aim is then to put the search tool up as a beta on the library website, so a wider group of users can trial it.

We’ve been using Trello (http://trello.com) as a tool to help us manage the lists of tasks in the digital library/digital archive project that we’ve been running.  After looking at some of our existing tools (such as Mantis Bug Tracker), the team decided that they didn’t really want the detailed tracking features, and didn’t feel that our standard project management tools (MS Project and the One Page Project Manager, or Outlook tasks) were quite what we needed to keep track of what is essentially a ‘product backlog’ – a list of requirements that need to be developed for the digital archive system.

Trello’s simplicity makes it easy to add and organise a list of tasks and break them down into categories, with colour-coding and the ability to drag tasks from one stream to another.  Being able to share the board across the team and assign members to tasks is good.  You can also set due dates and attach files, which we’ve found useful for attaching design and wireframe illustrations.  You can set up as many boards as you need, so you can break down your tasks however you want.  The boards scroll left and right, so you can have as many columns as you need.

We’ve been using it to group priority tasks into a list so the team knows which tasks to concentrate on; when a task is done, the team member can update the task message so each task can be checked and cleared off the list.

We’re mainly using Trello on the desktop straight from the website, although there is also an iPad app that seems to work well.  For a fairly small team with just a single developer, Trello works quite well.  It’s simple and easy to use, doesn’t take a lot of effort to keep up to date, and is a practical and useful tool.  If you had a larger project you might want more sophisticated tools that can track progress and effort and produce burndown charts, for example, but as a simple way of tracking a list of tasks to be worked on it’s a useful project tool.
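Trello also has a public REST API, so cards can be created or queried from scripts as well as through the website – handy if you ever want to push tasks onto a board automatically.  The snippet below is a minimal sketch: the key, token and list id are placeholders (you generate real ones from trello.com/app-key), and the card details are invented.

```python
import requests

# Placeholders: generate a real key and token at trello.com/app-key,
# and find the list id via the API or by appending .json to a board URL.
API_KEY = "your-api-key"
API_TOKEN = "your-api-token"
LIST_ID = "id-of-the-priority-tasks-list"

# Create a card on the chosen list; the card details here are invented.
response = requests.post(
    "https://api.trello.com/1/cards",
    params={
        "key": API_KEY,
        "token": API_TOKEN,
        "idList": LIST_ID,
        "name": "Test date facet layout with panel members",
        "due": "2013-07-31",
    },
)
response.raise_for_status()
print(response.json()["id"])  # id of the newly created card
```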

Harvard Library Innovation Laboratory
http://librarylab.law.harvard.edu/

The second aspect of data that caught my interest today was Harvard’s Library Innovation Laboratory.  I must admit that when I saw the link I did wonder whether it was going to be a list of library tools aimed directly at users (I’m sure I’ve seen the name used elsewhere recently for just such a list).  I know we are looking at redoing our library toolbox to update it, and ‘library lab’ or ‘labspace’ sounded like a good name for something like that.  But the Library Innovation Laboratory is a much more interesting proof of concept for anyone with an interest in what you can do with library activity data.

Using library circulation data contributed to LibraryCloud, there are some really imaginative prototype visualisations in the Stack View and Shelf Rank tools.  Two values are shown instantly: the book width is determined by the number of pages in the book, and the book colour corresponds to the volume of loans, so the darker the blue the greater the traffic.  Titles are then shown stacked one on top of another.  It’s a really neat visualisation of the data, and I’m already wondering whether that approach would work equally well for visualising library data that is entirely electronic resources.  [It’s actually one of the big problems with anything to do with electronic resources – there isn’t really a universal icon or symbol that everyone recognises as relating to material that is online and in electronic form.]
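Those two visual mappings are simple enough to play with yourself.  The sketch below is a rough approximation, not Harvard’s actual code: it takes invented (title, pages, loans) records and draws each book as a rectangle whose width scales with page count and whose blue shade deepens with loan volume.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

# Invented (title, pages, loans) records; the real Stack View reads
# circulation data contributed to LibraryCloud.
books = [("Book A", 180, 3), ("Book B", 420, 25), ("Book C", 310, 60)]

max_pages = max(pages for _, pages, _ in books)
max_loans = max(loans for _, _, loans in books)

fig, ax = plt.subplots()
for i, (title, pages, loans) in enumerate(books):
    width = pages / max_pages              # spine width from page count
    shade = 0.2 + 0.8 * loans / max_loans  # darker blue = more loans
    ax.add_patch(Rectangle((0, i), width, 0.8,
                           facecolor=(1 - shade, 1 - shade, 1.0)))
    ax.text(width + 0.02, i + 0.4, title, va="center")

ax.set_xlim(0, 1.5)
ax.set_ylim(0, len(books))
ax.axis("off")
plt.show()
```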

There’s quite a lot of interesting stuff on the site, and also on the LibraryCloud site at www.librarycloud.org.  One of the things that particularly interested me (from experiences with the RISE Activity Data project) was the section about data privacy and anonymisation: a key requirement for any dataset intended for open release is that it must be prepared in a way that ensures individual users cannot be identified.
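One common building block for that kind of preparation – and this is only my assumption about how such a pipeline might work, not a description of what LibraryCloud does – is to replace raw user identifiers with keyed hashes before release, so a user’s records still link together but the original id cannot be recovered without the secret key.  Hashing alone isn’t sufficient for small populations, so it would normally be combined with aggregation or suppression of rare values.

```python
import hashlib
import hmac

# The key must be kept secret and never released alongside the dataset.
SECRET_KEY = b"not-in-the-released-data"

def pseudonymise(user_id: str) -> str:
    """Replace a user id with a keyed hash: stable, so one user's records
    still link together, but not reversible without the key."""
    digest = hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Invented example record.
record = {"user": "student-12345", "item": "QA76.9 .D3", "loaned": "2012-02-01"}
record["user"] = pseudonymise(record["user"])
print(record)
```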

The checkout visualisation is also a neat way of showing that sort of data in a nice clear fashion.  The feature that lets you sort the data by different schools is useful, and slightly brings to mind one of the MOSAIC competition entries that used a graph-type visualisation to let you navigate through library use data.  It did amuse me, though, that ‘Headphones’ appears twice in the top ten with different numbers.  The perils of libraries using their Library Management Systems to loan all sorts of other things!
http://www.librarycloud.org

LibraryCloud currently has data from Harvard and Northeastern Universities and Darien, San Francisco and San Jose public libraries.  A couple of sites to keep an eye on over the next few months.

Agile
For a new project we’ve just started, we have been exploring a set of agile methodologies, to see if we can find a more flexible method of building systems than our standard approach of trying to write a comprehensive set of user requirements, functional specifications and technical specifications covering the whole of a new system.

From some of the projects we’ve done in the past, we’ve recognised that there is a risk that requirements will change during a project.  You can end up building something that, at the start of the project, seemed to be exactly what everyone wanted, only to realise by the time the project is well advanced that requirements have moved on.  This leads either to projects delivering something that no one really wants, or to massive scope creep, where you enter a never-ending battle to keep pace with an ever-growing list of new features.  Agile development seeks to find a way out of that maze.

SCRUM and User Stories
One example of an agile methodology is SCRUM, a technique used in software development where development phases are referred to as sprints.  A sprint is a development activity taking place over a relatively short period of time, with well-defined and potentially quite narrow objectives.  One of the techniques often used to define the user requirements is something called ‘User Stories’.

A User Story is essentially a statement along the lines of:

As a … (user)

I want to…  (something)

In order to … (benefit)

The process is to get your user or users to write out a series of user stories covering the new system they want you to build.  These user stories can encompass a range of different requirements – functional or technical, for example.  Once your users have written their user stories, you group them together into similar features or functions.  You can choose whether to get users to prioritise them when they write the original user stories, or to do the prioritisation with the user representative once they are put together and sorted.  The idea is that at the end of the process you have a priority list you can use to identify what development to undertake as part of the first sprint.
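To make the grouping and prioritisation step concrete, here’s a minimal sketch.  The stories, feature labels and priorities are all invented for illustration; the point is simply that once each story is captured as structured data, sorting it into candidate features for the first sprint is trivial.

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class UserStory:
    """As a <role> I want to <action> in order to <benefit>."""
    role: str
    action: str
    benefit: str
    feature: str   # grouping label added during the sorting step
    priority: int  # 1 = highest, agreed with the user representative

stories = [
    UserStory("student", "search by article title", "find set readings", "search", 1),
    UserStory("student", "filter results by date", "find recent research", "facets", 2),
    UserStory("tutor", "send a link to a search", "share readings", "sharing", 3),
]

# Group similar stories into features, highest priority first.
stories.sort(key=lambda s: (s.feature, s.priority))
for feature, group in groupby(stories, key=lambda s: s.feature):
    print(feature, "->", [s.action for s in group])
```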

Thoughts so far
It’s the first time we’ve tried this particular method and it takes a bit of getting used to.  Writing the right sort of user stories is not as straightforward as we’d expected.  They need to be tightly focused on what users want to do with the system, not too aspirational, and there really needs to be a boundary or scope to what they are writing User Stories about.  It also seems quite easy to miss features of the system that are going to be needed but haven’t been mentioned in the User Stories.  But we are learning as we go along, and realise that as we progress we will create new User Stories to fill the gaps.  That seems to be one of the key aspects of agile.
