I’m not sure how many people will be familiar with the work of Oliver Postgate, and specifically with his stop-motion animation series, The Clangers. One of the characters in the series is Major Clanger, and he’s an inventor.
The character always comes to mind when I think about development approaches, as an example of a typical approach to user engagement. So the scene opens with a problem presenting itself. Major Clanger sees the problem and thinks he has an idea to solve it, so he disappears off and shuts himself away in his cave. Cue lots of banging and noises as Major Clanger is busy inventing a device to solve ‘the problem’. Then comes the great unveiling of the invention, often accompanied by some bemusement from the other Clangers about what the new invention actually is, how it works and what it is supposed to do. Often the invention turns out to be not quite what was wanted, or to have unforeseen consequences. And that approach seems to me to characterise how we often ‘do’ development. We see a problem, we may ask users in a focus group or workshop to define their requirements, but then all too often we go and, like Major Clanger, build the product in complete isolation and then unveil it to users in what we describe as usability testing. And all too often they say ‘yeah, that’s not quite what we had in mind’ or ‘well, that would have been good when we were doing X but now we want something else’.
So how do we break that circle and solve our users’ problems in a better development style that builds products that users can and will use? That’s where I think a more co-operative model of user engagement comes in, one where users are involved throughout the requirements, development and testing stages. It’s an approach that we’ve started to call ‘co-design’, and have piloted during our discovery research.
It starts with a Student Panel of students who agree to work with us on activities to improve library services. We recruit cohorts of a few dozen students with a commitment to carry out several activities with us during a defined period. We outline the activity we are going to undertake and the approach we will take, and make sure we have the necessary research/ethical approvals for the work.
For the discovery research we went through three stages:
- Requirements gathering – in this case testing a range of library search tools with a series of exercises based on typical user search activities. This helped to identify the typical features users wanted to see, or did not want to see. For example, at this stage, we were able to rule out using the ‘bento box’ results approach that has been popular at some other libraries
- Feature definition – a stage that allows you to investigate in detail some specific features – in our case we used wireframes of search box options and layouts and tested them with a number of Student panel members – ruling out tabbed search approaches and directing us much more towards a very simple search box without tabs or drop-downs. This stage lets you test a range of different features without the expense of code development, essentially letting you refine your requirements in more detail.
- Development cycles – this step took the form of a sequence of build and test cycles, creating a search interface from scratch using the requirements identified in stages one and two, and then refining it, testing specific new features and discarding or retaining them depending on user reactions. This involved working with a developer to build the site and then work through a series of development and test ‘sprints’, testing features identified either in the early research or arising from each of the cycles.
These steps took us to a viable search interface and built up a pool of evidence that we used to set up and customise Primo Library Search. That work led to further stages in engagement with users as we went through a fourth stage of usability testing the interface and making further tweaks and adjustments in the light of user reactions. Importantly, it’s an on-going process with a regular cycle of testing with users to continually improve the search tool. The latest testing is mainly around changes to introduce new corporate branding, but includes other updates that can be made to the setup or the CSS of the site in advance of the new branding being applied.
The ‘co-design’ model also fits with a more evolutionary or incremental approach to website development, and is a model that usability experts such as Nielsen Norman Group often recommend, as users generally want a familiar design rather than a radical redesign. Continuous improvement systems typically expect incremental improvements as the preferred approach. Yet the ‘co-design’ model could equally be deployed for a complete site redesign: starting from scratch with a more radical design and structural changes, and then using the incremental approach to refine them into a design that meets user needs and overcomes the likely resistance from users familiar with the old site, by delivering an improved user experience with which users can quickly become comfortable.
The digital archive site that we’ve been working away on for a while now is finally public. It is being given a very low-key soft launch to give time for more testing and checking to make sure that the features work OK for users, but as it has now been tweeted about, is linked from our main library website and findable on Google, then I can finally write a short piece about it.
The site has gone live with a mix of images, some videos about the university and a small collection of video clips from the first science module in the 1970s. Accompanying the images and videos are a couple of sub-sites we’ve called Exhibitions. To start with there are two, one covering the teaching of Shakespeare and the other giving a potted history of the university. The exhibitions are designed to give a bit more context around some of the material in the collection.
The small collection of 160 historical images from the history of the university includes people involved in the development of the university and significant events such as the first graduation ceremony, as well as a selection of images of the construction of the campus. The latter is maybe slightly odd for a distance learning institution, with a campus that most students may never see, but perhaps that makes the changes to the physical environment of interest to students and the general viewer nonetheless.
The selection of videos includes a collection of thirty programmes about the university, mostly from the 1970s and 1980s and mainly from a magazine-style series called Open Forum, giving students a bit of an insight into the life of the university. It includes sections from various university officials, but also student experiences, summer schools and the like. Some of the videos cover events such as royal visits and material about the history of the university.
Less obvious to the casual browser is the inclusion of a large collection of metadata about university courses. This metadata profile forms a skeleton or scaffolding that is used to hang the bits of digitised course materials together and relate them to their parent course/module. So it gives a way of displaying the different types of material included in a module together as well as giving information about the module, its subjects and when it ran. At the moment there are only a few digitised samples hanging on the underlying bare bones.
To find the metadata, go to the View All tab, make sure the ‘Available online’ button isn’t selected and choose ‘Module overview’ from Content Type; it’s then possible to browse through details of the university’s old modules, seeing some information about each module and when it ran. You can also follow through to the linked data repository at data.open.ac.uk, e.g. http://data.open.ac.uk/page/course/e242. Underpinning this aspect of the site is a semantic web RDF triplestore.
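A triplestore like this is normally queried with SPARQL. As a rough illustration, here is a minimal Python sketch of building such a query for one module; note that the endpoint path (`/sparql`) and the resource URI pattern (dropping `/page` from the URL above) are assumptions for illustration, not documented details of the live service.

```python
import urllib.parse

# Assumed endpoint URL - the real service may expose SPARQL elsewhere.
SPARQL_ENDPOINT = "http://data.open.ac.uk/sparql"

def build_module_query(module_code):
    """Build a SPARQL query listing all triples about one module.

    The resource URI pattern is an assumption based on the public
    page URL http://data.open.ac.uk/page/course/e242.
    """
    return f"""
    SELECT ?property ?value WHERE {{
        <http://data.open.ac.uk/course/{module_code}> ?property ?value .
    }}
    """

def endpoint_url(module_code):
    """URL-encode the query for an HTTP GET request to the endpoint."""
    params = urllib.parse.urlencode({
        "query": build_module_query(module_code),
        "format": "json",
    })
    return f"{SPARQL_ENDPOINT}?{params}"
```

Fetching `endpoint_url("e242")` would then return the module’s metadata as JSON, assuming the endpoint accepts GET requests in this form.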
Public and staff sites
One of the challenges for the digital archive is that it is essentially two different sites under the skin. A staff version of the site has been available internally for over a year and lets staff log in to see a broader range of material, particularly from old university course materials. So staff can access some sound recordings as well as a small number of digitised books, and access a larger collection of videos, although at this stage it’s still a fairly small proportion of the overall archive. But more will be added over time, as well as, hopefully, some of the several hundred module websites that have been archived over the past three years.
Unlike many digital archives, all of the content is relatively recent, i.e. less than fifty years old. And that brings a different set of challenges, as there is a lot of content that would need to have intellectual property rights cleared before it could be made openly available. So there are a small number of clips, but at the moment only limited amounts of course material have been made open. One of the challenges will therefore be to find ways to fund making more material open, both in terms of the effort needed to digitise and check material and the cost of payments to any rights holders.
The digital archive can be found at www.open.ac.uk/library/digital-archive
One of the interesting features of our new library game OpenTree, for me, is that it is possible to engage with it in a few different ways. At one level it’s a game, with points and badges for interacting with the game and with library content, resources and webpages. It’s also social, so you can connect with other people and review and share resources.
But, as a user you can choose the extent that you want to share. So you can choose to share your activity with all users in OpenTree, or restrict it so only your friends can see your activity, or choose to keep your activity private. You can also choose whether or not things you highlight are made public.
So you’d wonder what value you’d get in using it if you make your activity entirely private. But you can use it as a way of tracking which library resources you are using. And you can organise them by tagging them and writing notes about them so you’ve got a record of the resources you used for a particular assignment. You might want to keep your activity private if you’re writing a paper and don’t want to share your sources or if you aren’t so keen on social aspects.
If you share your activities with friends and maybe connect with people studying the same module as you, then you could see some value in sharing useful resources with fellow students you might not meet otherwise. In a distance-learning institution with potentially hundreds of students studying your module, students might meet a few students in local tutorials or on module forums but might never connect with most people following the same pathway as themselves.
And some people will be happy to share, will want to get engaged with all the social aspects and the gaming aspects of OpenTree. It will be really interesting to see how users get to grips with OpenTree and what they make of it and to hear how people are using it.
It will be particularly interesting to see how our users’ engagement with it might differ from the versions at bricks-and-mortar universities in Huddersfield, Glasgow and Manchester. OpenTree’s focus is online and digital, so it doesn’t include loans and library visits, and our users are often older, studying part-time and not campus-based.
In early feedback, we’re already seeing a sense that some of the game aspects, such as the Subject leaderboard, are of less interest than expected. Maybe that reflects students being much more focused on outcomes, although research seems to suggest (Tomlinson 2014, ‘Exploring the impact of policy changes on students’ attitudes and approaches to learning in higher education’, HEA) that, as a result of increased university fees and student loans, this isn’t just a factor for part-time and distance-learning students. It might also be that because we haven’t gone for an individual leaderboard there’s less personal investment, or just that users aren’t so sure what it represents.
One of the projects that we’ve been working on as part of our Library Futures programme has been a product called OpenTree. OpenTree is based on the Librarygame software from a small development team at ‘Running in the Halls’. Librarygame adds gaming and social aspects to student engagement with library services.
Librarygame was developed originally as Lemontree for Huddersfield University (https://library.hud.ac.uk/lemontree/) and then updated and adopted as librarytree and BookedIn for Glasgow and Manchester Universities respectively (https://librarytree.gla.ac.uk/ and https://bookedin.manchester.ac.uk/).
Being originally based around engagement with physical libraries, taking data about library loans from the library management system, or about physical library visits via building access logs, the basic game model had to change a bit for a distance-learning university where students don’t visit the university library or borrow books.
OpenTree gives users points for accessing resources and points build up into levels in the game. Activities such as making friends, reviewing, tagging and sharing resources also get you badges in the game. We’ve also added in a Challenges section to highlight activities to encourage users to try out different things, trying Being Digital, for example.
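As a rough illustration of the mechanics just described, here is a hypothetical sketch of how activity points might accumulate into levels. The activity names, point values and level thresholds are all invented for illustration; they are not OpenTree’s real scoring rules.

```python
# Hypothetical point values per activity - invented, not OpenTree's real ones.
POINTS = {"access_resource": 5, "review": 10, "tag": 3, "share": 8}

# Cumulative points needed to reach levels 1-5 (also invented).
LEVEL_THRESHOLDS = [0, 50, 150, 400, 1000]

def score(activities):
    """Total points for a list of activity names; unknown activities score 0."""
    return sum(POINTS.get(a, 0) for a in activities)

def level_for(points):
    """Count how many thresholds the running points total has passed."""
    return sum(1 for threshold in LEVEL_THRESHOLDS if points >= threshold)
```

So a user who had accessed ten resources would, under these invented rules, have 50 points and just tip over into level 2.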
Because it lists library resources you’ve been accessing I’ve already been finding it useful as a way of organising and remembering library resources I’ve been using, so we’re hopeful that students will also find it useful and really get into the social aspects.
OpenTree launches to students in the autumn but is up-and-running in beta now. A video introducing OpenTree is on YouTube at: https://www.youtube.com/watch?v=yeSU0FwVNvU
We’re really looking forward to seeing how students get on with OpenTree and already have a few thoughts about enhancements and developments, and no doubt other ideas will come up once more people start using it.
For a few months now we’ve been running a project to look at student needs from library search. The idea behind the research is that we know that students find library search tools to be difficult compared with Google, we know it’s a pain point. But actually we don’t know in very much detail what it is about those tools that students find difficult, what features they really want to see in a library search tool, and what they don’t want. So we’ve set about trying to understand more about their needs. In this blog post I’m going to run through the approach that we are taking. (In a later blog post hopefully I can cover some detail of the things that we are learning.)
Our overall approach is that we want to work alongside students (something that we’ve done before in our personalisation research) in a model that draws a lot of inspiration from a co-design approach. Instead of building something and then usability testing it with students at the end we want to involve students at a much earlier stage in the process so for example they can help to draw up the functional specification.
We’re fortunate in having a pool of 350 or so students who agreed to work with us for a few months on a student panel. That means that we can invite students from the panel to take part in research or give us feedback on a small number of different activities. Students don’t have to take part in a particular activity, but being part of the panel means that they are generally predisposed to working with us. So we’re getting a really good take-up of our invitations – I think that so far we’ve had more than 30 students involved at various stages, giving us a good breadth of opinions from students studying different subjects, at different study levels and with different skills and knowledge.
We’ve split the research into three different stages: an initial stage that looked at different search scenarios and different tools; a second stage that drew out of the first phase some general features and tried them on students, then a third phase that creates a new search tool and then undertakes an iterative cycle of develop, test, develop, test and so on. The diagram shows the sequence of the process.
The overall aim of the project is that we should have a better idea of student needs to inform the decisions we make about Discovery, about the search tools we might build or how we might set up the tools we use.
As with any research activities with students we worked with our student ethics panel to design the testing sessions and get approval for the research to take place.
We identified six typical scenarios: finding an article from a reference, finding a newspaper article from a reference, searching for information on a particular subject, searching for articles on a particular topic, finding an ebook from a reference and finding the Oxford English Dictionary. All the scenarios were drawn from activities that we ask students to do, so used the actual subjects and references that they are asked to find. We identified eight different search tools to use in the testing: our existing One stop search, the mobile search interface that we created during the MACON project, a beta search tool that we have on our library website, four different versions of search tools from other universities, and Google Scholar. The tools had a mix of tabbed search, radio buttons and bento-box-style search results, chosen to introduce students to different approaches to search.
Because we are a distance learning institution, students aren’t on campus, so we set up a series of online interviews. We were fortunate to be able to make use of the usability labs at our Institute of Educational Technology and used Teamviewer software for the online interviews. In total we ran 18 separate sessions, with each one testing 3 scenarios in 3 different tools. This gave us a good range of different students testing different scenarios on each of the tools.
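An allocation like that (18 sessions, each testing 3 scenario/tool pairings, spread across the 6 × 8 matrix) can be sketched as below. To be clear, this is an illustrative method, not the project’s actual scheduling approach, and the scenario and tool names are placeholders.

```python
import itertools
import random

# Placeholder names for the 6 scenarios and 8 tools described above.
SCENARIOS = [f"scenario_{i}" for i in range(1, 7)]
TOOLS = [f"tool_{i}" for i in range(1, 9)]

def allocate_sessions(n_sessions=18, pairs_per_session=3, seed=0):
    """Assign scenario/tool pairings to sessions.

    Shuffles all 48 combinations once, then deals them out in a cycle,
    so 18 sessions x 3 pairings (54 draws) cover every combination at
    least once. A simplification: a session may repeat a tool.
    """
    rng = random.Random(seed)
    pairs = list(itertools.product(SCENARIOS, TOOLS))  # all 48 combinations
    rng.shuffle(pairs)
    dealer = itertools.cycle(pairs)
    return [[next(dealer) for _ in range(pairs_per_session)]
            for _ in range(n_sessions)]
```

Dealing from a shuffled cycle is a simple way to guarantee full coverage of the matrix while keeping each session’s mix varied.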
Sessions were recorded and notes were taken so we were able to pick up on specific comments and feedback. We also measured success rate and time taken to complete the task. The features that students used were also recorded. The research allowed us to see which tools students found easiest to use, which features they liked and used, and which tools didn’t work for certain scenarios.
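The success-rate and time-on-task measures might be summarised per tool along the following lines. The session records here are invented examples purely to show the shape of the calculation, not real data from the study.

```python
from statistics import mean

# Invented example records - one per scenario attempt in a session.
sessions = [
    {"tool": "one_stop", "scenario": "find_article", "success": True,  "seconds": 95},
    {"tool": "one_stop", "scenario": "find_ebook",   "success": False, "seconds": 300},
    {"tool": "scholar",  "scenario": "find_article", "success": True,  "seconds": 60},
]

def summarise(records, tool):
    """Success rate and mean completion time for one tool.

    Unsuccessful attempts record the time before the participant gave up,
    so they still contribute to the mean.
    """
    rows = [r for r in records if r["tool"] == tool]
    return {
        "success_rate": sum(r["success"] for r in rows) / len(rows),
        "mean_seconds": mean(r["seconds"] for r in rows),
    }
```

Running `summarise(sessions, "one_stop")` on the example data gives a 50% success rate and a mean of 197.5 seconds.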
For the second phase we chose to concentrate on testing very specific elements of the search experience. So for example, we looked at radio buttons and drop-down lists, and whether they should be for Author/Title/Keyword or Article/Journal title/library catalogue. We also looked at the layout of results screens, and the display of facets, to ask students how they wanted to see date facets presented for example.
We wanted to carry out this research with some very plain wireframes to test individual features without the distraction of website designs confusing the picture. We tend to use a wireframing tool called Balsamiq to create our wireframes rapidly, and we ran through another sequence of testing, this time with a total of 9 students in a series of online interviews, again using Teamviewer.
By using wireframing you can quickly create several versions of a search box or results page and put them in front of users. It’s a good way of being able to narrow down the features that it is worth taking through to full-scale prototyping. It’s much quicker than coding the feature and once you’ve identified the features that you want your developer to build you have a ready-made wireframe to act as a guide for the layout and features that need to be created.
The last phase is our prototype building phase and involves taking all the research and distilling it into a set of functional requirements for our project developer to create. In some of our projects we’ve shared the specification with students so they could agree which features they wanted to see, but with this project we had a good idea from the first two phases what features they wanted in a baseline search tool, so we missed out that stage. We did, however, split the functional requirements into two stages: a baseline set of requirements for the search box and the results; and then a section to capture the iterative requirements that would arise during the prototyping stage. We aimed for a rolling cycle of build and test, although in practice we’ve set up sessions for when students are available and then gone with the latest version each time – getting students to test and refine the features and identify new features to build and test. New features get identified and added to what is essentially a product backlog (in scrum methodology/terminology). A weekly team meeting prioritises the tasks for the developer to work on and we go through a rolling cycle of develop/test.
Reflections on the process
The process seems to have worked quite well. We’ve had really good engagement from students and really good feedback that is helping us to tease out what features we need to have in any library search tool. We’re about half way through phase three and are aiming to finish the research by the end of July. Our aim is to get the search tool up as a beta tool on the library website as the next step, so a wider group of users can trial it.
I’m always looking to find out about the tools and techniques that people are using to improve their websites, and particularly how they go about testing the user experience (or UX) to make sure that they can make steady improvements in their site.
So I’m a regular follower of some of the work that is going on in academic libraries in the US (e.g. Scott Young talking about A/B testing and experiments at Montana, and Matt Reidsma talking about Holistic UX). It was particularly useful to find out about the process that led to the three headings on the home page of Montana State University library, and the stages that they went through before they settled on Find, Request and Services. A step-by-step description showing the tools and techniques is a really valuable demonstration of how they went about the process and how they made their decision. It is interesting to me how frequently libraries fail to pick words that make sense to their users when describing their services. But it’s really good to see an approach that largely lets users decide by testing what works, rather than asking users what they prefer.
Something else that I came across the other week was the term ‘guerilla testing’ applied to testing the usability of websites (I think that probably came from the session on ‘guerilla research’ that Martin Weller and Tony Hirst ran the other week, which I caught up with via their blog posts/recording). That led on to ‘Guerilla testing’ on the Government Service Design Manual (there’s some sense of slight irony for me about guerilla testing being documented – in some detail – in a design manual) – but the way it talks through the process, its strengths and weaknesses is really useful, and it made me think about the contrast between that approach and the fairly careful and deliberate approach that we’ve been taking with our work over the last couple of months. Some things to think about.
Reflections on our approach
It’s good to get an illustration from Montana of the value of the A/B testing approach. It’s a well-evidenced and standard approach to web usability, but I know that it is something we’ve found difficult to use in a live environment, as it makes our helpdesk people anxious that they aren’t clear which version of the website customers might be seeing. So we’ve tended to use step-by-step iterations rather than straightforward A/B testing. But it’s something to revisit, I think.
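One common way to run A/B assignment, sketched below, is to hash a stable user identifier so that the same user always sees the same variant. A side benefit relevant to the helpdesk concern above is that assignment is deterministic: support staff could look up exactly which variant any given user sees. This is a generic sketch of the technique, not how Montana or anyone else specifically implements it.

```python
import hashlib

def variant_for(user_id, experiment="homepage", variants=("A", "B")):
    """Deterministically assign a user to an experiment variant.

    Hashing experiment name + user id means assignment is stable per user,
    reproducible by support staff, and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the split depends on the experiment name as well as the user, running a second experiment re-shuffles users rather than always giving the same people variant B.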
The piece of work that we’re concentrating on at the moment is to look at student needs from library search. We know it’s a pain point for students, we know it’s not ‘like Google’ and isn’t as intuitive as they feel it should be. So we’re trying to gain a better understanding of what we could do to make it a better experience (and what we shouldn’t do). So we’re working with a panel of students who want to work with the library to help create better services.
The first round tested half a dozen typical library resource search scenarios against eight different library search tools (some from us and some from elsewhere) with around twenty different users. We did all our testing as remote 1:1 sessions using Teamviewer software (although you could probably use Skype or a number of alternatives) and were able to record the sessions and have observers/note takers. We’ve assessed the success rate for each scenario against each tool and also measured the average time it took to complete each task with each tool (or the time before people gave up). These are giving us a good idea of what works and what doesn’t.
For the second round a series of wireframes were created using Balsamiq to test different variants of search boxes and results pages. So we ran a further set of tests again with the panel and again remotely. We’ve now got a pretty good idea of some of the things that look like they will work so have started to prototype a real search tool. We’re now going to be doing a series of iterative development cycles, testing tools with students and then refining them. That should greatly improve our understanding of what students want from library search and allow us to experiment with how we can build the features they want.
Most of the time my interest is about making sure that users of websites can get access to an appropriate version of the website, or that the site works on a variety of different devices. But as websites become more personalised, my version of your website might look different to your version.
But one of the other projects that I’m involved with is looking at web archiving of University websites, mainly internal ones that aren’t being captured by the Internet Archive or the UK Web Archive. And personalisation and different forms that websites can take is one of the really big challenges for capturing web sites. So I was interested to read a recent article in D-Lib Magazine ‘A method for identifying personalised representations in web archives’ by Kelly, Brunelle, Weigle and Nelson, D-Lib Magazine, November/December 2013, Vol. 19, number 11/12 doi:10.1045/november2013-kelly http://www.dlib.org/dlib/november13/kelly/11kelly.html
This article describes how the user-agent string in mobile browsers is used to serve different versions of webpages. The authors show some good examples from CNN of the completely different representations that you might see on iPhones, desktops and Android devices. The paper goes on to talk through some possible solutions to identify different versions, and suggests a modification of the Wayback Machine engine to allow the user to choose which user-agent’s versions they want to view from an archive. Combined with the Memento approach, which offers time-based versions of a website, it’s interesting to see an approach that starts to look at ways of capturing the increasingly fragmented and personalised nature of the web.
It was Lorcan Dempsey who, I believe, coined the term ‘full library discovery’ in a blog post last year. As a stage beyond ‘full collection discovery’, ‘full library discovery’ added in results drawn from LibGuides or library websites, alongside resource material from collections. So for example a search for psychology might include psychology resources, as well as help materials for those psychology resources and contact details for the subject librarian that covers psychology. Stanford and Michigan are two examples of that approach, combining lists of relevant resources with website results.
Princeton’s new All search feature offers a similar approach, discussed in detail on their FAQ. This combines results from their Books+, Articles+, Databases, Library Website and Library Guides into a ‘bento box’ style results display. Princeton’s approach is similar to the search from North Carolina State University who I think were about the first to come up with this style.
Although in most of these cases I suspect that the underlying systems are quite different, the approach is very similar. It has the advantage of being a ‘loosely-coupled’ approach, where your search results page is drawn together in a ‘federated’ search method by pushing your search terms to several different systems, making use of APIs, and then displaying the results in a dashboard-style layout. That means changes to any of the underlying systems can be accommodated relatively easily, yet the display to your users stays consistent.
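The loosely-coupled pattern can be sketched as follows: the same query is sent to several backends in parallel and each result set is rendered in its own ‘bento box’ panel. The backends here are stubs standing in for the real APIs (Books+, Articles+, and so on); the names are illustrative only.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub backends standing in for real search APIs.
def search_books(q):    return [f"book about {q}"]
def search_articles(q): return [f"article about {q}"]
def search_guides(q):   return [f"guide to {q}"]

BACKENDS = {
    "Books": search_books,
    "Articles": search_articles,
    "Guides": search_guides,
}

def bento_search(query):
    """Query every backend concurrently; one panel of results per backend."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query) for name, fn in BACKENDS.items()}
        results = {}
        for name, future in futures.items():
            try:
                # A slow or failing backend just leaves its panel empty
                # rather than breaking the whole results page.
                results[name] = future.result(timeout=5)
            except Exception:
                results[name] = []
        return results
```

The per-panel error handling is part of what makes the approach robust: each silo degrades independently, which is exactly the resilience benefit of loose coupling (and, as noted below, also the source of its drawbacks around unified relevancy ranking).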
For me the disadvantages of this approach are the lack of any overriding relevancy ranking for the material, and that it perpetuates the ‘siloing’ of content to an extent (Books, Articles, Databases etc.), driven largely by the underlying silos of systems that we rely on to manage that content. I’ve never been entirely convinced that users understand the distinction of what a ‘database’ might be. But the approach is probably as good as we can get until we get to truly unified resource management and more control over relevancy ranking.
Going beyond ‘full library discovery’
But ‘full library discovery’ is still very much a ‘passive’ search tool, and by that I mean that it isn’t personalised or ‘active’. At some stage to use those resources a student will be logging in to that system and that opens up an important question for me. Once you know who the user is, ‘how far should you go to provide a personalised search experience?’. You know who they are, so you could provide recommendations based on what other students studying their course have looked at (or borrowed), you might even stray into ‘learning analytics’ territory and know what the resources were that the highest achieving students looked at.
You might know what resources are on the reading list for the course that student is studying – so do you search those resources first and offer those up as they might be most relevant? You might even know what stage a student has got to in their studies and know what assignment they have to do, and what resources they need to be looking at. Do you ‘push’ those to a student?
How far do you go in assembling a profile of what might be ‘recommended’ for a course, module or assignment, what other students in the cohort might be looking at, or looked at the last time the course ran? Do you look at students’ previous search behaviour? How much of this might you do to build and then search some form of ‘knowledge base’, with the aim of surfacing material that is likely to be of most relevance to a student? A search for psychology in NCSU’s Search All search box gives you the top three articles out of 2,543,911 articles in Summon, and likely behaviour is not to look much beyond the first page of results. So should we be making sure that those are likely to be the most relevant ones?
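A minimal sketch of the ‘students on your module also viewed’ idea might look like this. The viewing data is invented, and this simple co-occurrence counting is just one illustrative technique; note that it also makes the circularity problem visible: whatever gets recommended gets viewed more, which then reinforces its own ranking.

```python
from collections import Counter

# Invented example data: which resources each student on a module has viewed.
views = {
    "student1": {"paper_a", "paper_b"},
    "student2": {"paper_a", "paper_c"},
    "student3": {"paper_b", "paper_c"},
}

def recommend(for_student, n=2):
    """Recommend resources this student hasn't seen, ranked by how many
    other students on the module have viewed them."""
    seen = views[for_student]
    counts = Counter()
    for other, items in views.items():
        if other != for_student:
            counts.update(items - seen)
    return [item for item, _ in counts.most_common(n)]
```

Here student1 would be recommended paper_c, because both other students have viewed it; once student1 follows that recommendation, paper_c’s count rises further for everyone else.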
But then there’s serendipity: finding the different things that you haven’t looked for before, or read before, because they are new or different. One of the issues with recommendations is the tendency for them to be circular – ‘what gets recommended gets read’, to corrupt the performance indicator mantra. So how far do you go? ‘Mind reading search’, anyone?
I’ve definitely blogged less (24 posts in 2013 compared with 37 in 2012 and 50 in 2011). [Mind you, the ‘death of blogging’ has been announced, and there seem to be fewer library bloggers than in the past – so maybe blogging less just reflects a general trend.] Comments about blogging suggest that Tumblr, Twitter or Snapchat are maybe taking people’s attention (both bloggers’ and readers’) away from blogs. But I’m not ‘publishing’ through other channels particularly, other than occasional tweets, so that isn’t the reason for me to blog less. There has been a lot going on, but that’s probably not greatly different from previous years. I think I’ve probably been to fewer conferences and seminars, particularly internal seminars, so that has been one area where I’ve not had as much to blog about.
To blog about something or not to blog about it
I've been more conscious of not blogging about some things that in previous years I probably would have blogged about. I don't think I blogged about the Future of Technology in Education conference this year, although I have done in the past. Not because it wasn't interesting (it was), but perhaps from a sense that I've blogged about it before and might just be repeating myself. With the exception of posts about website search and activity data, I've not blogged so much about some of the work that I've been doing. So I've blogged very little about the digital library work, although it (and the STELLAR project) was a big part of some of the interesting stuff that has been going on.
Thinking about the year ahead
I've never been someone who sets out predictions or new year resolutions. I've never been convinced that you can predict (and plan) very far ahead in detail without too many variables fundamentally changing those plans. There's a quote attributed to various people along the lines of 'no plan survives contact with the enemy', and I'd agree with that sentiment. However much we plan, we are always working with an imperfect view of the world. Circumstances change, priorities vary, and you have to adapt to that. Thinking back to FOTE 2013, it was certainly interesting to hear BT's futurologist Nicola Millard describe her main interest as the near future, being more of a 'soon-ologist' than a futurologist.
What interests (intrigues, perhaps) me more is less about planning and more about 'shaping' a future; more change management than project management, I suppose. But I think it is more than that: how do the people who carve out a new 'reality' go about making that change happen? Maybe it is about realising a 'vision', but assembling a 'vision' is very much the easy part of the process. Getting buy-in to a vision does seem to be something that we struggle with in a library setting.
On with 2014
Change management is high on the list for this year. We've done a certain amount of the 'visioning' to get buy-in to funding a change project. So this year we have work to do to procure a complete suite of new library systems (for the first time here in 12 years or so, I think), in a project called 'Library Futures' that also includes some research into students' needs from library search and the construction of a 'digital skills passport'. I've also got continuing work on digital libraries/archives as we move that work from development to live, alongside work with activity data, our library website, and particularly work on integrating library services much more into a better student experience. So hopefully some interesting things to blog about. And hopefully a few new pictures to brighten up the blog (starting with a nice flower picture from Craster in the summer).
It was great to see this week that the latest opportunity on the Jisc Elevator website is one for students to pitch new technology ideas. It's really nice to see something that involves students in coming up with ideas, backed up with a small amount of money to kickstart things.
Using students as co-designers for library services, particularly in relation to websites and technology, is something that I'm finding more and more compelling. A lot of the credit for that goes to Matthew Reidsma from Grand Valley State University in the US, whose blog 'Good for whom?' is pretty much essential reading if you're interested in usability and improving the user experience. I'm starting to see getting students involved in co-designing services as the next logical step on from usability testing. So instead of a process where you design a system and then test it on users, you involve them from the start: asking them what they need, perhaps getting them to give feedback on solution designs and specifications, and then having them look at every stage of the design process of prototyping, testing and iterating. It's something that an agile development methodology particularly lends itself to. Examples where people have started to employ students on the staff to help capture that student 'voice' are also starting to appear.
There are some examples of fairly recent projects where universities have been getting students (and others outside the institution) involved in designing services: for example, the Collaborate project at Exeter, which looked at using students and employers to design 'employability focussed assessments'. There is also Leeds Metropolitan with their PC3 project on the personalised curriculum, and Manchester Metropolitan's 'Supporting Responsive Curricula' project. And you can add to that list of examples the Kritikos project at Liverpool that I blogged about recently.
For us, with our focus on websites and improving the user experience, we've been working with a group of students to help us design some tools for a more personalised library experience. I blogged a bit about it earlier in the year. We're now well into that programme of work and have put together a guest blog post for Jisc's LMS Change project blog, 'Personalisation at the Open University'. Thanks to Ben Showers from Jisc and Helen Harrop from the LMS Change project for getting that published. Credit for the work on this (and the text for the blog post) should go to my colleagues Anne Gambles, Kirsty Baker and Keren Mills. Having identified some key features to build, we are well into finalising the specification for the work and will start building the first few features soon. It's been an interesting first foray into working with students as co-designers, and one that I think has major potential for how we do things in the future.