For a few months now we’ve been running a project to look at student needs from library search. The idea behind the research is that we know students find library search tools difficult compared with Google; we know it’s a pain point. But we don’t actually know in much detail what it is about those tools that students find difficult, what features they really want to see in a library search tool, and what they don’t want. So we’ve set about trying to understand more about their needs. In this blog post I’m going to run through the approach that we are taking. (In a later blog post I hope to cover some of the things that we are learning.)
Our overall approach is to work alongside students (something that we’ve done before in our personalisation research) in a model that draws a lot of inspiration from co-design. Instead of building something and then usability testing it with students at the end, we want to involve students at a much earlier stage in the process, so that, for example, they can help to draw up the functional specification.
We’re fortunate in having a pool of 350 or so students who agreed to work with us for a few months on a student panel. That means that we can invite students from the panel to take part in research or give us feedback on a small number of different activities. Students don’t have to take part in a particular activity, but being part of the panel means that they are generally predisposed to working with us. So we’re getting a really good take-up of our invitations – I think that so far we’ve had more than 30 students involved at various stages, which gives us a good breadth of opinions from students studying different subjects, at different study levels and with different skills and knowledge.
We’ve split the research into three stages: an initial stage that looked at different search scenarios and different tools; a second stage that drew some general features out of the first phase and tried them on students; and a third phase that builds a new search tool and then puts it through an iterative cycle of develop, test, develop, test and so on. The diagram shows the sequence of the process.
The overall direction of the project is that we should end up with a better idea of student needs to inform the decisions we make about Discovery, about the search tools we might build, and about how we might set up the tools we use.
As with any research activities with students we worked with our student ethics panel to design the testing sessions and get approval for the research to take place.
We identified six typical scenarios: finding an article from a reference, finding a newspaper article from a reference, searching for information on a particular subject, searching for articles on a particular topic, finding an ebook from a reference, and finding the Oxford English Dictionary. All the scenarios were drawn from activities that we ask students to do, so we used the actual subjects and references that they are asked to find. We identified eight different search tools to use in the testing: our existing One Stop Search, the mobile search interface that we created during the MACON project, a beta search tool that we have on our library website, four different versions of search tools from other universities, and Google Scholar. The tools had a mix of tabbed search, radio buttons and bento-box-style search results, chosen to introduce students to different approaches to search.
Because we are a distance learning institution, students aren’t on campus, so we set up a series of online interviews. We were fortunate to be able to make use of the usability labs at our Institute of Educational Technology and used TeamViewer software for the online interviews. In total we ran 18 separate sessions, each one testing 3 scenarios in 3 different tools. This gave us a good range of different students testing different scenarios on each of the tools.
Sessions were recorded and notes were taken so we were able to pick up on specific comments and feedback. We also measured success rate and time taken to complete the task. The features that students used were also recorded. The research allowed us to see which tools students found easiest to use, which features they liked and used, and which tools didn’t work for certain scenarios.
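As a sketch of how results like these might be tabulated, per tool, something along the following lines works. The tool names, scenarios and timings below are invented for illustration only, not the project's actual data:

```python
from statistics import mean

# Hypothetical session records: (tool, scenario, completed?, seconds taken).
# These figures are illustrative, not real observations.
observations = [
    ("One Stop Search", "find article from reference", True, 95),
    ("One Stop Search", "find ebook from reference", False, 240),
    ("Google Scholar", "find article from reference", True, 60),
    ("Google Scholar", "find ebook from reference", True, 180),
]

def summarise(records):
    """Per-tool success rate and mean time on task."""
    tools = {}
    for tool, _scenario, completed, seconds in records:
        tools.setdefault(tool, []).append((completed, seconds))
    return {
        tool: {
            "success_rate": sum(c for c, _ in rows) / len(rows),
            "mean_seconds": mean(s for _, s in rows),
        }
        for tool, rows in tools.items()
    }

print(summarise(observations))
```

In practice you would also want to record give-ups separately from failures, since "time before people gave up" and "time to complete" measure different things.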
For the second phase we chose to concentrate on testing very specific elements of the search experience. So for example, we looked at radio buttons and drop-down lists, and whether they should be for Author/Title/Keyword or Article/Journal title/library catalogue. We also looked at the layout of results screens, and the display of facets, to ask students how they wanted to see date facets presented for example.
We wanted to carry out this research with some very plain wireframes to test individual features without the distraction of website designs confusing the picture. We tend to use a wireframing tool called Balsamiq to create our wireframes rapidly, and we ran through another sequence of testing, this time with a total of 9 students in a series of online interviews, again using TeamViewer.
By using wireframing you can quickly create several versions of a search box or results page and put them in front of users. It’s a good way of being able to narrow down the features that it is worth taking through to full-scale prototyping. It’s much quicker than coding the feature and once you’ve identified the features that you want your developer to build you have a ready-made wireframe to act as a guide for the layout and features that need to be created.
The last phase is our prototype-building phase and involves taking all the research and distilling it into a set of functional requirements for our project developer to create. In some of our projects we’ve shared the specification with students so they can agree which features they wanted to see, but with this project we had a good idea from the first two phases of what features they wanted in a baseline search tool, so we missed out that stage. We did, however, split the functional requirements into two stages: a baseline set of requirements for the search box and the results; and then a section to capture the iterative requirements that would arise during the prototyping stage. We aimed for a rolling cycle of build and test, although in practice we’ve set up sessions for when students are available and then gone with the latest version each time – getting students to test and refine the features and identify new features to build and test. New features get identified and added to what is essentially a product backlog (in Scrum methodology/terminology). A weekly team meeting prioritises the tasks for the developer to work on and we go through a rolling cycle of develop/test.
Reflections on the process
The process seems to have worked quite well. We’ve had really good engagement from students and really good feedback that is helping us to tease out what features we need to have in any library search tool. We’re about halfway through phase three and are aiming to finish the research by the end of July. Our aim is then to get the search tool up as a beta tool on the library website so a wider group of users can trial it.
I’m always looking to find out about the tools and techniques that people are using to improve their websites, and particularly how they go about testing the user experience (or UX) to make sure that they can make steady improvements in their site.
So I’m a regular follower of some of the work that is going on in academic libraries in the US (e.g. Scott Young talking about A/B testing and experiments at Montana, and Matt Reidsma talking about holistic UX). It was particularly useful to find out about the process that led to the three headings on the home page of Montana State University library, and the stages that they went through before they settled on Find, Request and Services. A step-by-step description showing the tools and techniques is a really valuable demonstration of how they went about the process and how they made their decision. It is interesting to me how frequently libraries seem not to pick words to describe their services that make sense to their users. But it’s really good to see an approach that largely gets users to decide by testing what works, rather than asking users what they prefer.
Something else that I came across the other week was the term ‘guerrilla testing’ applied to testing the usability of websites (I think that probably came from the session on ‘guerrilla research’ that Martin Weller and Tony Hirst ran the other week, which I caught up with via their blog posts/recording). That led on to ‘Guerrilla testing‘ in the Government Service Design Manual (there’s a slight sense of irony for me in guerrilla testing being documented – in some detail – in a design manual), but the way it talks through the process, its strengths and weaknesses is really useful, and it made me think about the contrast between that approach and the fairly careful and deliberate approach that we’ve been taking with our work over the last couple of months. Some things to think about.
Reflections on our approach
It’s good to get an illustration from Montana of the value of the A/B testing approach. It’s a well-evidenced and standard approach to web usability, but it is something that we’ve found difficult to use in a live environment, as it makes our helpdesk people anxious that they can’t be sure which version of the website customers might be seeing. So we’ve tended to use step-by-step iterations rather than straightforward A/B testing. But it’s something to revisit, I think.
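One mitigation worth noting (a sketch, not something we have implemented) is deterministic variant assignment: hashing a stable user ID means any given visitor always sees the same variant, so a helpdesk could look up which version a caller is on. The experiment name and user IDs here are hypothetical:

```python
import hashlib

# Deterministic A/B bucketing: the same (experiment, user) pair always
# hashes to the same variant, so assignment is stable across visits.
def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

A helpdesk tool could call the same function with the caller's ID to see exactly which version of the page they are looking at.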
The piece of work that we’re concentrating on at the moment is to look at student needs from library search. We know it’s a pain point for students, we know it’s not ‘like Google’ and isn’t as intuitive as they feel it should be. So we’re trying to gain a better understanding of what we could do to make it a better experience (and what we shouldn’t do). So we’re working with a panel of students who want to work with the library to help create better services.
The first round tested half a dozen typical library resource search scenarios against eight different library search tools (some from us and some from elsewhere) with around twenty different users. We did all our testing as remote 1:1 sessions using TeamViewer software (although you could probably use Skype or a number of alternatives) and were able to record the sessions and have observers/note takers. We’ve assessed the success rate for each scenario against each tool and also measured the average time it took to complete each task with each tool (or the time before people gave up). These are giving us a good idea of what works and what doesn’t.
For the second round a series of wireframes were created using Balsamiq to test different variants of search boxes and results pages. So we ran a further set of tests again with the panel and again remotely. We’ve now got a pretty good idea of some of the things that look like they will work so have started to prototype a real search tool. We’re now going to be doing a series of iterative development cycles, testing tools with students and then refining them. That should greatly improve our understanding of what students want from library search and allow us to experiment with how we can build the features they want.
Most of the time my interest is in making sure that users of websites can get access to an appropriate version of the website, or that the site works on a variety of different devices. But as websites become more personalised, my version of your website might look different from yours.
But one of the other projects that I’m involved with is looking at web archiving of university websites, mainly internal ones that aren’t being captured by the Internet Archive or the UK Web Archive. Personalisation, and the different forms that websites can take, is one of the really big challenges for capturing websites. So I was interested to read a recent article in D-Lib Magazine: ‘A method for identifying personalised representations in web archives’ by Kelly, Brunelle, Weigle and Nelson (D-Lib Magazine, November/December 2013, Vol. 19, No. 11/12, doi:10.1045/november2013-kelly, http://www.dlib.org/dlib/november13/kelly/11kelly.html).
This article describes how the user-agent string in mobile browsers is used to serve different versions of webpages. They show some good examples from CNN of the completely different representations that you might see on iPhones, desktops and Android devices. The paper goes on to talk through some possible solutions to identify different versions and suggests a modification of the Wayback Machine engine to allow the user to choose which user-agent’s version of a page they want to view from an archive. Combined with the Memento approach that offers time-based versions of a website, it’s interesting to see an approach that starts to look at ways of capturing the increasingly fragmented and personalised nature of the web.
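The mechanics behind this can be illustrated with a toy server-side routine (the regex patterns and template names are my own illustration, not CNN's actual logic). It also shows why this is an archiving problem: a crawler that only ever sends one user-agent string will only ever capture one of the representations.

```python
import re

# Illustrative user-agent routing. A real site's rules are usually far
# more elaborate, but the principle is the same: one URL, several
# representations, selected by the User-Agent request header.
MOBILE_PATTERN = re.compile(r"iPhone|Android.*Mobile", re.IGNORECASE)
TABLET_PATTERN = re.compile(r"iPad|Android(?!.*Mobile)", re.IGNORECASE)

def choose_representation(user_agent: str) -> str:
    """Pick which template to serve for a given User-Agent string."""
    if MOBILE_PATTERN.search(user_agent):
        return "mobile.html"
    if TABLET_PATTERN.search(user_agent):
        return "tablet.html"
    return "desktop.html"
```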
It was Lorcan Dempsey who, I believe, coined the term ‘full library discovery’ in a blog post last year. As a stage beyond ‘full collection discovery’, ‘full library discovery’ added in results drawn from LibGuides or library websites, alongside resource material from collections. So for example a search for psychology might include psychology resources, as well as help materials for those psychology resources and contact details for the subject librarian that covers psychology. Stanford and Michigan are two examples of that approach, combining lists of relevant resources with website results.
Princeton’s new All search feature offers a similar approach, discussed in detail in their FAQ. This combines results from their Books+, Articles+, Databases, Library Website and Library Guides into a ‘bento box’-style results display. Princeton’s approach is similar to the search from North Carolina State University, who were, I think, among the first to come up with this style.
Although in most of these cases I suspect that the underlying systems are quite different, the approach is very similar. It’s a ‘loosely-coupled’ approach where your search results page is drawn together in a ‘federated’ search manner by pushing your search terms to several different systems, making use of APIs, and then displaying the results in a dashboard-style layout. It has the advantage that changes to any of the underlying systems can be accommodated relatively easily, yet the display to your users stays consistent.
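The pattern is straightforward to sketch. Assuming stub functions in place of the real APIs (all the names here are hypothetical stand-ins for systems like a discovery index, a catalogue and a website search), a bento-box search fans the query out in parallel and keeps each backend's results in its own box:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub backends standing in for real API calls; each returns a list of
# result strings for its own 'box'.
def search_articles(query):
    return [f"article about {query}"]

def search_catalogue(query):
    return [f"book about {query}"]

def search_website(query):
    return [f"library page about {query}"]

BACKENDS = {
    "Articles": search_articles,
    "Books": search_catalogue,
    "Website": search_website,
}

def bento_search(query):
    """Fan the query out to every backend in parallel; one box per backend."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query) for name, fn in BACKENDS.items()}
        return {name: future.result() for name, future in futures.items()}
```

Because each box is independent, swapping out a backend (or one of them timing out) affects only its own box, which is exactly the loose coupling described above.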
For me the disadvantages of this are the lack of any overriding relevancy ranking across the material, and that it perpetuates the siloing of content to an extent (Books, Articles, Databases etc.), driven largely by the underlying silos of systems that we rely on to manage that content. I’ve never been entirely convinced that users understand the distinction of what a ‘database’ might be. But the approach is probably as good as we can get until we get to truly unified resource management and more control over relevancy ranking.
Going beyond ‘full library discovery’
But ‘full library discovery’ is still very much a ‘passive’ search tool – by that I mean that it isn’t personalised or ‘active’. At some stage, to use those resources, a student will be logging in to the system, and that opens up an important question for me: once you know who the user is, how far should you go to provide a personalised search experience? You know who they are, so you could provide recommendations based on what other students studying their course have looked at (or borrowed); you might even stray into ‘learning analytics’ territory and know which resources the highest-achieving students looked at.
You might know what resources are on the reading list for the course the student is studying – so do you search those resources first and offer them up as potentially most relevant? You might even know what stage a student has got to in their studies, what assignment they have to do, and what resources they need to be looking at. Do you ‘push’ those to a student?
How far do you go in assembling a profile of what might be ‘recommended’ for a course, module or assignment, what other students in the cohort might be looking at, or looked at the last time the course ran? Do you look at students’ previous search behaviour? How much of this might you do to build, and then search, some form of ‘knowledge base’ with the aim of surfacing material that is likely to be of most relevance to a student? A search for psychology in NCSU’s Search All search box gives you the top three articles out of 2,543,911 articles in Summon, and likely behaviour is not to look much beyond the first page of results. So should we be making sure that those are likely to be the most relevant ones?
But then there’s serendipity: finding the different things that you haven’t looked for or read before, because they are new or different. One of the issues with recommendations is the tendency for them to be circular – ‘what gets recommended gets read’, to corrupt the performance indicator mantra. So how far do you go? ‘Mind-reading search’, anyone?
I’ve definitely blogged less (24 posts in 2013 compared with 37 in 2012 and 50 in 2011) [mind you, the 'death of blogging' has been announced, and here and there seem to be fewer library bloggers than in the past - so maybe blogging less just reflects a general trend]. Comments about blogging suggest that Tumblr, Twitter or Snapchat are maybe taking people’s attention (both bloggers’ and readers’) away from blogs. But I’m not ‘publishing’ through other channels particularly, other than occasional tweets, so that isn’t the reason for me to blog less. There has been a lot going on, but that’s probably not greatly different from previous years. I think I’ve probably been to fewer conferences and seminars, particularly internal seminars, so that has been one area where I’ve not had as much to blog about.
To blog about something or not to blog about it
I’ve been more conscious of not blogging about some things that in previous years I probably would have covered. I don’t think I blogged about the Future of Technology in Education conference this year, although I have done in the past – not because it wasn’t interesting (it was), but perhaps from a sense that I’ve blogged about it before and might just be repeating myself. With the exception of posts about website search and activity data I’ve not blogged much about some of the work that I’ve been doing. So I’ve blogged very little about the digital library work, although it (and the STELLAR project) was a big part of some of the interesting stuff that has been going on.
Thinking about the year ahead
I’ve never been someone that sets out predictions or new year resolutions. I’ve never been convinced that you can predict (and plan) very far ahead in detail – too many variables end up fundamentally changing those plans. There’s a quote attributed to various people along the lines that ‘no plan survives contact with the enemy’, and I’d agree with that sentiment. However much we plan, we are always working with an imperfect view of the world. Circumstances change and priorities vary, and you have to adapt to that. Thinking back to FOTE 2013, it was certainly interesting to hear BT’s futurologist Nicola Millard describe her main interest as being the near future, and herself as more a ‘soon-ologist’ than a futurologist.
What interests (intrigues, perhaps) me is less planning and more the ‘shaping’ of a future – more change management than project management, I suppose. But I think it is more than that: how do the people who carve out a new ‘reality’ go about making that change happen? Maybe it is about realising a ‘vision’, but assembling a vision is very much the easy part of the process. Getting buy-in to a vision does seem to be something that we struggle with in a library setting.
On with 2014
Change management is high on the list for this year. We’ve done a certain amount of the ‘visioning’ to get buy-in to funding a change project. So this year we’ve work to do to procure a complete suite of new library systems (the first time here for 12 years or so, I think), in a project called ‘Library Futures’ that also includes some research into student needs from library search and the construction of a ‘digital skills passport’. I’ve also got continuing work on digital libraries/archives as we move that work from development to live, alongside work with activity data and our library website, and particularly work on integrating library material much more into a better student experience. So hopefully some interesting things to blog about. And hopefully a few new pictures to brighten up the blog (starting with a nice flower picture from Craster in the summer).
It was great to see this week that the latest opportunity on the Jisc Elevator website is one for students to pitch ideas about new technology. It’s really nice to see something that involves students in coming up with ideas, backed up with a small amount of money to kickstart things.
Using students as co-designers for library services, particularly in relation to websites and technology, is something that I’m finding more and more compelling. A lot of the credit for that goes to Matthew Reidsma from Grand Valley State University in the US, whose blog ‘Good for whom?‘ is pretty much essential reading if you’re interested in usability and improving the user experience. I’m starting to see that getting students involved in co-designing services is the next logical step on from usability testing. Instead of a process where you design a system and then test it on users, you involve them from the start: asking them what they need, getting them to feed back on solution designs and specifications, and then going through the design process of prototyping, testing and iterating, with them looking at every stage. It’s something that an agile development methodology particularly lends itself to. Examples where people have started to employ students on the staff to help with getting that student ‘voice’ are also starting to appear.
There are some examples of fairly recent projects where universities have been getting students (and others outside the institution) involved in designing services: for example, the Collaborate project at Exeter that looked at using students and employers to design ‘employability-focussed assessments’; Leeds Metropolitan with their PC3 project on the personalised curriculum; and Manchester Metropolitan’s ‘Supporting Responsive Curricula’ project. And you can add to that list the Kritikos project at Liverpool that I blogged about recently.
For us, with our focus on websites and improving the user experience, we’ve been working with a group of students to help us design some tools for a more personalised library experience. I blogged a bit about it earlier in the year. We’re now well into that programme of work and have put together a guest blog post for Jisc’s LMS Change project blog, ‘Personalisation at the Open University’. Thanks to Ben Showers from Jisc and Helen Harrop from the LMS Change project for getting that published. Credit for the work on this (and the text for the blog post) should go to my colleagues: Anne Gambles, Kirsty Baker and Keren Mills. Having identified some key features to build, we are well into getting the specification for the work finalised and will start building the first few features soon. It’s been an interesting first foray into working with students as co-designers and one I think has major potential for how we do things in the future.
Reading through Lown, Sierra and Boyer’s article from ACRL on ‘How Users Search the Library from a Single Search Box’, based on their work at NCSU, started me thinking about looking at some data on how people are using the single search box that we have been testing at http://www.open.ac.uk/libraryservices/beta/search/.
About three months or so ago we created a prototype tool that pulls together results from the Discovery product we use (EBSCO Discovery) alongside results from the resources database that feeds the Library Resources pages on the library website, including pages from the library website itself. Each result is shown in a box (à la ‘bento box’) and the boxes are simply listed down the screen, with Exact Title Matches and Title Matches shown at the top, followed by Databases, Library Pages, Ebooks, Ejournals and then articles from EBSCO Discovery. It was done in a deliberately simple way, without lots of extra options to manipulate or refine the lists, so we could get some very early views about how useful it was as an approach.
Looking at the data from Google Analytics, we’ve had just over 2,000 page views over the three months. There’s a spread of more than 800 different searches, with the majority being repeated fewer than 6 times (fewer than 10% were repeated more often than that). I’d suspect that most of those repeated terms are ones where people have been testing the tool.
The data also allows us to pick up when people are doing a search and then choosing to look at more data from one of the ‘bento boxes’; effectively they do this by applying a filter to the search string, e.g. (&Filter=EBOOK) takes you to all the Ebook resources that match your original search term. So 160 of the 2,000 page views were for Ebooks (8%) and 113 for Ejournals (6%), for example.
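Pulling that sort of filter breakdown out of exported page paths is simple enough to script. This is a sketch with made-up paths that mirror the Filter parameter described above; the real GA export would of course contain different paths and a view count per row:

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Hypothetical page paths of the kind Google Analytics reports for the
# search prototype; the 'Filter' query parameter is the one applied when
# a user drills into one of the bento boxes.
pageviews = [
    "/beta/search/?q=nursing",
    "/beta/search/?q=nursing&Filter=EBOOK",
    "/beta/search/?q=medline&Filter=EJOURNAL",
    "/beta/search/?q=psychology",
]

def filter_counts(paths):
    """Count how many page views carried each Filter value."""
    counts = Counter()
    for path in paths:
        params = parse_qs(urlparse(path).query)
        counts[params.get("Filter", ["(none)"])[0]] += 1
    return counts
```

Dividing each count by the total number of page views gives the sort of percentages quoted above.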
When it comes to looking at the actual search terms, they are overwhelmingly ‘subject’-type searches, with very few journal articles or author names in the search string. There are a few journal or database names such as Medline or Web of Science. But otherwise there is a very wide variety of search terms being employed, and it very quickly gets down to single-figure frequencies. The Wordle word cloud at the top of the page shows the range of search terms used in the last three months.
We’ve more work to do to look in more detail about what people want to do but being able to look at the search terms that people use and see how they filter their results is quite useful. Next steps are to do a bit more digging into Google Analytics to see what other useful data can be gleaned about what users are doing in the prototype.
Infographics and data visualisations seem to be very popular at the moment, and for a while I’ve been keeping an eye on visual.ly as they have some great infographics and data visualisations. One of the good things about the visual.ly infographics is that there is some scope to customise them. So for example there is one about the ‘Life of a hashtag’ that you can customise, and several others around Facebook and Twitter that you can use.
I picked up on Twitter the other week that they had just brought out a Google Analytics infographic. That immediately got my interest as we make a lot of use of GA. You just point it at your site through your Google Analytics account and then get a weekly email, ‘Your weekly insights’, created dynamically from your Google Analytics data.
It’s a very neat idea and quite a useful promotional tool to give people a quick snapshot of what is going on. So you get pageviews over the past three weeks, trends for new and returning visitors, and reports on pages per visit and time on site and how they have changed in the past week.
It’s quite useful for social media traffic, showing how Facebook and Twitter traffic has changed over the past week; as these are channels you often want quick feedback on, it’s a nice visual way of showing what difference a particular activity might have made.
Obviously as a free tool, there’s a limit to the customisation you can do. So it might be nice to have visits or unique visitors to measure change in use of the site, or your top referrals, or the particular pages that have been used most frequently. The time period is something that possibly makes it less useful for me, in that I’m more likely to want to compare against the previous month (or even this month last year). But no doubt visual.ly would build a custom version for you if you wanted something particular.
But as a freely available tool it’s a useful thing to have. The infographic is nicely presented and gives a visually appealing presentation of analytics data that can often be difficult to present to audiences who don’t necessarily understand the intricacies of web analytics.
The Google Analytics Visual.ly infographic is at https://create.visual.ly/graphic/google-analytics/
New tools concept
Earlier in the week we soft-launched a new section on our library website. The New Tools section is a space where we can put out new ideas with the aim of trying to get some feedback about whether users will find them useful. This parallels the work we’re also doing with a group of students from our Student Panel to work with them to design some new features (blogged about earlier in the week).
Our idea is that we’d use the New Tools section to put up beta tools based on ideas that have come up in a number of ways. So the ideas that come through the personalisation study work with students will go through a private ‘alpha’ stage where they help with defining the ideas and feeding back on paper prototypes and ‘proof-of-concept’ tools. Once the tools have been refined, the best ones get released as ‘beta’ versions through the New Tools section. We’d also look at releasing as beta tools some of the ideas that have come from other work we’ve done in the past, such as the RISE recommender project, and other ideas we’ve come up with.
The idea with the New Tools section is that the tools aren’t fully supported but are there for people to try, and to let us know what they think about them. If they work then we can refine them and take them into service. If they aren’t useful then we’ll have a better idea of what people want and what they don’t.
First new tools – single search box
The first two tools that we’ve made available in beta are both around library resources. The first one is a single search box (I’ve written before about the library quest for the Google-like search box – and I’m starting to get more interested in the Google-like search box actually being Google, and the idea that libraries might be better off concentrating on helping users in Google find library resources that they are entitled to access – but Google’s decision to ‘retire’ Google Reader certainly gives me pause about relying too much on something from Google). Behind the search box is a search that passes your search string to our version of EBSCO Discovery (using their API) and also to the library resources database that powers the resource lists fed into the library website. The idea is that it will bring together results from our various systems into one place, and in particular that it will be better at finding journal titles that are direct matches.
This single search box is designed to also test the feasibility of bringing together different search results into a single interface. It’s a bit federated-search-like in that the results are presented in separate boxes (sort of like a stacked bento-box approach inspired by Stanford and others – it’s interesting also to see the approach that Princeton have taken with their beta version of their library website). We also haven’t strayed too much into the area of adding some of the surrounding functionality (saving citations, sharing etc features) that a fully-fledged system would need. This is just about testing whether pulling together these results is a workable and useful thing to do.
First new tools – My recent resources
The second tool tries to find out whether giving users access to a list of library resources they have recently accessed is useful to them. If you’re not an OU user (or aren’t signed in) you’ll just see a demonstration list of resources. But if you are signed in you should see a list of the resources you’ve used, most recent first. These include resources you’ve looked at directly from the library website, as well as articles you’ve viewed through our One Stop search discovery system. For this prototype we’ve offered RSS and RIS export formats so you can put your records into your favourite reference management tool. We’ve also included a box on the right listing your most-used resources, with the number of uses in brackets.
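The RIS side of that export is straightforward, since RIS is just a plain-text format of two-letter tags (`TY`, `TI`, `PY`, `ER`) each followed by two spaces, a hyphen and a space. Here’s a rough sketch of the kind of serialiser involved – the record dictionary here is a simplified stand-in for the fields the library website actually holds, not our real data model.

```python
def to_ris(records):
    """Render a list of recently used resources as an RIS export string.

    Each record is a dict with 'type', 'title' and optionally 'year'
    keys (an illustrative simplification of the real record format).
    """
    lines = []
    for rec in records:
        lines.append("TY  - " + rec.get("type", "GEN"))   # reference type
        lines.append("TI  - " + rec["title"])             # title
        if "year" in rec:
            lines.append("PY  - " + str(rec["year"]))     # publication year
        lines.append("ER  - ")                            # end of record
    return "\n".join(lines) + "\n"
```

A file in this shape imports cleanly into most reference managers (EndNote, Zotero, Mendeley), which is what makes RIS a sensible lowest-common-denominator export alongside RSS.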
The format and description of the entries just picks up the standard format we already use on the library website and we’ve started to add in book covers for ebooks (although that gets me thinking that I’ve never really worked out what the point is of a book cover for an ebook anyway – Kindles seem to take you to the start of the book, not to the cover, so maybe ebook covers aren’t that relevant anymore – but in any case it breaks up the blocks of text neatly).
The plan is to develop more prototypes and build up a pool of tools in this space that we can get people to look at and comment on. Hopefully it will prove useful.
Although I’d picked up the growth of mobiles and tablets overtaking sales of desktop PCs and laptops, one thing that hadn’t become obvious to me was that we now seem to be approaching the time when the number of tablets/smartphones in circulation outnumbers the numbers of desktops/laptops. December’s Internet Trends survey from Kleiner Perkins Caufield Byers shows, in the graph reproduced here, that they’d expect that stage to be reached globally sometime this year.
Although I’d have a couple of caveats – smartphone adoption in the developing world may skew the figures slightly, and people may ordinarily own more tablets/smartphones than desktops/laptops – it nonetheless emphasises the point that mobile internet access is now mainstream. For many people it may be their preferred means of accessing your services, and their expectation is going to be that it should just work, giving an equivalent or better experience than the ‘traditional’ desktop browser.
But numbers of devices don’t yet map to the amount of usage of our websites. For us, traffic from mobiles/tablets is still under 10%, so even if the number of devices in circulation is reaching parity, we aren’t yet at a stage where the majority of our use comes from those devices. Looking at the trends, though, that day may be on the horizon.
One of the interesting concepts in KPCB’s slideshow is the ‘asset-light’ idea: that more and more people, perhaps younger people especially, may be less inclined to own or acquire physical ‘stuff’, preferring a more ‘mobile’ (as in able to move more readily) lifestyle. It’s characterised as having your music on Spotify or iTunes rather than on physical CDs, or renting rather than buying your textbooks. For me it also suggests a personal version of ‘just-in-time’, the production strategy based on reducing inventory in favour of delivering items when you need them. It’s the concept of ‘on-demand’ rather than ownership ‘just-in-case’.
Potentially, as characterised in this blogpost on Fail!Lab, it might mean major changes to our library websites, or even to the concept of websites. It’s a good and interesting thought. For a while we’ve certainly been pushing content into the places where students go, such as pushing library resources via RSS feeds into our VLE. But these spaces are still websites. Yet once you’ve got a stream or feed of data you could push or pull it into numerous places, whether apps, webpages or other systems.
The idea in the Fail!Lab blogpost of artificially intelligent agents doing the ‘heavy lifting’ of finding resources for users is something that Paul Walk raised as part of his Library Management Systems vision (slideshare and blog post), so it’s interesting to see someone else postulating a similar future. For me it starts to envisage a future where users choose their environment/tools/agents, and we build systems that can feed data/content to those agents according to a set of data-sharing standards. It suggests a time when users can write queries to interrogate your systems, whether for content, help materials or skills development activity, and implies a world of profiles, entitlements and charging mechanisms far away from the current model of: go to this website, sign up and pass through the gateway into a ‘library’ of stuff.