So, we’re at the start of a new project and I thought it was a useful time to reflect on the range of tools we’re using in the early stages of the project for collaboration and project management. These tools cover communication, project management, task management and bibliographic management.
For small projects we’re using the One Page Project Plan, an Excel template from www.oppmi.com. This uses a single Excel worksheet to cover tasks, progress, responsibility and accountability, along with some confidence measures about how the project is progressing. We’ve used it fairly consistently for two or three years on our small projects, so people are familiar not only with how to use these plans but also with how to read and interpret them. You can only really fit about 25-30 tasks onto the OPPP, so it will be used to track activities at a relatively high level, although we can reflect both the work-package level and some tasks within each work-package. Tasks are generally described in the past tense, using words such as ‘completed’ or ‘developed’, so although the plan gives a reasonable overview of when activities are due to be happening, it conveys less about the actual activities taking place in each time period. There’s a space on the page for a status description, which can be used to flag up what has been completed, or any particular issues. For bigger projects several OPPPs might be used, perhaps with a high-level overarching version.
To organise and track the tasks in the project we’re using Trello. This freely available tool lets you create a Board for your project and then arrange your tasks (each one termed a ‘card’) into groupings. We’ve got several Phases for the project, and then To Do, Doing and Done lists of tasks. You can add people to cards, send out email notifications, set deadlines and so on, and it’s easy to drag cards from one list to another, create new cards and share the board with the project team. We’re only using the free version, not the Business Class version, and it seems to work fine for us. Trello worked pretty well for our digital library development project, particularly in terms of focusing on which developments went into which software release, so it will be interesting to see how well it works on a project that is a bit more exploratory and research-based.
Looking at what work has already been done in this area is an important part of the project, so at an early stage we’re doing a literature review. That’s partly to understand the context we’re working in and to give credit (through citations) for ideas that have come from other work, but specifically to look at the techniques people have been using to investigate the relationship between student success, retention and library use. We’re not expecting that there will be an exact study that matches our conditions (the lack of student book loans data, for one thing), but it is important for us to understand the approaches other people have taken. We’re also hoping to write up the work for publication, so keeping track of citations is vital. To do that we’re using RefMe and have set up a shared folder for members of the project team to add references they find. RefMe seems to be quite good at finding full references from partial details, although there are a few we’re adding manually. To help with retrieving the articles we’re adding the local version of the URL so we can find each article again. The tool also allows you to add notes about a reference, which can be useful. RefMe has an enormous range of reference styles and can output in a range of formats to other tools such as Zotero, Mendeley, RefWorks or EndNote.
To keep interested parties up to date with project activities we’re using a WordPress blog; for this project the blog is at www.open.ac.uk/blogs/LibraryData. We’re fortunate in that we’ve an institutional blog environment established using a locally hosted version of the WordPress software. Although it isn’t generally the latest version of WordPress, there’s little maintenance overhead, we can track usage through the Google Analytics plug-in, and it integrates with our authentication system, so it does the job quite well. We’ve used blogs fairly consistently through our projects and they have the advantage of letting the project team get messages and updates out quickly, encouraging some commenting and interaction, and allowing both short newsy updates and more in-depth reflective or detailed pieces. They can be a relatively informal communication channel, are easy for people to edit and update, and there’s not much of an administrative overhead. Getting a header sorted out for the blog is often the thing that takes up a bit of time.
Other tools, and tools for the next steps
The usual round of office tools and templates is being used for project documents, from project mandates and project initiation documents through to documentation of Risks, Assumptions, Issues and Dependencies, Stakeholder plans and Communications plans. These are mainly in-house templates in MS Word or Excel. Having established the project with an initial set of tools, attention is now turning to approaches for managing the data and the statistics. How do we manage the large amount of data so that we can merge datasets, extract data, carry out analyses, and develop and present visualisations? Where can we use technologies we’ve already got, or already have licences for, and where might we need other tools?
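As a flavour of the kind of dataset merging the project will need, here is a minimal sketch using only the Python standard library. The field names and CSV extracts are entirely hypothetical; the project’s actual data sources and tooling are still to be decided.

```python
# Hedged sketch: joining two hypothetical anonymised extracts on a shared
# student identifier. Field names and data are illustrative only.
import csv
from io import StringIO

# Illustrative CSV extracts standing in for two real exports
library_usage = "student_id,logins\nS001,42\nS002,7\n"
outcomes = "student_id,result\nS001,pass\nS003,fail\n"

def rows(text):
    """Parse a CSV string into a list of dicts keyed by column name."""
    return list(csv.DictReader(StringIO(text)))

# Index one dataset by student_id, then join the other against it
usage_by_id = {r["student_id"]: r for r in rows(library_usage)}
merged = [{**usage_by_id[r["student_id"]], **r}
          for r in rows(outcomes)
          if r["student_id"] in usage_by_id]  # inner join: keep matches only
```

In practice a dedicated tool (a database, or something like pandas) would handle larger volumes, but the principle of joining on a shared key is the same.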
I was intrigued to see a couple of pieces of evidence that the number of words used in scholarly searches is showing a steady increase. Firstly, Anurag Acharya from Google Scholar, in a presentation at ALPSP back in September entitled “What Happens When Your Library is Worldwide & All Articles Are Easy to Find” (on YouTube), mentioned an increase in the average query length to 4-5 words, and continuing to grow. He also reported that they were seeing multiple concepts and ideas within search queries, and that, unlike general Google searches, Google Scholar searches are mostly unique queries.
So I was really interested to see the publication of a set of search data from Swinburne University of Technology in Australia on Tableau Public: https://public.tableau.com/profile/justin.kelly#!/vizhome/SwinburneLibrary-Homepagesearchanalysis/SwinburneLibrary-Homepagesearchanalysis The data covers search terms entered into the search box on their library website homepage at http://www.swinburne.edu.au/library/, which pushes searches to Primo, the same approach that we’ve taken. Included amongst the searches and search volumes was a chart showing the number of words per search growing steadily from between 3 and 4 in 2007 to over 5 in 2015, exactly the same sort of growth being seen by Google Scholar.
Across that time period we’ve seen the rise of discovery systems and new relevancy ranking algorithms. Maybe there is now an increasing expectation that systems can cope with more complex queries, or is it that users have learnt that systems need a more precise query? I know from feedback from our own users that they dislike the huge number of results that modern discovery systems can give them, the product of much larger underlying knowledge bases and perhaps also of more ‘sophisticated’ querying techniques. Maybe the increased number of search terms is a user reaction, an attempt to get a more refined, or simply smaller, set of results.
It’s also interesting for me to think that with discovery systems libraries have been trying to move towards ‘Google’-like search systems – single, simple search boxes, with relevancy ranking that surfaces the potentially most useful results at the top. Because this is what users were telling us that they wanted. But Google have noticed that users didn’t like to get millions of results, so they increasingly seem to hide the ‘long-tail’ of results. So libraries and discovery systems might be one step behind again?
So it’s an area for us to look at our own search queries, to see if we have a similar pattern either in the searches that go through the search box on the homepage of the library website, or in the searches that go into our discovery system. We’ve just got access to Primo Analytics (built on Oracle Business Intelligence), and one of the reports covers popular searches back to the start of 2015. Looking at some of that data, excluding searches that seem to be ISBN searches or single-letter searches, and then restricting it to queries that have been seen more than fifty times (which may well introduce its own bias), gives the following pattern of words in search queries:
Just under 31,000 searches, with one-word searches the most common and then a relatively straightforward decline the longer the search query, apart from a spike around 8 words. The overall average query length is 2.4 words, a lot lower than the examples from Swinburne or Google Scholar. Is that because this is a smaller or incomplete dataset, or because it concentrates on the queries seen more than 50 times? Are less frequently seen queries likely to be longer almost by definition? Some areas to investigate further.
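The filtering and averaging steps described above can be sketched in a few lines of Python. This is a hedged illustration only: the function names are mine, and the sample data stands in for the actual Primo Analytics export, whose format will differ.

```python
# Hedged sketch of the query-length analysis: exclude ISBN-like and
# single-letter queries, keep only queries seen more than min_count times,
# then tally searches by number of words. Data below is illustrative.
from collections import Counter

def looks_like_isbn(query):
    """Treat a query that is just 10 or 13 digits (hyphens allowed) as an ISBN."""
    digits = query.replace("-", "")
    return digits.isdigit() and len(digits) in (10, 13)

def word_length_distribution(queries, min_count=50):
    """Map words-per-query -> total searches, applying the exclusions above."""
    dist = Counter()
    for query, count in queries:
        query = query.strip()
        if looks_like_isbn(query) or len(query) == 1:
            continue
        if count <= min_count:  # keep queries seen more than min_count times
            continue
        dist[len(query.split())] += count
    return dist

# Illustrative (query, count) pairs, not the real report
sample = [("9780131103627", 120), ("a", 300),
          ("climate change", 400), ("history", 900),
          ("impact of social media on teenagers", 60)]
dist = word_length_distribution(sample, min_count=50)
total = sum(dist.values())
average = sum(words * n for words, n in dist.items()) / total
```

A real run would read the exported report rather than an in-memory list, but the shape of the calculation, and the biases the thresholds introduce, would be the same.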
Two interesting pieces of news came out yesterday: the sale of 3M Library Systems to Bibliotheca (http://www.bibliotheca.com), and then the news that ProQuest is buying Ex Libris. For an industry take on the latter, look at http://www.sr.ithaka.org/blog/what-are-the-larger-implications-of-proquests-acquisition-of-exlibris/
From the comments on Twitter yesterday it was a big surprise to people, but it seems to make some sense, and it is a sector that has always gone through major shifts and consolidations. Library systems vendors seem to change hands frequently: have a look at Marshall Breeding’s graphic of the various LMS vendors over the years (http://librarytechnology.org/mergers/) to see that change is pretty much a constant feature.
There are some big crossovers in the product range, especially around discovery systems and the underlying knowledge bases. Building and maintaining those vast metadata indexes must be a significant undertaking and maybe we will see some consolidation. Primo and Summon fed from the same knowledge base in the future maybe?
Does it help with the conundrum of getting all the metadata into all the knowledge bases? Maybe it puts ProQuest/Ex Libris in a position where they have their own metadata to trade, but maybe it also opens up another competitive front.
It will be interesting to see what the medium-term impact will be on plans and roadmaps. Will products start to merge, and will there be less choice in the marketplace when libraries come round to choosing future systems?