One of the areas we started to explore with our digital archive project was web archiving.  The opportunity arose to start capturing course websites from our Moodle Virtual Learning Environment from 2006 onwards.  We made use of the standard web archive format WARC and eventually settled on Wget as the tool to archive the websites from Moodle (we’d started with Heritrix but discovered that it didn’t cope with our authentication processes).  As a proof of concept we included one website in the staff version of our digital archive (the downside of archiving course materials is that they are full of copyright material) and made use of a local instance of the Wayback Machine software from the Internet Archive [OpenWayback is the latest development].  So we’ve now archived several hundred module websites and will be starting to think about how we manage access to them and what people might want to do with them (beyond the obvious one of just looking at them to see what was in those old courses).

So I was interested to see a tweet and then a blog post about a tool called warcbase – described as ‘an open-source platform for managing web archives…’ – particularly because the blog post from Ian Milligan combined web archiving with something else that I’d remembered Tony Hirst talking and blogging about: IPython and Jupyter. It also reminded me of a session Tony ran in the library taking us through IPython and his ‘conversations with data’ approach.

The warcbase and Jupyter approach takes the notebook method of keeping track of your explorations and scripting and applies it to web archives, exploring the archive as a researcher might.  So it covers the sort of analytical work that we are starting to see with the UK Web Archive data (often written up on the UK Web Archive blog).  And it got me wondering whether warcbase might be a useful technology to explore as a way of providing access to the VLE websites archive.  But it also made me think about the skills that librarians (or data librarians) might need in order to facilitate the work of researchers who want to run tools like Jupyter across a web archive, about the technology infrastructure we might need to support this type of research, and about the permissions and access that researchers might need to explore the archive.  A bit of an idle thought about what we might want to think about.
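As a sketch of what that notebook-style exploration can look like, here’s a minimal example of poking at a WARC file with nothing but the Python standard library, simply tallying the WARC-Type header of each record. In real work you’d use warcbase itself or a proper parsing library such as warcio, since a naive text scan like this could be confused by header-like text inside record payloads; the sample records below are invented for illustration.

```python
from collections import Counter

def count_record_types(warc_text):
    """Tally the WARC-Type header of each record in a WARC file's text.

    Naive scan: a real record payload could itself contain a line starting
    with 'WARC-Type:', so serious work should use a parsing library.
    """
    counts = Counter()
    for line in warc_text.splitlines():
        if line.lower().startswith("warc-type:"):
            counts[line.split(":", 1)[1].strip()] += 1
    return counts

# Two invented records standing in for a real crawl file.
sample = """WARC/1.0
WARC-Type: warcinfo
Content-Length: 0

WARC/1.0
WARC-Type: response
WARC-Target-URI: http://example.org/
Content-Length: 0
"""

print(count_record_types(sample))
```

From a Jupyter notebook the same tally becomes one cell of a ‘conversation with data’, with the next cell filtering or plotting whatever the counts suggest.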

Plans are worthless, but planning is everything. Dwight D. Eisenhower

I’ve always been intrigued by the difference between ‘plans’ and ‘planning’ and was taken by this quote from President Dwight D. Eisenhower.  He was talking to the National Defense Executive Reserve Conference in 1957 about how, when you are planning for an emergency, it isn’t going to happen in the way you are planning, so you throw your plans out and start again.  But, critically, planning is vital. In Eisenhower’s own words: “That is the reason it is so important to plan, to keep yourselves steeped in the character of the problem that you may one day be called upon to solve–or to help to solve.”  There’s a similar quote generally attributed to Winston Churchill (although I’ve not been able to find an actual source for it): “Plans are of little importance, but planning is essential”.

Many of the examples of this sort of quote seem to come from a military background, along the lines that no plan survives contact with reality.  But I think they also hold true for any project or activity.  Our plans will need to adapt to fit the circumstances and will, and must, change.  A plan is a document that outlines what you want to do, based on the state of your knowledge at a particular time, often before you have started the activity.  It might have some elements based on experience of doing the same thing, or a similar thing, before – some repeatable activity where you have a greater degree of certainty about how to do X or how long Y will take.  But that often isn’t the case.  So it’s a starting point, your best guess about the activity.  You could think of a project as a journey, with the project plan as your itinerary.  You might set out with a set of times for this train or that bus, but you might find your train being delayed or taking a different route, and so your plan changes.

So you may start with your destination and a worked-out plan about how to get there.  But, and this is where planning is important, you also need some ideas about contingencies, options or alternative routes in case things don’t quite work out how your plan said they should.  This is the essence of why planning matters: it’s about the process of thinking through what you are going to do in the activity.  You can think about the circumstances, the environment and the potential alternatives or contingencies in the event that something unexpected happens.

For me, I’m becoming more convinced that there’s a relationship between project length and complexity and the window within which you can realistically plan, in terms of level of detail and how far in advance you can go.  At a high level you can plan where you want to get to, what you want to achieve and maybe how you measure whether you’ve achieved it – you could characterise that as the destination.  But when it comes to the detail of anything that involves any level of complexity, newness or innovation, the detailed project plan (the itinerary) has a shorter and shorter window of certainty.  A high-level plan is valuable, but expect that the detail will change.  Shorter periods of planning then seem to be more useful – becoming much more akin to the agile approach.

So when you compare your planned activity and resource at the start of the project with the actual activity and resource, you’ll often find there’s a gap.  Things didn’t pan out how you expected at the start – well, they probably wouldn’t, and maybe shouldn’t.  Part way into the project you know much more than when you started.  As Donald Rumsfeld put it: “Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.”

As you go through your project, those ‘unknown unknowns’ become known, even if at some stages, and in some projects, it’s akin to turning over stones only to find more stones underneath.  But on your journey you build up a better picture and build better plans for the next cycle of activity.  (And if you really need to know the differences between Planned and Actuals you can use MS Project to baseline your plan and then re-baseline it to track how the plan has changed over time.)

So, we’re at the start of a new project and I thought it was a useful time to reflect on the range of tools we’re using in the early stages of the project for collaboration and project management.  These tools cover communication, project management, task management and bibliographic management.

Project Management
For small projects we’re using the One Page Project Plan (OPPP), an Excel template that uses a single worksheet to cover tasks, progress, responsibility and accountability, along with some confidence measures about how the project is progressing.  We’ve used this fairly consistently for two or three years for our small projects, and people are pretty familiar not only with how to use them for projects but also with how to read and interpret them.  You can only really get about 25-30 tasks onto the OPPP, so it will be used to track activities at a relatively high level, although we can reflect both the work-package level and some tasks within each work-package.  Tasks are generally described in the past tense using words such as ‘completed’ or ‘developed’, so although it does give a reasonable overview of when activities are due to be happening, there is less of an appreciation of the actual activities taking place in each time period.  There’s a space on the page for a description of the status, which can be used to flag up what has been completed or any particular issues.  For bigger projects several OPPPs might be used, maybe with a high-level overarching version.

Task tracking
To organise and track the tasks in the project we’re using Trello.  This openly available tool lets you create a Board for your project and then arrange your tasks (each one termed a ‘card’) into groupings.  So we’ve got several Phases for the project, and then To Do, Doing and Done lists of tasks.  You can add people to cards, send out emails, set deadlines and so on.  You can easily drag cards from one list to another, create new cards and share with the project team.  We’re only using the free version, not the Business Class version, and it seems to work fine for us.  Trello worked pretty well for our digital library development project, particularly in terms of focusing on which developments went into which software release.  So it will be interesting to see how well it works on a project that is a bit more exploratory and research-based.

Bibliographic management
Looking at what work has already been done in this area is an important part of the project, so at an early stage we’re doing a literature review.  That’s partly to be able to understand the context that we’re working in and to give credit (through citations) to ideas that have come from other work, but specifically to look at techniques people have been using to investigate the relationship between student success, retention and library use.  We’re not expecting that there will be an exact study that matches our conditions (the lack of student book loans data for one thing), but the approaches other people have taken are important for us to understand.  We’re also hoping to write up the work for publication, so keeping track of citations for other work is vital.  To do that we’re using RefMe and have set up a shared folder for the members of the project team to add references they find.  RefMe seems to be quite good at finding the full references from partial details, although there are a few we’re adding in manually.  To help with retrieving the articles we’re adding in the local version of the URL so we can find the article again.  The tool also allows you to add notes about the reference, which can be useful.  RefMe has an enormous range of reference styles and can output in a range of formats to other tools such as Zotero, Mendeley, RefWorks or EndNote.

To keep interested parties up-to-date with project activities we’re using a WordPress blog set up for this project.  We’re fortunate in that we’ve an institutional blog environment established, using a locally hosted version of the WordPress software.  Although it isn’t generally the latest version of WordPress, there’s little maintenance overhead, we can track usage through the Google Analytics plug-in, and it integrates with our authentication system, so it does the job quite well.  We’ve used blogs fairly consistently through our projects and they have the advantage of allowing the project team to get messages and updates out quickly, encouraging some commenting and interaction, and allowing both short newsy update items as well as more in-depth reflective or detailed pieces.  They can be a relatively informal communication channel, are easy for people to edit and update, and there’s not much of an administrative overhead.  Getting a header sorted out for the blog is often the thing that takes up a bit of time.

Other tools and tools for the next steps
The usual round of office tools and templates is being used for project documents, from project mandates and project initiation documents through to documentation of Risks, Assumptions, Issues and Dependencies, Stakeholder plans and Communications plans.  These are mainly in-house templates in MS Word or Excel.  Having established the project with an initial set of tools, attention is now turning to approaches to manage the data and the statistics.  How do we manage the large amount of data to be able to merge datasets, extract data, carry out analyses, and develop and present visualisations?  Where can we use technologies we’ve already got, or already have licences for, and where might we need other tools?


I was intrigued to see a couple of pieces of evidence that the number of words used in scholarly searches is showing a steady increase.  Firstly, Anurag Acharya from Google Scholar, in a presentation at ALPSP back in September entitled “What Happens When Your Library is Worldwide & All Articles Are Easy to Find” (on YouTube), mentions an increase in the average query length to 4-5 words, and continuing to grow.  He also reported that they were seeing multiple concepts and ideas within single search queries, and that, unlike general Google searches, Google Scholar searches are mostly unique queries.

So I was really interested to see the publication of a set of search data from Swinburne University of Technology in Australia on Tableau Public.  The data covers search terms entered into the search box on their library website homepage, which pushes searches to Primo – the same approach that we’ve taken.  Included amongst the searches and search volumes was a chart showing the number of words per search growing steadily from between 3 and 4 in 2007 to over 5 in 2015, exactly the same sort of growth being seen by Google Scholar.

Across that time period we’ve seen the rise of discovery systems and new relevancy-ranking algorithms.  Maybe there is now an increasing expectation that systems can cope with more complex queries, or is it that users have learnt that systems need a more precise query?  I know from feedback from our own users that they dislike the huge number of results that modern discovery systems can give them, the product of the much larger underlying knowledge bases and perhaps also of more ‘sophisticated’ querying techniques.  Maybe the increased number of search terms is a user reaction, an attempt to get a more refined set of results, or just a smaller one.

It’s also interesting to think that with discovery systems libraries have been trying to move towards ‘Google’-like search: single, simple search boxes, with relevancy ranking that surfaces the potentially most useful results at the top, because this is what users were telling us they wanted.  But Google has noticed that users don’t like getting millions of results, so it increasingly seems to hide the ‘long tail’ of results.  So libraries and discovery systems might be one step behind again?

So it’s an area for us to look at our own search queries, to see if we have a similar pattern either in the searches that go through the search box on the homepage of the library website or in the searches that go into our discovery system.  We’ve just got access to Primo Analytics using Oracle Business Intelligence, and one of the reports covers popular searches back to the start of 2015.  Looking at some of that data, excluding searches that seem to be ISBN searches or single-letter searches, and then restricting it to queries that have been seen more than fifty times (which may well introduce its own bias), gives the following pattern of words in search queries:

Search query length – OU Primo Jan-Oct 2015 – queries seen more than 50 times

Just under 31,000 searches, with one-word searches the most common and then a relatively straightforward sequence reducing the longer the search query, but with one spike around 8 words and an overall average of 2.4 words per query.  That’s a lot lower than the examples from Swinburne or Google Scholar.  Is it because it’s a smaller or incomplete dataset, or because it concentrates on the queries seen more than fifty times?  Are less frequently seen queries likely to be longer by definition?  Some areas to investigate further.
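For anyone wanting to repeat this clean-up on their own query logs, it’s straightforward to sketch in Python. The ISBN and single-character filters here are my assumptions about what that exclusion might look like, and the sample queries are invented:

```python
import re
from collections import Counter

# Assumed shape of an ISBN-like search: only digits/X with optional hyphens
# or spaces, and 10 or 13 digits in total.
ISBN_LIKE = re.compile(r"[0-9Xx\- ]+$")

def query_length_distribution(queries):
    """Count words per query, skipping single-character and likely-ISBN searches."""
    counts = Counter()
    for q in (q.strip() for q in queries):
        digits = re.sub(r"[^0-9Xx]", "", q)
        if len(q) <= 1 or (ISBN_LIKE.match(q) and len(digits) in (10, 13)):
            continue
        counts[len(q.split())] += 1
    return counts

def average_length(counts):
    """Mean words per query from a {word_count: frequency} distribution."""
    return sum(words * n for words, n in counts.items()) / sum(counts.values())

# Invented examples: one ISBN and one single letter should be filtered out.
sample = ["climate change adaptation", "9780521865715", "a",
          "primo", "open university history of"]
dist = query_length_distribution(sample)
print(dist, round(average_length(dist), 2))
```

Run over the full Primo Analytics export rather than frequency-capped queries, the same distribution would also answer the question above about whether the rarer queries are the longer ones.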


Two interesting pieces of news came out yesterday: the sale of 3M Library Systems to Bibliotheca, and then the news that Proquest is buying ExLibris.  There’s already been some useful industry commentary on the latter.

From the comments on Twitter yesterday it was a big surprise to people, but it seems to make some sense.  It is a sector that has always gone through major shifts and consolidations, and library systems vendors seem to change hands frequently.  Have a look at Marshall Breeding’s graphic of the various LMS vendors over the years to see that change is pretty much a constant feature.

There are some big crossovers in the product range, especially around discovery systems and the underlying knowledge bases.  Building and maintaining those vast metadata indexes must be a significant undertaking and maybe we will see some consolidation.  Primo and Summon fed from the same knowledge base in the future maybe?

Does it help with the conundrum of getting all the metadata in all the knowledge bases?  Maybe it puts Proquest/ExLibris in a place where they have their own metadata to trade?  But maybe it also opens up another competitive front.

It will be interesting to see what the medium-term impact will be on plans and roadmaps.  Will products start to merge?  Will there be less choice in the marketplace when libraries come round to choosing future systems?



A fascinating couple of articles over the last few days about what is happening with ebook sales (from the US): two pieces from the Stratechery site (via @lorcanD and @aarontay), ‘Disconfirming ebooks’ and ‘Are ebooks declining, or just the publishers?’.  The first refers to an article in the NY Times reporting on ebook sales plateauing; the second to a more detailed piece of work from Author Earnings analysing more data.  The latter draws the conclusion that it was less a case of ebook sales plateauing and more a case of the big publishers’ market share declining (postulating that price increases might play a part).  Overall the research seems to show growth in independent and self-publishing but fairly low levels of growth overall.  The figures mostly seem to be about market share rather than hard-and-fast sales per se, but it’s interesting nonetheless to see how market share is moving away from ‘traditional’ print publishers.

The Stratechery articles are particularly interesting on the way that ebooks fit with the disruptive model of new digital innovation challenging traditional industries, what is termed there ‘Aggregation Theory’.  [As an aside, it’s interesting to note from the Author Earnings article that many of the new ebooks from independent or self-publishers don’t have ISBNs.  What does that imply for the longer-term tracking of this type of material?  Already I suspect that they are hard for libraries to acquire and just don’t get surfaced in the library acquisitions sphere.  Does it mean that these titles are likely to become much more ephemeral?]

The conclusion of the second Stratechery article I find particularly interesting: essentially, ebooks aren’t revolutionising the publishing industry in terms of the form they take.  They are simply a digital form of the printed item.  Often they add little extra by being digital; maybe they are easier to acquire and store, but in price terms they often aren’t much cheaper than the printed version.  Amazon Kindle does offer some extra features, but I’ve never been sure how much they are taken up by readers.  Unlike music, you aren’t seeing books being disaggregated into component parts or chapters (although that’s a bit ironic considering that some of Charles Dickens’ early works, such as The Pickwick Papers, were published in instalments, as part-works).  But I’d contend that the album in music isn’t quite the same as a novel.  Music albums seem like convenient packaging (and pricing?) of a collection of tracks for a physical format (possibly with the exception of ‘concept’ albums), whereas most readers wouldn’t want to buy their novels in parts.  There’s probably more of a correlation between albums/tracks and journals/articles, in that tracks and articles lend themselves in a digital world to being the lowest-level, consumable package of material.

But I can’t help but wonder why audiobooks don’t seem to have disrupted the industry either.  Audible are offering audiobooks in a similar way to Netflix, but aren’t changing the book industry in the way the TV and movie industries are being changed.  So that implies to me that there’s something beyond the current ‘book’ offering (or that the ‘book’ actually is a much more consumable, durable package of content than other media).  Does a digital ‘book’ have to be something quite different that draws on the advantage of being digital – linking to or incorporating maps, images, videos or sound, or some other form of social interaction that could never be incorporated in a physical form?  Or are disaggregated books essentially what a blog is (modularization, as suggested on Stratechery)?  Is the hybrid digital book the game-changer?  [There are already examples of extra material being published online to support novels – see Mark Watson’s Hotel Alpha stories building on his novel Hotel Alpha, for example.]  You could see online retailers as disrupting the bookselling industry as a first step, and we’re perhaps only in the early stages of seeing how Amazon will ultimately disrupt the publishing industry.  Perhaps the data from the Author Earnings report points to the signs of change in ebook publishing.

data.path Ryoji.Ikeda - 3 by r2hox


One of the pieces of work we’re just starting off in the team this year is to do some in-depth work on library data.  In the past we’ve looked at activity data and how it can be used for personalised services (e.g. to build recommendations in the RISE project or more recently to support the OpenTree system), but in the last year we’ve been turning our attention to what the data can start to tell us about library use.

There have been a couple of activities that we’ve undertaken so far.  We’ve provided some data to an institutional Learning Analytics project on the breakdown of library use of online resources for a dozen or so target modules.  We’ve been able to take data from the EZproxy logfiles and show the breakdown by student ID, by week and by resource over the nine-month life of the different modules.  That has put library data alongside other data, such as use of the Virtual Learning Environment, and allowed module teams to look at how library use might relate to the other data.

Pattern of week by week library use of eresources - first level science course

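The EZproxy part of this is fairly mechanical: parse the logfile, then bucket each request by student ID, ISO week and resource. A minimal sketch, assuming EZproxy’s default Apache-style log layout (`%h %l %u %t "%r" %s %b`); the usernames and URLs below are invented:

```python
import re
from collections import Counter
from datetime import datetime
from urllib.parse import urlparse

# Matches EZproxy's default log layout: %h %l %u %t "%r" %s %b
LINE = re.compile(
    r'\S+ \S+ (?P<user>\S+) \[(?P<ts>[^\]]+)\] "(?:GET|POST) (?P<url>\S+)[^"]*" \d+ \S+'
)

def usage_by_user_week_resource(lines):
    """Tally accesses per (student ID, ISO week number, resource host)."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue  # skip lines that don't fit the assumed layout
        ts = datetime.strptime(m["ts"], "%d/%b/%Y:%H:%M:%S %z")
        week = ts.isocalendar()[1]
        host = urlparse(m["url"]).netloc
        counts[(m["user"], week, host)] += 1
    return counts

# Two invented log lines for the same (anonymisable) student in one week.
sample = [
    '137.108.1.1 - s1234567 [05/Oct/2015:10:12:01 +0000] "GET http://www.jstor.org/stable/123 HTTP/1.1" 200 5120',
    '137.108.1.2 - s1234567 [05/Oct/2015:10:15:42 +0000] "GET http://www.jstor.org/stable/456 HTTP/1.1" 200 4200',
]
print(usage_by_user_week_resource(sample))
```

The resulting counts join naturally onto the VLE data by student ID and week, which is essentially the shape of the chart above.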

A colleague has also been able to make use of some data combining library use and satisfaction survey data for a small number of modules, to shed a little light on whether satisfied students were making more use of the library than unsatisfied ones (obviously not a causal relationship, but initial indications are that for some modules there does seem to be a pattern there).

Library Analytics roadmap
But these have been really early exploratory steps, so during last year we started to plan out a Library Analytics Roadmap to scope out the range of work we need to do.  This covers not just data analysis, but also some infrastructural developments to help with improving access to data and some effort to build skills in the library.  It is backed up with engagement with our institutional learning analytics projects and some work to articulate a strategy around library analytics.  The idea being that the roadmap activities will help us change how we approach data, so we have the necessary skills and processes to be able to provide evidence of how library use relates to vital aspects such as student retention and achievement.

Library data project
We’re working on a definition of Library analytics as being about:

Using data about student engagement with library services and content to help institutions and students understand and improve library services to learners

Part of the roadmap activity this year is to start to carry out a more systematic investigation into library data, to match it against student achievement and retention data.  The aim is to build an evidence base of case studies, based on quantitative data and some qualitative work we hope to do.  Ideally we’d like to be able to follow the paths mapped out by the likes of Minnesota, Wollongong and Huddersfield in their various projects and demonstrate that there is a correlation between library use, student success and retention.
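At its simplest, the quantitative side of that evidence base is a correlation between a per-student measure of library use and a per-student outcome. Here is a hand-rolled Pearson correlation over invented figures (real work would of course need anonymisation, proper controls and a statistician’s input):

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented figures only: e-resource accesses per student over a module
# presentation, paired with each student's final score.
library_use = [0, 2, 5, 9, 14, 3, 7, 11, 1, 6]
final_score = [42, 55, 58, 70, 81, 49, 66, 75, 40, 63]

print(f"Pearson r = {pearson(library_use, final_score):.2f}")
```

A correlation like this is where the Minnesota, Wollongong and Huddersfield style of analysis starts, before moving on to significance testing and controlling for other factors.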

Challenges to address
We know that we’re going to need more data analysis skills, and some expertise from a statistician.  We also have some challenges because of the nature of our institution.  We won’t have library management system book loans or details of visits to the library; we will mainly have to concentrate on use of online resources, so in some ways that simplifies things.  But our model of study also throws up some challenges.  At a traditional campus institution students study a degree over three or four years.  There is a cohort of students that follows through years 1, 2, 3 and so on, and at the end of that period they do their exams and get their degree classification.  So it is relatively straightforward to see retention as being about students who return in year 2 and year 3, or who don’t drop out during the year, and to see success measured as their final degree classification.  But with part-time distance learning, where students sign up to a qualification but follow their own pattern of modules, many will take longer than six years to complete, often with one or more ‘breaks’ in study, so following a cohort across modules might be difficult.  So we might have to concentrate on analysis at the ‘module’ level… but then that raises another question for us.  Our students could be studying more than one module at a time, so how do you easily know whether their library use relates to module A or module B?  Lots of things to think about as we get into the detail.
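That module-attribution question is easy to state in code even if the answer isn’t. Given enrolment date ranges per student (all the names, modules and dates below are invented), you can at least separate the unambiguous library accesses from those that fall inside two concurrent module presentations:

```python
from datetime import date

# Hypothetical enrolment data: student -> {module: (start, end)}.
enrolments = {
    "s1234567": {"S104": (date(2015, 2, 1), date(2015, 10, 31)),
                 "A105": (date(2015, 2, 1), date(2015, 10, 31))},
    "s7654321": {"S104": (date(2015, 2, 1), date(2015, 10, 31))},
}

def attribute(student, when):
    """Return the modules a library access on `when` could belong to.

    One hit means the access is unambiguous; more than one means it would
    need a different approach (splitting it, or resource-level clues about
    which module's reading list the item sits on).
    """
    return [m for m, (start, end) in enrolments.get(student, {}).items()
            if start <= when <= end]

print(attribute("s7654321", date(2015, 5, 1)))  # single module: unambiguous
print(attribute("s1234567", date(2015, 5, 1)))  # two concurrent modules
```

Even a crude tally of how many accesses come back ambiguous would tell us how big a problem concurrent study actually is before choosing an attribution method.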

The digital archive site that we’ve been working away on for a while now is finally public.  It is being given a very low-key soft launch to give time for more testing and checking to make sure that the features work OK for users, but as it has now been tweeted about, is linked from our main library website and is findable on Google, I can finally write a short piece about it.

The site has gone live with a mix of images, some videos about the university and a small collection of video clips from the first science module in the 1970s.  Accompanying the images and videos are a couple of sub-sites we’ve called Exhibitions.  To start with there are two: one covering the teaching of Shakespeare and the other giving a potted history of the university.  The exhibitions are designed to give a bit more context around some of the material in the collection.

The small collection of 160 historical images includes people involved in the development of the university, significant events such as the first graduation ceremony, and a selection of images of the construction of the campus.  The latter is perhaps slightly odd for a distance learning institution, with a campus that most students may never see, but maybe that makes the changes to the physical environment of interest to students and the general viewer nonetheless.

The selection of videos includes a collection of thirty programmes about the university, mostly from the 1970s and 1980s and mainly from a magazine-style series called Open Forum, which gave students a bit of an insight into the life of the university.  It includes sections featuring various university officials, but also student experiences, summer schools and the like.  Some of the videos cover events such as royal visits and material about the history of the university.

Less obvious to the casual browser is the inclusion of a large collection of metadata about university courses.  This metadata profile forms a skeleton or scaffolding used to hang the bits of digitised course materials together and relate them to their parent course/module.  So it gives a way of displaying the different types of material included in a module together, as well as giving information about the module, its subjects and when it ran.  At the moment there are only a few digitised samples hanging on the underlying bare bones.

To find the metadata, go to the View All tab, make sure the ‘Available online’ button isn’t selected and choose ‘Module overview’ from Content Type; it’s then possible to browse through details of the university’s old modules, seeing some information about each module and when it ran.  You can also follow through to the linked data repository.  Underpinning this aspect of the site is a semantic web RDF triplestore.
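To give a flavour of the scaffolding idea without exposing any real data, here is a toy, in-memory stand-in for the triplestore: facts as (subject, predicate, object) tuples, with digitised items hung off their parent module record. All the identifiers and property names are made up for illustration; the real store would hold RDF and be queried with SPARQL:

```python
# Toy version of the module-metadata scaffolding. Every name below is
# invented; in the real triplestore these would be URIs.
triples = [
    ("module/S101", "rdf:type",       "ModuleOverview"),
    ("module/S101", "dc:title",       "Science: a foundation course"),
    ("module/S101", "firstPresented", "1971"),
    ("item/v0042",  "partOfModule",   "module/S101"),
    ("item/v0042",  "dc:type",        "VideoClip"),
]

def objects(subject, predicate):
    """All objects matching a (subject, predicate, ?) pattern."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def items_in_module(module):
    """Digitised items hung off a module record: a SPARQL-style join by hand."""
    return [s for s, p, o in triples if p == "partOfModule" and o == module]

print(objects("module/S101", "dc:title"))
print(items_in_module("module/S101"))
```

The point of the triple shape is exactly what the site does: as more digitised samples are added, each one just gains a link back to its module record, and the module page can gather them with one pattern match.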

Public and staff sites
One of the challenges for the digital archive is that it is essentially two different sites under the skin.  A staff version of the site has been available internally for over a year and lets staff login to see a broader range of material, particularly from old university course materials.  So staff can access some sound recordings as well as a small number of digitised books, and access a larger collection of videos, although at this stage it’s still a fairly small proportion of the overall archive.  But more will be added over time as well as hopefully some of the several hundred module websites that have been archived over the past three years.

Intellectual Property
Unlike many digital archives, all of the content is relatively recent, i.e. less than fifty years old.  That gives a different set of challenges, as there is a lot of content that would need to have its Intellectual Property rights cleared before it could be made openly available.  So there are a small number of clips, but at the moment only limited amounts of course material have been able to be made open.  One of the challenges will be to find ways to fund making more material open, both in terms of the effort needed to digitise and check material and the cost of payments to any rights holders.

The digital archive can be found at

We’ve been running Primo as our new Library Search discovery system since the end of April so it’s been ‘live’ for just over four months.  Although it’s been a quieter time of year over the summer I thought it would be interesting to start to see what the analytics are saying about how Library Search is being used.

Internal click-throughs
Some analytics are provided by the supplier in the form of click-through statistics, and there are some interesting figures that come out of those.  The majority of searches, some 85%, are ‘Basic’ searches; only about 11% of searches use Advanced search.  Advanced search isn’t offered against the Library Search box embedded in the home page of the library website, but is offered next to the search box on the results page and on any subsequent search.  That’s probably slightly less use than I might have expected, as advanced search was fairly frequently mentioned as being used regularly on our previous search tool.

About 17% of searches lead to users refining their search using the facets.  Refining the search using facets is something we are encouraging users to do, so that’s a figure we might want to see going up.  Interestingly, only 13% navigated to the next page in a set of search results using the forward arrow, suggesting that users overwhelmingly expect to see what they want on the first page of results.  (I’ve a slight suspicion about this figure, as the interface presents links to pages 2-5 as well as the arrow – which goes to page 6 onwards – and I wonder whether clicks on pages 2-5 are counted in the click-through figure.)

Very few searches (0.5%) led users to use the bX recommendations, despite these being in a prominent place on the page.  The ‘Did you mean’ prompt was used in about 1% of searches.  The bookshelf feature, ‘add to e-shelf’, is used in about 2% of searches.
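Figures like these could be recomputed from an export of search actions.  As a purely hypothetical sketch (the field names and log format here are invented for illustration, not what Primo actually exports), something like the following would tally actions as a share of searches:

```python
# Hypothetical sketch: computing click-through rates from an exported log of
# search actions. The log format and field names are invented; a real
# analytics export from the discovery system will differ.
from collections import Counter

# Each row: (session_id, action) -- toy data standing in for an export
rows = [
    ("s1", "basic_search"), ("s1", "facet_refine"),
    ("s2", "basic_search"), ("s2", "next_page"),
    ("s3", "advanced_search"),
    ("s4", "basic_search"),
]

# A "search" is either a basic or an advanced search event
searches = [r for r in rows if r[1] in ("basic_search", "advanced_search")]
actions = Counter(action for _, action in rows)

def pct(action):
    """Share of searches in which the given action occurred."""
    return 100 * actions[action] / len(searches)

print(f"Basic searches: {pct('basic_search'):.0f}% of searches")
print(f"Refined with facets: {pct('facet_refine'):.0f}% of searches")
print(f"Went to next page: {pct('next_page'):.0f}% of searches")
```

With the toy data above this reports basic searches at 75% and facet refinement at 25%; the point is only the shape of the calculation, not the numbers.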

Web analytics
Looking at web analytics shows that Chrome is the most popular browser, followed by Internet Explorer, Safari, and Firefox.

75% of traffic comes from Windows computers, with 15% from Macintoshes.  Tablet traffic is similar to what we see on our main library website, running at about 6.6%, but mobile traffic is a bit lower at just under 4%.

Overall impressions
Devices using Library Search seem pretty much in line with traffic to other library websites.  There’s less mobile phone use, but possibly that is because Primo isn’t particularly well optimised for mobile devices; it’s also worth testing with users whether they are all that interested in searching library discovery systems on mobile phones.

I’m not so surprised that basic search is used much more than advanced search.  It matches the expectation from student research of a ‘Google-like’ simple search box.  The data seems to suggest that users expect to find relevant results on page one and not go much further, something again to test with users: are they getting what they want?  Perhaps I’m not too surprised that the ‘recommender’ suggestions are not being used, but it implies that having them at the top of the page might be taking up important space that could be used for something more useful to users.  Some interesting pointers about things to follow up in research and testing with users.


We’ve started using BrowZine as a different way of offering access to online journals.  Until recently there were iOS and Android app versions, but they have now been joined by a desktop version.

BrowZine is interesting as it tries to replicate the experience of browsing recent copies of journals in a physical library.  It links into the library authentication system and is fed with a list of library holdings.  There are also some open access materials in the system.

You can browse for journals by subject or search for specific journals and then view the table of contents for each journal issue and link straight through to the full-text of the articles in the journals.  In the app versions you can add journal titles to your personal bookshelf (a feature promised for the desktop version later this year) and also see when new articles have been added to your chosen journals (shown with the standard red circle against the journal on the iOS version).

A useful tool if there is a selection of journals you need to keep up to date with.  Certainly the ease with which you can connect to the full text contrasts markedly with some of the hoops we seem to expect users to cope with in other library systems.
