
One of the first projects I worked on at the OU was a Jisc-funded project called Telstar. Telstar built a reference management tool, called MyReferences, integrating RefWorks into a Moodle Virtual Learning Environment (VLE).  Well, that MyReferences tool shortly reaches what the software people call 'End-of-Life', and what the website world likes to refer to as 'sunsetting'; in other words, MyReferences is closing down later this month.  So it seemed like a good time to reflect on some of the things I've learnt from that piece of work.

In a lot of ways, several of the things that Telstar and MyReferences did have now become commonplace and routine.  References were stored remotely in the RefWorks platform (we'd now describe that as cloud-hosted), and that's almost become a default way of operating, whether you think of email with Outlook365 or library management systems such as ExLibris Alma.  Integration with Moodle was achieved using an API; again, that's now a standard approach.  But both seemed quite a new departure in 2010.

I remember it being a complex project in lots of ways, creating integrations not just between RefWorks and Moodle but also making use of some of the OpenURL capabilities of SFX.  It was also quite ambitious in aiming to provide solutions applicable to both students and staff.  Remit (the Reference Management Integration Toolkit) gives a good indication of some of the complexities not just in systems but also in institutional and reference management processes.   The project not only ran a couple of successful Innovations in Reference Management events but led to the setup of a JiscMail reading list systems mailing list.

Complexity is the main word that comes to mind when thinking about some of the detailed work that went into mapping referencing styles between OU Harvard in RefWorks and MyReferences, so that students could get a simplified reference management system in MyReferences without having to plunge straight into the complexity of full-blown RefWorks.  It really flagged for me the implications of not having a standard referencing style across an institution, and also the impact of designing your own custom institutional style rather than adopting a standard style that is already well supported.  One of the drawbacks of using RefWorks as a resource list system was that each reference in each folder was a separate entity, meaning that any change to a resource (its name, for example) had to be updated in every list/folder.  So it taught us quite a bit about what we ideally wanted from a resource list management/link management system.

Reference management has changed massively in the past few years, with web-based tools such as Zotero, Refme and Mendeley becoming more common, and Microsoft Office providing support for reference management.  So maybe the need to provide institutional systems has passed, now that so many are available on the web.  And I think it reflects how any tool or product has a lifecycle of development, adoption, use and retirement.  Maybe that cycle is now much shorter than it would have been in the past.

 

 

In the early usability tests we ran for the discovery system we implemented earlier in the year, one of the aspects we looked at was the search facets.  Included amongst the facets is a feature to let users limit their search by a date range.  That sounds reasonably straightforward: filter your results by the publication date of the resource, narrowing your results down by putting in a range of dates.  But one thing that emerged during the testing is that there's a big assumption underlying this concept.  One user tried to use the date range to restrict results to journals for the current year and was a little baffled that the search system didn't work as they expected.  Their expectation was that by putting in 2015 it would show them journals in that subject where we had issues for the current year.  But the system didn't know that a continuing journal, with an open-ended date range, was available for 2015, because the metadata didn't include the current year, just a start date for the subscription period.  So the system didn't 'know' that the journal was available for the current year (a rough sketch of that mismatch follows below).  And that exposed for me the gulf that exists between user and library understanding, and how our metadata and systems don't seem to match user expectations.  That usability testing session came to mind when reading the blog post about linked data quoted below.
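To make that mismatch concrete, here's a minimal sketch (hypothetical Python, with made-up titles and field names rather than anything from our actual metadata) of the difference between filtering only on the dates recorded and treating an open-ended subscription as running up to the present:

```python
from datetime import date

# Hypothetical, much-simplified holdings records. The titles and field names
# are made up; an end_year of None stands for an open-ended, continuing title.
holdings = [
    {"title": "Journal of Continuing Things", "start_year": 1998, "end_year": None},
    {"title": "Closed Archive Review", "start_year": 1980, "end_year": 2005},
]

def available_in_year(record, year):
    """Treat an open-ended holding as running right up to the current year."""
    end = record["end_year"] or date.today().year
    return record["start_year"] <= year <= end

# What the system effectively did: only trust the years present in the metadata.
naive = [h["title"] for h in holdings
         if h["end_year"] is not None and h["start_year"] <= 2015 <= h["end_year"]]

# What the user expected: an open-ended subscription covers the current year too.
expected = [h["title"] for h in holdings if available_in_year(h, 2015)]

print(naive)     # []
print(expected)  # ['Journal of Continuing Things']
```

The naive filter is roughly what our system was doing; the second check is what the user in the test assumed was happening.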

I would really like my software to tell the user if we have this specific article in a bound print volume of the Journal of Doing Things, exactly which of our location(s) that bound volume is located at, and if it’s currently checked out (from the limited collections, such as off-site storage, we allow bound journal checkout).

My software can’t answer this question, because our records are insufficient. Why? Not all of our bound volumes are recorded at all, because when we transitioned to a new ILS over a decade ago, bound volume item records somehow didn’t make it. Even for bound volumes we have — or for summary of holdings information on bib/copy records — the holdings information (what volumes/issues are contained) are entered in one big string by human catalogers. This results in output that is understandable to a human reading it (at least one who can figure out what “v.251(1984:Jan./June)-v.255:no.8(1986)”  means). But while the information is theoretically input according to cataloging standards — changes in practice over the years, varying practice between libraries, human variation and error, lack of validation from the ILS to enforce the standards, and lack of clear guidance from standards in some areas, mean that the information is not recorded in a way that software can clearly and unambiguously understand it.

(From the Bibliographic Wilderness blog: https://bibwild.wordpress.com/2015/11/23/linked-data-caution/)

Processes and descriptions that worked for library catalogues and librarians, in this case the statement v.251(1984:Jan./June)-v.255:no.8(1986), need translating before a non-librarian, or a computer, can understand what they mean.
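As an illustration of how brittle that translation is, here's a hypothetical sketch (not code from any real ILS) that tries to pull structure out of exactly that one statement with a regular expression. It happens to work for this string, but it would fall over as soon as practice varies (a supplement, a missing year, a different volume abbreviation), which is precisely the point the post is making:

```python
import re

# A deliberately naive attempt at parsing the summary holdings statement above.
statement = "v.251(1984:Jan./June)-v.255:no.8(1986)"

pattern = re.compile(
    r"v\.(?P<start_vol>\d+)\((?P<start_year>\d{4})[^)]*\)"       # v.251(1984:Jan./June)
    r"-v\.(?P<end_vol>\d+)(?::no\.\d+)?\((?P<end_year>\d{4})\)"  # -v.255:no.8(1986)
)

match = pattern.fullmatch(statement)
print(match.groupdict() if match else "could not parse")
# {'start_vol': '251', 'start_year': '1984', 'end_vol': '255', 'end_year': '1986'}
```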

It's a good and interesting blog post and raises some important questions about why, despite the seemingly large number of identifiers in use in the library world (or maybe because of them), it is so difficult to pull together metadata and descriptions of material and to consolidate versions together.  It's a problem that crops up across a range of work we try to do: in discovery systems, where we end up trying to normalise data from different systems to reduce the number of what seem to users to be duplicate entries; and in work with usage data, where trying to consolidate the usage of a particular article or journal becomes impossible when versions of that article are available from different providers, from institutional repositories, or from different URLs.
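A small, hypothetical sketch of the normalisation step this involves for usage data: the same article reported by three providers under three identifier forms only adds up once the identifiers are reduced to a common form. The rows and counts below are invented purely for illustration:

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical usage rows for the "same" article as reported by three providers.
usage_rows = [
    {"id": "https://doi.org/10.1234/ABC.5678", "downloads": 12},
    {"id": "doi:10.1234/abc.5678", "downloads": 7},
    {"id": "http://dx.doi.org/10.1234/abc.5678", "downloads": 3},
]

def normalise_doi(identifier):
    """Reduce the common DOI forms to a bare, lower-cased DOI string."""
    value = identifier.strip().lower()
    if value.startswith("doi:"):
        value = value[len("doi:"):]
    elif value.startswith("http"):
        value = urlparse(value).path.lstrip("/")
    return value

totals = defaultdict(int)
for row in usage_rows:
    totals[normalise_doi(row["id"])] += row["downloads"]

print(dict(totals))  # {'10.1234/abc.5678': 22}
```

And of course this only works where a common identifier such as a DOI exists at all; much of the difficulty is with material where it doesn't.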

"Plans are worthless, but planning is everything." – Dwight D. Eisenhower

I've always been intrigued by the differences between 'plans' and 'planning' and was taken by this quote from President Dwight D. Eisenhower.  Speaking to the National Defense Executive Reserve Conference in 1957, he was talking about how, when you are planning for an emergency, it isn't going to happen in the way you are planning, so you throw your plans out and start again.  But, critically, planning is vital; in Eisenhower's own words, "That is the reason it is so important to plan, to keep yourselves steeped in the character of the problem that you may one day be called upon to solve–or to help to solve."  There's a similar quote generally attributed to Winston Churchill (although I've not been able to find an actual source for it): "Plans are of little importance, but planning is essential."

Many examples of this sort of quote seem to come from a military background, along the lines that no plan will survive contact with reality.  But I think they also hold true for any project or activity.  Our plans will need to adapt to fit the circumstances and will, and must, change.  A plan is a document that outlines what you want to do, but it is based on the state of your knowledge at a particular time, often before you have started the activity.  It might have some elements based on experience of doing the same thing before, or doing a similar thing before, so that you are undertaking a repeatable activity and will have a greater degree of certainty about how to do X or how long Y will take.  But often that isn't the case.  So it's a starting point, your best guess about the activity.  You could think about a project as a journey, with the project plan as your itinerary.  You might set out with a set of times for this train or that bus, but you might find your train being delayed or taking a different route, and so your plan changes.

So you may start with your destination and a worked-out plan about how to get there.  But, and this is where planning is important, you also need some ideas about contingencies, options or alternative routes in case things don't quite work out how your plan said they should.  And this is the essence of why planning is important: it's about the process of thinking about what you are going to do in the activity.  You can think about the circumstances, the environment and the potential alternatives or contingencies in the event that something unexpected happens.

For me, I'm becoming more convinced that there's a relationship between project length and complexity and the level of detail, and the distance ahead, at which you can realistically plan.  At a high level you can plan where you want to get to, what you want to achieve and maybe how you will measure whether you've achieved it – you could characterise that as the destination.  But when it comes to the detail of anything that involves any level of complexity, newness or innovation, then a detailed project plan (the itinerary) has a shorter and shorter window of certainty.  A high-level plan is valuable, but expect that the detail will change.  Shorter periods of detailed planning seem to be more useful – becoming much more akin to the agile approach.

So when you look at your planned activity and resource at the start of the project and then compare them with the actual activity and resource, you'll often find there's a gap.  They didn't pan out how you expected at the start; well, they probably wouldn't and maybe shouldn't.  Part way into the project you know much more than when you started.  As Donald Rumsfeld put it, "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones".

As you go through your project, those 'unknown unknowns' become known, even if at some stages and in some projects it's akin to turning over stones only to find more stones underneath; but along the way you build up a better picture and build better plans for the next cycle of activity.  (And if you really need to know the differences between Planned and Actuals, you can use MS Project to baseline your plan and then re-baseline it to track how the plan has changed over time.)
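For what it's worth, the planned-versus-actual comparison a baseline gives you boils down to something like this small sketch, with made-up tasks and dates rather than anything from a real plan:

```python
from datetime import date

# Made-up tasks and finish dates, purely to illustrate a baseline comparison.
baseline = {
    "Data migration complete": date(2014, 11, 28),
    "Staff training complete": date(2015, 1, 9),
}
actual = {
    "Data migration complete": date(2014, 12, 12),
    "Staff training complete": date(2015, 1, 9),
}

for task, planned_finish in baseline.items():
    slip = (actual[task] - planned_finish).days
    status = f"{slip:+d} days against baseline" if slip else "on baseline"
    print(f"{task}: {status}")
```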

So we're slowly emerging from our recent LMS project, with a bit of time to stop and reflect, partly at least to get project documentation for lessons learned and suchlike written up and the project closed down.  We've moved from Voyager, SFX and EBSCO Discovery across to Alma and Primo.  We went from a project kick-off meeting towards the end of September 2014 to being live on Alma at the end of January 2015 and live on Primo at the end of April.

So it's time for a few reflections on this implementation.  I worked out the other day that this has been the fifth LMS procurement/implementation process I've been involved with, playing different roles, and sometimes similar ones, in each of them.  For this one I started as part of the project team but ended up leading the implementation stage.

Reflection One
Tidy up your data before you start your implementation.  Particularly your bibliographic data, but other data too if you can.  You might not be able to do much if you are on an older system, as you might not have the tools to sort out some of the issues.  But the less rubbish you take over to your nice new system, the less sorting out you've got to do on the new system.  And when you are testing your initial conversion, too much rubbish makes it harder to see the wood for the trees, in other words to work out which problems you need to fix by changing the way the data converts and which are just a consequence of poor data.  With bibliographic data the game has changed: you are now trying to match your data against a massive bibliographic knowledge base.
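As a flavour of what that tidying-up can look like in practice, here's a minimal sketch of a pre-migration audit. The file name and column names are hypothetical, and real checks would be driven by whatever your old system can actually export, but the principle is the same: find the rubbish before the conversion does.

```python
import csv
from datetime import date

# Hypothetical export file and column names; the checks themselves are examples.
def audit_records(path):
    problems = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if not row.get("isbn") and not row.get("issn"):
                problems.append((row["record_id"], "no standard identifier"))
            year = (row.get("pub_year") or "").strip()
            if not (year.isdigit() and 1400 <= int(year) <= date.today().year):
                problems.append((row["record_id"], f"suspect publication year: {year!r}"))
    return problems

if __name__ == "__main__":
    for record_id, issue in audit_records("bib_export.csv"):
        print(record_id, issue)
```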

Reflection Two
It might be ideal to plan to go live with both LMS and Discovery at the same time but it’s hard to do.  The two streams often need the same technical resources at the same time.  Timescales are tight to get everything sorted in time.  We decided that we needed to give users more notice of the changes to the Discovery system and make sure there was a changeover period by running in parallel.

Reflection Three
You can move quickly.  We took about four months from the start-up meeting to being live on Alma, but that means a very compressed schedule.  Suppliers have a well-rehearsed project plan; there's some flexibility, but it's deliberately a generic, tried-and-tested approach.  You have to be prepared to be flexible and push things through as quickly as possible.  There isn't much time for lots of consultation about decisions, which leads to…

Reflection Four
As much as possible, get your decisions about changes in policies and new approaches made before you start.  Or at least make sure that the people on the project team can get decisions made quickly (or make them themselves) and can identify, from the large number of documents, guidance notes and spreadsheets to work through, what the key decisions will be.

Reflection Five
Get the experts who know about each of the elements of your LMS/Discovery systems involved in the project team.  There's a balance between having too many and too few people on your project team, but you need people who know your policies, processes, practices and workflows; who know your metadata (and metadata in general, in enough detail to configure normalisation, FRBRisation and so on); and who know your technology and how to configure authentication and CSS.  Your project team is vital to your chances of delivering.

Reflection Six
Think about your workflows and document them.  Reflect on them as you go through your training.  LMS workflows have some flexibility but you still end up going through the workflows used by the system.  Whatever workflows you start with you will no doubt end up changing or modifying them once you are live.

Reflection Seven
Training.  Documentation is good.  Training videos are useful and have the advantage that they can be used whenever people have time.  But you still need a blended approach: staff can't watch hours of videos, and you need to give people training on how your policies and practices will be implemented in the new LMS.  So be prepared to run face-to-face sessions for staff.

Reflection Eight
Regular software updates.  Alma gets monthly software updates.  Moving from a system that was relatively static, we wondered how disruptive that would be.  Advice from other libraries was that it wasn't a problem, and it doesn't seem to be.  Updated user guides and help appear in the system, and the changes happen over the weekend when we aren't using the LMS.

Reflection Nine
It's Software as a Service, so it's all different.  I think we were used to Discovery being provided this way, so that's less of a change.  The LMS was previously run by our corporate IT department, so in some senses it has just moved from one provider to another.  We have a bit less control and flexibility to do stuff with it, but that's OK; on the other hand we have more powerful tools and APIs.

Reflection Ten
Analytics is a good and powerful tool, but build up your knowledge and expertise to get the best out of it.  We've reduced our reports and run a smaller number than we thought we'd need.  Scheduled reports, widgets and dashboards are really useful, and we're still pretty much scratching the surface of what we can do.  Access to the community reports that others have created is very useful, especially when you are starting out.

Reflection Eleven
Contacts with other users are really useful.  Sessions talking to other customers, User Group meetings and the mailing lists have all been really valuable.  An active user community is a vital asset for any product, not just the open source ones.

and finally, Reflection Twelve
We ran a separate strand of user research with students into what they wanted from library search.  This was invaluable: it gave us evidence to help in the procurement stage, but it particularly helped shape the decisions made about how to set up Primo.  We've been able to say: this is what library users want, and we have the evidence for it.  That has been really important in being able to challenge thinking based on what we librarians think users want (or what we think they should want).

So, twelve reflections about the last few months.  Interesting, enlightening, enjoyable, frustrating at times, and tiring.  But worthwhile and achievable, and something that is allowing us to move away from a set of mainly legacy systems, not well integrated and not so easy to manage, to a set of systems that are better integrated, have better tools and, perhaps as importantly, give us a better platform to build from.

Holborn Circus – I was struck by the different angles of the buildings

Themes

For me, two big themes came to mind after this year's Future of Technology in Education Conference (FOTE): firstly, creativity, innovation and co-creation; and secondly, how fundamental data and analytics are becoming.

Creativity, innovation and co-creation

Several of the speakers talked about innovation and creativity.  Dave Coplin talked of the value of Minecraft and Project Spark and the need to create space for creativity, while Bethany Koby showed us examples of some of the maker kits ‘Technology Will Save Us’ are creating.

Others talked of 'flipping the classroom' and learning from students, as well as co-creation, and it was interesting that in the tech start-up pitchfest a lot of the ideas were student-created tools, some working in the area of collaborative learning.

Data and analytics

The second big trend for me was analytics and data.  I was particularly interested to see how many of the tools and apps being pitched at the conference had an underlying layer of analytics.  Evaloop, working in the area of student feedback; Knodium, a space for student collaboration; Reframed.tv, offering interaction and sharing tools for video content; Unitu, an issues-tracking tool; and MyCQs, a learning tool – all seemed to make extensive use of data and analytics, while Fluency included teaching analytics skills.  It is interesting to see how many app developers have learnt the lesson of Amazon and Google about the value of the underlying data.

Final thoughts and what didn’t come up at the conference

I didn't hear the acronym MOOC at all – slightly surprising as it was certainly a big theme of last year's conference.  Has the MOOC bubble passed, or is it just embedded into the mainstream of education?  Similarly Learning Analytics as a specific theme: analytics and data were certainly mentioned (as I've noted above), but of Learning Analytics there was not a mention – maybe it's embedded into HE practice now?

Final thoughts on FOTE.  A different focus to previous years but still with some really good sessions and the usual parallel social media back-channels full of interesting conversations. Given that most people arrived with at least one mobile device, power sockets to recharge them were in rather short supply.

A Friday in early October, so it must be time for ULCC's Future of Technology in Education at Senate House in London. I've been fortunate to be able to go several times, but it is always a scramble to get one of the scarce tickets when they are released on Eventbrite during August. They often seem to get released when I am away on holiday, so I've sat in a variety of places and booked a ticket for FOTE.

The conference usually gives a good insight into the preoccupations of educational technologists at a particular time. In some ways I tend to use it as a bit of a checklist as much as a conference that surfaces completely new things. So it is a case of looking at the trends and thinking about how they are relevant to us, what we are doing in that area, and whether there are other things we need to be thinking about.

Current preoccupations in this area are certainly around the practicalities, ethics and so on of learning analytics. It was interesting to see that Arkivum were here with a stand, which reflects a current preoccupation with Research Data Management.

I know I haven’t been blogging much since the Summer, mainly due to too many other things going on, a new library management system and discovery system implementation primarily. So I want to find a bit of time to reflect on FOTE and our new LMS.


OK, so it's the time of year to reflect back on the last year and look forward to the new year.

Blogging
I've definitely blogged less (24 posts in 2013 compared with 37 in 2012 and 50 in 2011).  Mind you, the 'death of blogging' has been announced here and there, and there seem to be fewer library bloggers than in the past, so maybe blogging less is just reflecting a general trend.  Comments about blogging suggest that Tumblr, Twitter or Snapchat are maybe taking people's attention (both bloggers' and readers') away from blogs.  But I'm not 'publishing' through other channels particularly, other than occasional tweets, so that isn't the reason for me to blog less.  There has been a lot going on, but that's probably not greatly different from previous years.  I think I've probably been to fewer conferences and seminars, particularly internal seminars, so that has been one area where I've not had as much to blog about.

To blog about something or not to blog about it
I've been more conscious of not blogging about some things that in previous years I probably would have blogged about.  I don't think I blogged about the Future of Technology in Education conference this year, although I have done in the past.  Not because it wasn't interesting, because it was, but perhaps from a sense that I've blogged about it before and might just be repeating myself.  With the exception of posts about website search and activity data, I've not blogged much about some of the work that I've been doing.  So I've blogged very little about the digital library work, although it (and the STELLAR project) were a big part of the interesting stuff that has been going on.

Thinking about the year ahead
I've never been someone who sets out predictions or new year resolutions.  I've never been convinced that you can actually predict (and plan) too far ahead in detail without too many variables fundamentally changing those plans.  There's a quote attributed to various people along the lines that 'no plan survives contact with the enemy', and I'd agree with that sentiment.  However much we plan, we are always working with an imperfect view of the world.  Circumstances change and priorities vary, and you have to adapt to that.  Thinking back to FOTE 2013, it was certainly interesting to hear BT's futurologist Nicola Millard describe her main interest as being the near future, and herself as more of a 'soon-ologist' than a futurologist.

What interests (intrigues, perhaps) me more is less about planning and more about 'shaping' a future; so more change management than project management, I suppose.  But I think it is more than that: how do the people who carve out a new 'reality' go about making that change happen?  Maybe it is about realising a 'vision', but assembling a 'vision' is very much the easy part of the process.  Getting buy-in to a vision does seem to be something that we struggle with in a library setting.

On with 2014
Change management is high on the list for this year.  We've done a certain amount of the 'visioning' to get buy-in to funding a change project.  So this year we've got work to do to procure a complete suite of new library systems (the first time here, I think, for 12 years or so), in a project called 'Library Futures' that also includes some research into student needs from library search and the construction of a 'digital skills passport'.  I've also got continuing work on digital libraries/archives as we move that work from development to live, alongside work with activity data, our library website and particularly work on integrating library stuff much more into a better student experience.  So hopefully some interesting things to blog about.  And hopefully a few new pictures to brighten up the blog (starting with a nice flower picture from Craster in the summer).

One of the differences between working in an academic library and a public library is that writing an article to be published in a proper 'academic' journal becomes more likely.  It becomes something you might do, whereas in the past it wouldn't have been something I would have particularly considered.  Articles for 'trade' publications maybe, possibly in one of the library technology journals, but not something that was particularly high up on the list of things to do.

As an aside, I've felt that the importance of journals (or serials) is one of the biggest differences between public and academic libraries.  The whole journal infrastructure (both the technical and the publishing aspects) was never particularly prominent on the agenda of a public library.  It's interesting, though, to find that there's now a pilot to provide public walk-in access to academic journals through public libraries.  I will be fascinated to see how that pilot turns out, as my experience in public libraries was that we rarely had any demand for widespread academic journal access over and above the odd inter-library loan article request.  So it will be interesting to see what demand they see and how it might be promoted to build up an audience for this material.  My suspicion has long been that the reason for the lack of demand was that library users simply didn't have an expectation that it might be possible.

Going through the publication process for an article (even as a co-author) has been a useful experience in helping me understand more about the publishing process that academics go through as part of their professional life.  Being faced with the practical decision of whether to go open access and pay an article processing charge (APC), or to publish in a subscription journal (a choice between author pays and customer pays), throws a sharp focus on the practical implications of Green and Gold Open Access.  Getting a copy of an early version of the document into the institutional repository was another task that had to be included.

It's been interesting to see how the focus after publication swiftly turns to a list of things to do to promote the article, such as setting up your identity on Google Scholar and linking your publication to it (fascinating for me in that it showed up a report for a project as well as a long-forgotten dissertation listed in WorldCat).  There are also things like establishing an ORCID iD (which put me in mind of LinkedIn for academics for some reason) and then linking your publication to that.  Although I've been aware of the importance of citations (and I work at one of the few UK academic libraries with a bibliometrician post), working through that list of promotional tasks really does make you realise how critical it is for an academic's reputation and career that their papers are cited.

To Birmingham today for the second meeting of the Jisc LAMP (library analytics and metrics project) community advisory and planning group. This is a short Jisc-managed project that is working to build a prototype dashboard tool that should allow benchmarking and statistical significance tests on a range of library analytics data.

The LAMP project blog at http://jisclamp.mimas.ac.uk is a good place to start to get up to speed with the work that LAMP is doing and I’m sure that there will be an update on the blog soon to cover some of the things that we discussed during the day.

One of the things that I always find useful about these types of activity, beyond the specific discussions and knowledge sharing about the project and the opportunity to talk to other people working in the sector, is that there is invariably some tool or technique that gets used in the project or programme meetings that you can take away and use more widely. I think I’ve blogged before about the Harvard Elevator pitch from a previous Jisc programme meeting.

This time we were taken through an approach of carrying out a review of the project a couple of years hence, where you had to imagine that the project had failed totally. It hadn’t delivered anything that was useful, so no product, tool or learning came out of the project. It was a complete failure.

We were then asked to try to think about reasons why the project had failed to deliver. So we spent half an hour or so individually writing reasons onto post-it notes. At the end of that time we went round the room reading out the ideas and matching them with similar post-it notes, with Ben and Andy sticking them to a wall and arranging them in groups based on similarity.

It quickly shifted away from going round formally to more of a collective sharing of ideas but that was good and the technique really seemed to be pretty effective at capturing challenges. So we had challenges grouped around technology and data, political and community aspects, and legal aspects for example.

We then spent a bit of time reviewing and recategorising the post-it notes into categories that people were reasonably happy with. Then came the challenge of going through each of the groups of ideas and working out what, if anything, the project could or should do to minimise the risk of that possible outcome happening. That was a really interesting exercise to identify some actions that could be done in the project such as engagement to encourage more take up.

A really interesting demonstration of quite a powerful technique that’s going to be pretty useful for many project settings. It seemed to be a really good way of trying to think about potential hurdles for a project and went beyond what you might normally try to do when thinking about risks, issues and engagement.

It's interesting to me how so many of the good project management techniques work on the basis of working backwards, whether that is writing tasks for a One Page Project Plan by describing the task as if it has been completed (e.g. 'Site launch completed'), or working backwards from an end state to plan out the steps and the timescale you will have to go through. These both envisage what a successful project looks like, while the pre-mortem thinks about what might go wrong. A useful technique.

I haven't blogged for a few weeks, so it was a bit of a surprise to find that WordPress.com has had a bit of a makeover when I logged back in to it today.  It's got a bit of a fresher feel with new colours and fonts, although stuff seems, so far, to mainly be in the same places it was before.

As far as I can tell most of the options and features seem to be the same, but they just look a bit different.  I’m sure I’ll find a few subtle changes, but probably within a few days I’ll have forgotten what things used to be like before.

It reminds me of a comment made by one of the presenters at FOTE last year, that 'when was the last time that Amazon sent someone round to your house and said: "sorry, we've changed the interface a bit, would you like some training?"'.  I don't recall seeing a message, other than one from WordPress pointing out I'd been blogging for four years (although a bit erratically – in time terms, or in terms of content and style maybe?).  But it doesn't really matter to me that I haven't read a set of release notes, gone through a training session, or spent time trying stuff out.  Try it out as you go, learn as you go along, dive right in.  As someone who has worked with IT systems for years, that's a bit of an IT trait, but it looks like something that is now a mainstream expectation.  Don't expect me to learn your interface first; expect me to try it and learn as I go along.  If I don't like it, I might tell you, might live with it, or might switch to an alternative.
