
Many academic libraries have invested in web-scale discovery systems such as EBSCO Discovery or Primo (to name just two), and many will also have built lists of library resources for their users, whether in LibGuides or in other bespoke systems.  Often these products are combined with IP authentication systems (such as EZProxy) to connect directly to library resources.  But for that approach to work, library users have to be on campus, logged into the campus network, or have found their way to (and through) the relevant library system that can give them the direct link to the resource.  It essentially forces users through a library gateway, which seems to me to replicate the print-based concept of a library, where the user has to be physically present to make use of the resources.  And that doesn’t really gel with the multi-device, network-scale, digital world that our users inhabit, with their need to access what they need from wherever they are.

If your users aren’t starting their search in the library, but are finding resources via Google or from references, how do they get access to the resource?  We’ve seen often enough, in almost all of our discovery system testing, that what users want is to find the thing they need and straight away get a link to the PDF.  How do libraries get closer to that?  There is the federated access approach, where users log in at the point of access to the resource.  But users can often struggle to notice the relevant login link on the publisher’s ‘paywall’ page, and then have to tackle the ‘where are you from’ federated access management game.  Feedback suggests that users are pretty frustrated even to see a paywall page asking for payment to view the article, and don’t always realise that there might be a route to the article without paying.  The publisher-led RA21 initiative is piloting improvements to this experience, with some proof-of-concept work looking at ways of making it better for users.  It’s an approach that has raised some concerns, particularly around its privacy implications.

For a while now there have been some other approaches.  A number of libraries (including the OU) have offered tools (typically bookmarklets that plug into a browser) to help users find the right link by rewriting the publisher URL as an ‘ezproxied’ alternative.  Such tools have had a small take-up and require some maintenance to cope with continued updates to browsers.  Utrecht, one of the pioneers of alternative approaches, offers such a tool with its Get Access button.  Arising from the Utrecht work, the LEAN Library Access browser extension has been developed as a commercial product and has already been taken up by Manchester and others.  As well as connecting users to the ezproxied version of the resource, the browser extension offers Library Assist, providing customised support messages tailored to different resources, and Library Alternatives, linking to open access versions.  One of the advantages of the LEAN approach is that maintaining the tool to cope with browser changes doesn’t have to be done by the library.
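To make the bookmarklet idea concrete, here is a minimal sketch of the URL-rewriting trick these tools rely on, assuming a hypothetical EZProxy instance at ezproxy.example.ac.uk (substitute your own proxy hostname):

```javascript
// Minimal EZProxy bookmarklet sketch (all one line when saved as a bookmark).
// EZProxy's standard 'login?url=' starting point does the rest;
// the hostname below is a placeholder for your own proxy.
javascript:void(location.href='https://ezproxy.example.ac.uk/login?url='+encodeURIComponent(location.href));
```

Saved as the URL of a browser bookmark, clicking it on a publisher page sends the user through the library’s proxy login and back to an authenticated copy of the same page.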

Kopernio is another approach.  It has been around in beta for a little while and is also a browser extension.  It offers integration with Google Scholar and PubMed, and will add a PDF link into Google Scholar, for example.  It also offers a place to store your PDFs, ‘MyLocker’.  You can associate it with an institution, and once you log in it appears to store your login details in the browser.  Kopernio also searches for open access material, stating that it indexes ‘a range of additional data sources from which PDFs can be retrieved: Open access publishers, Institutional repositories, Pre-print servers, Google Scholar and your Kopernio search history’.  It’s a freemium model, so there are limits on the free version (storage limits, for example), and a premium version is coming soon, aimed at both researchers and institutions.  It has been developed by the original creators of Mendeley, so it comes from a different perspective to the library-derived approaches.  It has certainly picked up on the researcher need for one-click access to PDFs, and it offers a Library Guides feature that gives a customised guide to using Kopernio for your institution.  Kopernio seems to be available for Chrome at the moment.

It will be interesting to see what the take-up of these types of browser tools might be, particularly as there are two different models: LEAN targets libraries to buy into a subscription, while Kopernio offers a freemium route to drive adoption.  What I find particularly fascinating is the way that open access content is embedded into these tools, and therefore into the workflow of users.  We are seeing this to an extent with discovery systems, which are adding more open access content into their knowledge bases, in some cases by harvesting open access aggregators such as CORE.  With open access increasing in importance, it is good to see innovations appearing that pull open access and subscription material together.


Libraries have long been contemplating the implications of a shift from print to digital, and underlying that thinking is the perception of print and digital as very much a binary choice.  But is that necessarily the case?  A research project into ‘next generation paper’ by the University of Surrey and the Open University envisages some form of hybrid between print and digital, where links or buttons within the physical paper connect to digital materials.

The concept of interactive paper has been around for a while, as an article in the New Scientist from ten years ago shows.  So does this type of technology fundamentally change the way libraries need to think about print?  Does it give print a new purpose and greater longevity, combining the convenience of a portable format with a means to link directly to digital content?  Is it anything better than a smarter QR code?  Does it just replicate the inflexibility of printed material, which can’t be updated with new links or new information?  Or could it be a route to maintaining the relevance of the printed document, by linking to the latest information in digital form?

For libraries it potentially makes a stronger connection between print and digital content, with maybe a need to describe the relationship between the materials in a different way: they are related to each other and also depend on each other.  An intriguing development, and it will be interesting to see how, and if, the technology starts to appear in the mainstream.


The news, reported in an article by Marshall Breeding in American Libraries, that EBSCO has decided to support a new open source library services platform is a fascinating development.  To join with Kuali OLE but to develop what will essentially be a different open source product is a big step for the library technology sector.  It’s particularly interesting that EBSCO has gone the route of providing financial support to an open source system, rather than buying a library systems company.  The scope and timescales are ambitious: something ready for 2018.

Open source library management systems haven’t had the impact that systems like Moodle have had in the virtual learning environment sector, and in some ways it is odd that academic libraries haven’t been willing to adopt such a system, given that universities do seem to have an appetite for open source software.  Maybe open source library systems haven’t been developed sufficiently to compete with commercial providers.  Software as a Service (SaaS) is now coming to be accepted by corporate IT departments as a standard method of service provision, something that I think a couple of the commercial providers realised at quite an early stage, so it is good to see this initiative recognising that reality.  It will be interesting to see how this develops.

I was particularly interested in a term I came across in a post on the Nesta blog the other week: ‘Innovation in the public sector: Is risk aversion a cause or a symptom?’  The post talks about Organisation Debt and Organisational Physics and is a really interesting take on why large organisations can struggle with innovation.  It’s well worth a read.  It starts by referencing the concept of ‘technical debt’, described in the post as “… where quick fixes and shortcuts begin to accumulate over time and eventually, unless properly fixed, can damage operations.”  It’s a term that tends to relate to software development, but it started me thinking about how the concept might be relevant to the library world.

If we expand the technical debt concept to the library sector, I’d suggest at least three areas where it might have some resonance: library systems, library practices, and maybe a third around library ‘culture’, potentially a combination of collections, services and something of the ‘tradition’ of what a library might be.

Library systems
Our systems are a complex and complicated mix: library management systems, e-resources management systems, discovery, OpenURL link resolvers, PC booking systems and so on.  It can be ten years or more between libraries changing their LMS, and although, with library services platforms, we are seeing some consolidation of systems into a single product, there is still a job to do integrating legacy systems into the mix.  For me the biggest area of ‘technical debt’ comes in our approach to linking and websites.  Libraries typically spend significant effort on making links persistent, coping with the transition from one web environment to another by redirecting URLs.  It’s not uncommon to have redirection processes in place to cope with direct links to content on previous websites, trying to connect users directly to the replacement pages.  Yet on the open web ‘link rot’ is a simple fact of life.  Managing these legacy links is, I’d suggest, a significant technical debt that libraries carry.
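To make that concrete, here is a minimal sketch of the sort of legacy-link redirect layer libraries end up maintaining, assuming a Node.js/Express front end; the old paths and target URLs are illustrative only:

```javascript
// Sketch of a legacy-link redirect table (Node.js + Express).
// Every redesign adds another generation of old URLs to keep alive.
const express = require('express');
const app = express();

// Illustrative mapping of retired paths to their current equivalents.
const legacyLinks = {
  '/oldsite/journals.html': 'https://library.example.ac.uk/collections/journals',
  '/oldsite/databases.html': 'https://library.example.ac.uk/collections/databases',
};

app.use((req, res, next) => {
  const target = legacyLinks[req.path];
  if (target) return res.redirect(301, target); // permanent redirect keeps old links working
  next(); // otherwise fall through to the current site
});

app.listen(3000);
```

A table like this is cheap to add and expensive to retire, which is exactly how the debt accumulates.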

Library practices
I think you could point to several aspects of library practice that fall under the category of technical debt, but I’d suggest the primary one is our library catalogue and cataloguing practices.  Our practices change across the years, but overall the quality of our older records is often lower than we’d want.  Yet we typically carry those records across from system to system.  We try to improve them or clean them up, but frequently it’s hard to justify the resource spent on ‘re-cataloguing’ or ‘retrospective cataloguing’.  Newer approaches, making use of collective knowledge bases and linking holdings to records, have some impact on our ability to update our records, but the quality of some knowledge-base records is also not always up to the level that libraries would like.

Library culture
You could also describe other aspects of the library world as showing the symptoms of technical debt.  Take our physical collections of print resources: increasingly unmanaged and often unused, as constrained resources are directed to higher priorities and more attention goes into building online collections of ebooks, for example.  You could even see a common thread in the whole concept of a ‘library’: the popular view of a library as a place of books means that while libraries develop new services, they often struggle to change their image to include the new world.

The end of 2015 and the start of 2016 seem to have delivered a number of interesting reports and presentations relevant to the library technology sphere.  Ken Chad’s latest paper, ‘Rethinking the Library Services Platform’, picks up on the lack of interoperability between library systems, as does the new BiblioCommons report on the public library sector, ‘Essential Digital Infrastructure for Public Libraries in England’, which comments that “In retail, digital platforms with modular design have enabled quickly-evolving omnichannel user experiences. In libraries, however, the reliance on monolithic, locally-installed library IT has deadened innovation”.

As ‘Rethinking the Library Services Platform’ notes, in many ways the term ‘platform’ doesn’t really match the reality of the current generation of library systems.  They aren’t a platform in the way an operating system such as Windows or Android is; they don’t operate in a way that lets third parties build applications to run on them.  Yes, they offer integration with financial, student and reference management systems, but essentially they are the traditional library management system reimagined for the cloud.  Many of the changes are a consequence of what becomes possible with a cloud-based solution: shared knowledge bases and multi-tenanted applications shared by many users, as opposed to local databases and locally installed applications.  The approach from the dwindling number of suppliers is to build as many products as possible to meet library needs, sometimes by developing them in-house (e.g. Leganto) and sometimes by acquiring companies whose products can be brought into the supplier’s ecosystem.  The acquisition model is exactly the same as that practised by both traditional and new technology companies as a way of building their reach.  I’m starting to view the platform as much more in line with the approach a company like Google takes, with a broad range of products aiming to secure customer loyalty to one ecosystem rather than another.  So it may not be so surprising that technology innovation, which to my mind is largely driven by vendors delivering to what they see as library needs and shaped by what they see as an opportunity, isn’t producing the sort of platform the term suggests.  As Ken notes, Jisc’s LMS Change work discussed back in 2012 a loosely-coupled, component-based approach, giving libraries the ability to integrate different elements for the best fit to their needs from a range of options.  But in my view the options have very much narrowed since 2012/13.

The BiblioCommons report I find particularly interesting, as it includes an assessment of how the format silos between print and electronic lead to a poor experience for users; in this case, how ebook access simply doesn’t integrate into OPACs, with applications such as OverDrive sitting separate from the OPAC (Buckinghamshire’s library service, with its separate ebooks platform and library catalogue, is typical).  Few if any public libraries will have invested in the class of discovery systems now common in higher education (and essentially being proposed in this report), but even with discovery systems the integration of ebooks isn’t as seamless as we’d want, with users ending up in a variety of different platforms, each with its own interface and restrictions on what can be done with the ebook.  In some ways, though, the public library ebook offer, which does integrate with the consumer ebook world of Kindle ebooks, is better than the HE world of ebooks, even if the integration through discovery platforms in HE is better.  What did intrigue me about the BiblioCommons proposal is the plan to build some form of middleware using ‘shared data standards and APIs’, which leads me to wonder whether this could be part of the impetus for changing the way library technology interoperates.  Section 10.3 proposes to ‘deliver middleware, aggregation services and an initial complement of modular applications as a foundation for the ecosystem, to provide a viable pathway from the status quo towards open alternatives’, so maybe this might start to make that sort of component-based platform and ecosystem a reality.

Discovery is the challenge that Oxford’s ‘Resource Discovery @ The University of Oxford’ report tackles.  The report, by the consultancy Athenaeum 21, looks at discovery from the perspective of a world-leading research institution with large collections of digital content, and at connecting not just resources but researchers, through visualisation tools for research networks and advanced search tools such as Elasticsearch.  The recommendations include activities described as ‘Mapping the landscape of things’, ‘Mapping the landscape of people’ and ‘Supporting researchers’ established practices’.  In some ways the problems described echo the challenge faced by public libraries of finding better ways to connect users with content, but on a different scale, and one that takes in other cultural sector institutions such as museums.
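As an aside, for anyone unfamiliar with why Elasticsearch gets namechecked as an ‘advanced search tool’, here is a minimal sketch of the kind of relevance-ranked query a discovery layer built on it might run; the index name and fields are illustrative, not anything from the Oxford report:

```javascript
// Sketch: a relevance-ranked search against a hypothetical 'catalogue' index
// (Node 18+ fetch, Elasticsearch running locally).
const res = await fetch('http://localhost:9200/catalogue/_search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: {
      multi_match: {
        query: 'medieval manuscripts',
        fields: ['title^2', 'abstract'], // boost title matches over abstract matches
      },
    },
  }),
});
const hits = (await res.json()).hits.hits;
console.log(hits.map(h => h._source.title));
```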

I also noticed a presentation from Keith Webster of Carnegie Mellon University, ‘Leading the library of the future: W(h)ither technical services?’.  The slidedeck gives a great summary of where academic libraries are now and the challenges they face with open access, pressure on library budgets and changes in scholarly practice.  A wide-ranging presentation, it covers the changes that led to the demise of chains like Borders and Tower Records, and sets the library in the context of changing models of media consumption.  Of particular interest to me were the later slides about areas for development which, like the other reports, had improving discovery as part of the challenge.  The slides clearly articulate the need for innovation as an essential element of library work (measured, for example, as a percentage of time spent compared with routine activities) and the value of metrics around impact, something of particular interest in our current library data project.

Four different reports, across different types of libraries and cultural institutions, but all of them seem to me to be grappling with one issue: how do libraries reinvent themselves to maintain a role in the lives of their users when their traditional role is being eroded, or when other challengers are out-competing them, whether by improving discovery, by changing to stay relevant, or by doing something different that users will value.


In the early usability tests we ran for the discovery system we implemented earlier in the year, one of the aspects we looked at was the search facets.  Included amongst the facets is a feature to let users limit their search by a date range.  That sounds reasonably straightforward: filter your results by the publication date of the resource, narrowing them down by putting in a range of dates.  But one thing that emerged during the testing is that there’s a big assumption underlying this concept.  A user tried to use the date range to restrict results to journals for the current year and was a little baffled that the search didn’t work as they expected.  Their expectation was that by putting in 2015 it would show them journals in that subject where we had issues for the current year.  But the system didn’t know that continuing journals, with an open-ended date range, were available for 2015: the metadata didn’t include the current year, just a start date for the subscription period.  So the system didn’t ‘know’ that the journal was available for the current year.  That exposed for me the gulf between user and library understanding, and how our metadata and systems don’t match user expectations.
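The fix is conceptually simple: coverage checks need to treat a missing end date as ‘ongoing’.  A minimal sketch of the logic, with an illustrative holdings record:

```javascript
// Sketch: does a journal's holdings cover a given year?
// A naive "between start and end" test fails for ongoing subscriptions,
// where the metadata records a start year but no end year.
function coversYear(holding, year) {
  if (year < holding.startYear) return false;
  // Open-ended subscription: treat it as running up to the current year,
  // which is exactly what the system under test failed to do.
  if (holding.endYear == null) return year <= new Date().getFullYear();
  return year <= holding.endYear;
}

// Illustrative record: a subscription running from 2009, still current.
console.log(coversYear({ startYear: 2009, endYear: null }, 2015)); // true
```

That usability testing session came to mind when reading the following blog post about linked data.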

I would really like my software to tell the user if we have this specific article in a bound print volume of the Journal of Doing Things, exactly which of our location(s) that bound volume is located at, and if it’s currently checked out (from the limited collections, such as off-site storage, we allow bound journal checkout).

My software can’t answer this question, because our records are insufficient. Why? Not all of our bound volumes are recorded at all, because when we transitioned to a new ILS over a decade ago, bound volume item records somehow didn’t make it. Even for bound volumes we have — or for summary of holdings information on bib/copy records — the holdings information (what volumes/issues are contained) are entered in one big string by human catalogers. This results in output that is understandable to a human reading it (at least one who can figure out what “v.251(1984:Jan./June)-v.255:no.8(1986)”  means). But while the information is theoretically input according to cataloging standards — changes in practice over the years, varying practice between libraries, human variation and error, lack of validation from the ILS to enforce the standards, and lack of clear guidance from standards in some areas, mean that the information is not recorded in a way that software can clearly and unambiguously understand it.  From https://bibwild.wordpress.com/2015/11/23/linked-data-caution/ the Bibliographic Wilderness blog

Descriptions that worked for library catalogues and librarians, in this case the holdings statement v.251(1984:Jan./June)-v.255:no.8(1986), need translating before a non-librarian, or a computer, can understand what they mean.
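To illustrate just how much structure is locked up in that one string, here is a sketch of what parsing it might look like.  This single regular expression copes with this one statement; real holdings statements vary far more than this, which is the whole point of the caution:

```javascript
// Sketch: teasing structure out of a free-text holdings statement like
// "v.251(1984:Jan./June)-v.255:no.8(1986)". Illustration only; real
// statements vary too much for one pattern to handle.
const statement = 'v.251(1984:Jan./June)-v.255:no.8(1986)';

const endpoint = /v\.(\d+)(?::no\.(\d+))?\(([^)]+)\)/g;
const parsed = [...statement.matchAll(endpoint)].map(m => ({
  volume: Number(m[1]),
  issue: m[2] ? Number(m[2]) : null,
  chronology: m[3], // e.g. "1984:Jan./June" -- still a string, still ambiguous
}));

console.log(parsed);
// [ { volume: 251, issue: null, chronology: '1984:Jan./June' },
//   { volume: 255, issue: 8, chronology: '1986' } ]
```

Even this ‘parsed’ version leaves the chronology as free text, and a slightly different cataloguing convention would break the pattern entirely.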

It’s a good and interesting post, and it raises some important questions about why, despite the seemingly large number of identifiers in use in the library world (or maybe because of them), it is so difficult to pull together metadata and descriptions of material to consolidate versions.  It’s a problem that surfaces across a range of work we do: in discovery systems, where we end up normalising data from different systems to reduce what look to users like duplicate entries; and in usage data, where consolidating the usage of a particular article or journal becomes impossible when versions of it are available from different providers, from institutional repositories, or at different URLs.

We’ve started using BrowZine (browzine.com) as a different way of offering access to online journals.  Up until recently there were iOS and Android app versions, but they have now been joined by a desktop version.

BrowZine’s interesting as it tries to replicate the experience of browsing recent copies of journals in a physical library.  It links into the library authentication system and is fed with a list of library holdings.  There are also some open access materials in the system.

You can browse for journals by subject or search for specific journals and then view the table of contents for each journal issue and link straight through to the full-text of the articles in the journals.  In the app versions you can add journal titles to your personal bookshelf (a feature promised for the desktop version later this year) and also see when new articles have been added to your chosen journals (shown with the standard red circle against the journal on the iOS version).

A useful tool if there’s a selection of journals you need to keep up to date with.  Certainly the ease with which you can get to the full text contrasts markedly with some of the hoops we seem to expect users to jump through in other library systems.

One of the interesting features of our new library game OpenTree, for me, is that you can engage with it in a few different ways.  At one level it’s a game, with points and badges for interacting with library content, resources and webpages.  It’s also social, so you can connect with other people and review and share resources.

But, as a user you can choose the extent that you want to share.  So you can choose to share your activity with all users in OpenTree, or restrict it so only your friends can see your activity, or choose to keep your activity private.  You can also choose whether or not things you highlight are made public.

So you’d wonder what value you’d get from it if you make your activity entirely private.  But you can use it as a way of tracking which library resources you are using, and you can organise them by tagging them and writing notes, so you’ve got a record of the resources you used for a particular assignment.  You might want to keep your activity private if you’re writing a paper and don’t want to share your sources, or if you aren’t so keen on the social aspects.

If you share your activities with friends, and maybe connect with people studying the same module as you, then you could see some value in sharing useful resources with fellow students you might not meet otherwise.  In a distance-learning institution with potentially hundreds of students studying your module, you might meet a few of them in local tutorials or on module forums but never connect with most people following the same pathway as you.

And some people will be happy to share, will want to get engaged with all the social aspects and the gaming aspects of OpenTree.  It will be really interesting to see how users get to grips with OpenTree and what they make of it and to hear how people are using it.

It will be particularly interesting to see how our users’ engagement might differ from that with the versions at the bricks-and-mortar universities of Huddersfield, Glasgow and Manchester.  OpenTree’s focus is online and digital, so it doesn’t include loans and library visits, and our users are often older, studying part-time and not campus-based.

In early feedback we’re already seeing a sense that some of the game aspects, such as the Subject leaderboard, are of less interest than expected.  Maybe that reflects students being much more focused on outcomes, although research (Tomlinson 2014, ‘Exploring the impact of policy changes on students’ attitudes and approaches to learning in higher education’, HEA) seems to suggest that this isn’t just a factor for part-time and distance-learning students, but a wider result of increased university fees and student loans.  It might also be that, because we haven’t gone for an individual leaderboard, there’s less personal investment, or just that users aren’t sure what it represents.


One of the projects we’ve been working on as part of our Library Futures programme is a product called OpenTree.  OpenTree is based on the Librarygame software from a small development team at ‘Running in the Halls’.  Librarygame adds gaming and social aspects to student engagement with library services.

Librarygame was developed originally as Lemontree for Huddersfield University (https://library.hud.ac.uk/lemontree/) and then updated and adopted as librarytree and BookedIn for Glasgow and Manchester Universities respectively (https://librarytree.gla.ac.uk/ and https://bookedin.manchester.ac.uk/).

Because Librarygame was originally based around engagement with physical libraries, taking data on loans from the library management system and on physical library visits from building access logs, the basic game model had to change a bit for a distance-learning university where students don’t visit the university library or borrow books.

Our main engagement with students is their use of online library resources and library websites.  Fortunately most of our resource access goes through EZProxy, so we were able to find a way of giving users who sign up to the game points for the resources they access.  We’ve also been able to add JavaScript tracking to a couple of our websites to give students points for visiting those sites.
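We haven’t published the tracking code itself, but the general shape is familiar from web analytics.  A minimal sketch, with a hypothetical endpoint, cookie name and payload rather than OpenTree’s real API:

```javascript
// Sketch of page-visit tracking of the kind described above.
// Endpoint, cookie name and payload are placeholders, not OpenTree's real API.
(function () {
  // Only report activity for users who have signed up and opted in.
  if (!document.cookie.includes('opentree_optin=1')) return;
  navigator.sendBeacon(
    'https://opentree.example.ac.uk/api/events', // hypothetical collection endpoint
    JSON.stringify({ type: 'page_visit', url: location.href, at: Date.now() })
  );
})();
```

sendBeacon (or an equivalent asynchronous request) keeps the tracking from slowing the page down for students who aren’t playing.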

OpenTree gives users points for accessing resources, and points build up into levels in the game.  Activities such as making friends, reviewing, tagging and sharing resources also earn badges.  We’ve also added a Challenges section to highlight activities and encourage users to try out different things, Being Digital for example.

Because it lists the library resources you’ve been accessing, I’ve already been finding it useful as a way of organising and remembering the resources I’ve used, so we’re hopeful that students will also find it useful and really get into the social aspects.

OpenTree launches to students in the autumn but is up-and-running in beta now.  A video introducing OpenTree is on YouTube at: https://www.youtube.com/watch?v=yeSU0FwVNvU

We’re really looking forward to seeing how students get on with OpenTree and already have a few thoughts about enhancements and developments, and no doubt other ideas will come up once more people start using it.


So we’re slowly emerging from our recent LMS project, with a bit of time to stop and reflect, partly at least to get project documentation, lessons learned and suchlike written up and the project closed down.  We’ve moved from Voyager, SFX and EBSCO Discovery across to Alma and Primo.  We went from a project kick-off meeting towards the end of September 2014 to being live on Alma at the end of January 2015 and live on Primo at the end of April.

So, time for a few reflections on this implementation.  I worked out the other day that this was the fifth LMS procurement/implementation process I’ve been involved with, in a mixture of different and similar roles.  For this one I started as part of the project team but ended up leading the implementation stage.

Reflection One
Tidy up your data before you start your implementation.  Particularly your bibliographic data, but other data too if you can.  You might not be able to if you are on an older system, as you might not have the tools to sort out some of the issues.  But the less rubbish you take over to your nice new system, the less sorting out you’ve got to do there.  And when you are testing your initial conversion, too much rubbish makes it harder to see the wood for the trees: to work out which problems you need to fix by changing the way the data converts, and which are just a consequence of poor data.  With bibliographic data the game has changed; you are now trying to match your data against a massive bibliographic knowledge base.
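Even a crude automated pass over an export helps here.  A minimal sketch of the kind of pre-migration check I mean, assuming records have already been exported to JSON for checking (a real export would be MARC, but the principle is the same):

```javascript
// Sketch: flag obviously suspect bibliographic records before migration.
// The fields and thresholds are illustrative.
function findSuspectRecords(records) {
  const thisYear = new Date().getFullYear();
  return records.filter(r =>
    !r.title ||                                      // no title at all
    (r.isbn && !/^[\dXx-]{10,17}$/.test(r.isbn)) ||  // malformed ISBN
    (r.pubYear && (r.pubYear < 1450 || r.pubYear > thisYear)) // implausible date
  );
}

const suspects = findSuspectRecords([
  { title: '', isbn: '9780141439600', pubYear: 2003 },
  { title: 'Bleak House', isbn: 'not-an-isbn', pubYear: 1853 },
]);
console.log(suspects.length); // 2 -- both need attention before the load
```

The point isn’t the specific rules; it’s that every record you fix before conversion is one less mystery when you’re checking the converted data.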

Reflection Two
It might be ideal to go live with both LMS and discovery at the same time, but it’s hard to do.  The two streams often need the same technical resources at the same time, and timescales are tight to get everything sorted.  We decided that we needed to give users more notice of the changes to the discovery system, and to make sure there was a changeover period by running in parallel.

Reflection Three
You can move quickly.  We took about four months from the start-up meeting to being live on Alma, but that means a very compressed schedule.  Suppliers have a well-rehearsed approach and project plan; there’s some flexibility, but it’s deliberately a generic, tried-and-tested approach.  You have to be prepared to be flexible and push things through as quickly as possible.  There isn’t much time for lots of consultation about decisions, which leads to…

Reflection Four
As much as possible, get your decisions about changes in policies and new approaches made before you start.  Or at least make sure that the people on the project team can get decisions made quickly (or make them themselves), and can identify, from the large number of documents, guidance notes and spreadsheets to work through, what the key decisions will be.

Reflection Five
Get the experts who know about each element of your LMS/discovery systems involved with the project team.  There’s a balance between having too many and too few people, but you need people who know your policies, processes, practices and workflows; who know your metadata (and metadata in general, in enough detail to configure normalisation, FRBRization and so on); and who know your technology and how to configure authentication and CSS.  Your project team is vital to your chances of delivering.

Reflection Six
Think about your workflows and document them, and reflect on them as you go through your training.  The LMS allows some flexibility, but you still end up going through the workflows the system uses.  Whatever workflows you start with, you will no doubt end up changing or modifying them once you are live.

Reflection Seven
Training.  Documentation is good.  Training videos are useful and have the advantage that they can be used whenever people have time.  But you still need a blended approach: staff can’t watch hours of videos, and you need to give people training on how your policies and practices will be implemented in the new LMS.  So be prepared to run face-to-face sessions for staff.

Reflection Eight
Regular software updates.  Alma gets monthly software updates.  Moving from a system that was relatively static, we wondered how disruptive this would be.  Advice from other libraries was that it wasn’t a problem, and it doesn’t seem to be.  Updated user guides and in-system help arrive with each release, and the changes happen over the weekend, when we aren’t using the LMS.

Reflection Nine
It’s Software as a Service, so it’s all different.  We were used to discovery being provided this way, so that’s less of a change, and the LMS was previously run by our corporate IT department, so in some senses it has just moved from one provider to another.  We have a bit less control and flexibility to do things with it, but on the other hand we have more powerful tools and APIs.
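As an example of those APIs, here is a minimal sketch of pulling a bib record over the Alma REST APIs documented on the Ex Libris Developer Network; the region host, MMS ID and API key below are placeholders:

```javascript
// Sketch: fetch a bib record from the Alma REST API (Node 18+, ES module).
// Host, MMS ID and key are placeholders; check the Developer Network docs
// for your region's endpoint.
const host = 'https://api-eu.hosted.exlibrisgroup.com';
const mmsId = '991234567890123'; // placeholder MMS ID
const apiKey = process.env.ALMA_API_KEY;

const res = await fetch(`${host}/almaws/v1/bibs/${mmsId}?format=json`, {
  headers: { Authorization: `apikey ${apiKey}` },
});
const bib = await res.json();
console.log(bib.title); // title of the requested record
```

That sort of direct, scriptable access to our own data simply wasn’t on offer with the old locally-run LMS.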

Reflection Ten
Analytics is a powerful tool, but build up your knowledge and expertise to get the best out of it.  We’ve reduced our reports and run a smaller number than we thought we’d need.  Scheduled reports, widgets and dashboards are really useful, and we’re still only scratching the surface of what we can do.  Access to reports that the community has shared is very useful, especially when you are starting out.

Reflection Eleven
Contacts with other users are really useful.  Sessions talking to other customers, user group meetings and the mailing lists have all been really valuable.  An active user community is a vital asset for any product, not just open source ones.

and finally, Reflection Twelve
We ran a separate strand of user research with students into what they wanted from library search.  This was invaluable: it gave us evidence to help at the procurement stage, and it particularly helped shape the decisions about how to set up Primo.  We’ve been able to say “this is what library users want, and we have the evidence”, and that has been really important in challenging thinking based on what we librarians think users want (or what we think they should want).

So, twelve reflections on the last few months.  Interesting, enlightening, enjoyable, frustrating at times, and tiring.  But worthwhile and achievable, and something that is allowing us to move from a set of mainly legacy systems, not well integrated and not easy to manage, to a set of systems that are better integrated, have better tools and, perhaps as importantly, give us a better platform to build from.
