
It’s intriguing how long it takes for a concept to rise and fall, and how some ideas persist in the face of evidence that contradicts them.  Digital natives, the idea (suggested by Marc Prensky) that younger people are intrinsically able to function effectively in a digital world, by dint of being born at a time of digital abundance, has spread out from the academic world and now seems established in the minds of many, quoted for example in this article from the BBC, and in this piece from Goldman Sachs drawing on data from Pew Research.  Yet within academic research the concept has been shown to be a myth.  A new paper by Kirschner and De Bruyckere, ‘The myths of the digital native and the multitasker’ (abstract available at https://doi.org/10.1016/j.tate.2017.06.001), reviews much of the recent research and concludes that there isn’t evidence that younger people are digital natives.  In their words:

“though learners in this generation have only experienced a digital connected world, they are not capable of dealing with modern technologies in the way which is often ascribed to them (i.e., that they can navigate that world for effective and efficient learning and knowledge construction).” (Kirschner & De Bruyckere 2017)

So Digital Natives – it’s not a thing.  It’s more complicated.

I wonder whether part of this might be a misunderstanding by non-academics when taking concepts from the academic world.  The ‘scientific method’, where researchers create a hypothesis that they test and then refine or change as a result of testing, seems to confuse lay people into thinking that academics are always changing their minds, when in fact it’s a process of enquiry in which knowledge moves forward by theorising, testing and refining.

So it makes me wonder about typology, the process of categorising things into types.  Another recent example suggested that there’s a linguistic way of distinguishing between Baby Boomers and Millennials by noting how they respond when someone says thank you.  Baby Boomers (defined as people born 1946-1964) are likely to say ‘You’re welcome’, while Millennials (1982-2002) are likely to say ‘No problem’, and there’s the suggestion that giving the ‘wrong’ response could be seen as annoying.  It interested me because I’m likely to respond with ‘No problem’ yet theoretically sit in the earlier category, though I’m conscious that I probably wouldn’t have used ‘no problem’ when I was younger.

Typology is particularly prevalent in work around personality types, and you see it most frequently in psychometric testing.  Much like digital natives it has become quite pervasive and established, with tests like Myers-Briggs being regularly used.  Yet psychology researchers have moved away from this approach in favour of thinking about personality traits, such as the Big Five.  Although practitioners seem convinced of the value of these psychometric tests, the research pretty consistently casts doubt on their validity, describing them, alongside learning styles, as neuromyths (e.g. Dekker et al., ‘Neuromyths in education: Prevalence and predictors of misconceptions among teachers’, Frontiers in Psychology, 2012).

But it is fascinating how these theories get embedded and adopted and then become difficult to shake off once the academic world has moved on, having abandoned a theory because it doesn’t seem to fit the evidence.  The attractiveness of typology is also interesting.  I can see the convenience factor at work in grouping into types, and I see it in the tendency in web analytics towards ‘segmentation’ and in the use of personas in UX work to stand as a representation of a ‘user type’.  But… this all increasingly suggests to me that when you are looking at categorisation you are looking at something much more fluid, where users might move from category to category depending on numerous factors (what they are doing at the time, perhaps), and that we’re using the categories as much as use cases to test how a product might work for a given scenario.


Wooden chart tool created for a programme on data featuring Hans Rosling

One of the great things about new projects is that they offer the opportunity to learn new skills as well as build on existing knowledge.  So our new library data project is giving plenty of opportunities to learn new things and new tools to help with data extraction and data analysis.

MySQL workbench
After a bit of experimentation with the best method of getting extracts of library data (including trying to do it through Access) we settled on using MySQL Workbench version 6.3 with read-only access to the database tables storing the library data.  It’s been a bit of a learning curve to understand the tool, the SQL syntax and the structure of our data, but direct access to the data means that the team can extract the data needed and quickly test out different options or extracts.  In the past I’ve mainly used tools such as Cognos or Oracle Business Intelligence, which essentially hide the raw SQL queries behind a WYSIWYG interface, so it’s been interesting to use this approach.  It’s been really useful to be learning the tool with the project team, because it means that I can get SQL queries checked to make sure they are doing what I think they are doing, and share queries across the team.

In the main I’m running the SQL query and checking that I’ve got the data I want, but then exporting the data as .csv to do further tidying and cleaning in MS Excel.  But I have learnt a few useful things, including how to add an anonymised ID as part of the query (useful if you don’t need the real ID but just need to know which users are unique, and much easier to do in SQL than in Excel).
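To illustrate the anonymised-ID trick, here’s a minimal sketch using Python’s built-in sqlite3 module rather than MySQL (the table and column names are invented): registering a hash function means the query itself returns a stable token instead of the real ID.

```python
import hashlib
import sqlite3

# Toy stand-in for a library usage table (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (user_id TEXT, resource TEXT)")
conn.executemany("INSERT INTO usage VALUES (?, ?)",
                 [("alice", "ejournal"), ("alice", "ebook"), ("bob", "ejournal")])

# Register a hashing function so the query returns anonymised IDs:
# the same real ID always maps to the same token, so users stay distinguishable.
conn.create_function("anon_id", 1,
                     lambda uid: hashlib.sha256(uid.encode()).hexdigest()[:12])

rows = conn.execute("SELECT anon_id(user_id), resource FROM usage").fetchall()
```

In MySQL itself you could do much the same with a hashing function such as SHA2() in the SELECT clause.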

Excel
I’ve certainly learnt a lot more about Excel.  It’s been the tool that I’ve used to process the data extracts, to join data together from other sources and (for the time being at least) to present tables and visualisations of the data.  Filtering and pivot tables have been the main techniques, with frequent use of pivot tables to filter data and provide counts.  Features such as Excel 2013’s pivot table ‘distinct count’ have been useful.
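For what it’s worth, ‘distinct count’ is just counting unique values per group; a quick Python sketch of the same calculation (the field names here are made up):

```python
from collections import defaultdict

def distinct_count(rows, group_key, value_key):
    """Distinct values per group, like a pivot table 'distinct count'."""
    seen = defaultdict(set)
    for row in rows:
        seen[row[group_key]].add(row[value_key])
    return {group: len(values) for group, values in seen.items()}

usage = [  # hypothetical access-log rows
    {"resource": "ezproxy", "user": "u1"},
    {"resource": "ezproxy", "user": "u1"},
    {"resource": "ezproxy", "user": "u2"},
    {"resource": "athens", "user": "u1"},
]
counts = distinct_count(usage, "resource", "user")
```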

One of the tasks I’ve been doing in Excel is to join two data sources together, e.g. joining counts of library use via ezproxy and via athens, or joining library use with data on student results.  I’d started mainly using VLOOKUP in Excel but have switched (on the recommendation of a colleague) to using INDEX/MATCH, as it seems to work much better (if you can get the syntax exactly right).
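The INDEX/MATCH pattern is essentially a lookup join on a shared ID.  A simplified Python sketch of the same left-join logic, using invented data rather than our real extracts:

```python
# Two hypothetical extracts keyed on an anonymised student ID.
ezproxy_counts = {"u1": 14, "u2": 3, "u3": 27}
results = {"u1": "pass", "u3": "distinction", "u4": "pass"}

# Left join: keep every ezproxy row, pulling in the result where one exists,
# like INDEX/MATCH returning #N/A for missing keys.
joined = {uid: (count, results.get(uid, "N/A"))
          for uid, count in ezproxy_counts.items()}
```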

The project team is starting to think that, as we learn more about SQL, we should try to do more of the data manipulation and counts directly in the SQL queries, as doing them in Excel can be really time-consuming.

SPSS
SPSS has been a completely new tool to me.  We’re using IBM SPSS Statistics version 21 to carry out the statistical analyses.  Again it has a steep learning curve, and I’m finding I need frequent recourse to some of the walk-throughs on sites such as Laerd Statistics (e.g. https://statistics.laerd.com/spss-tutorials/one-way-anova-using-spss-statistics.php).  But I’m slowly getting to grips with it, and as I get more familiar with it I can start to see more of its value.  Once you’ve got the data into the data table and organised properly it’s really quick to run correlation or variance tests, although that quickly raises questions about which test to use and why, and what the results mean.  I particularly like the output window, which tracks all your actions and shows any charts you’ve created or analyses you’ve run on the data.
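As a reminder to myself of what a one-way ANOVA is actually computing, here’s a hand-rolled sketch of the F statistic in plain Python (illustrative only; SPSS of course also reports degrees of freedom and a p-value):

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA, computed by hand."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # variation of the group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # variation of observations around their own group mean
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F means the groups differ far more between themselves than within themselves.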

What’s next?
The team is in the early stages of exploring the SAS system used for our institutional data warehouse.  Ultimately we’d want to get library use data into the institutional data warehouse and then query it alongside other institutional data directly from the warehouse.  SAS apparently has statistical analysis capabilities, but the learning curve seems fairly steep.  We’ve also wondered whether tools such as OpenRefine might be useful for cleaning up data but haven’t been able to explore that yet.  Similarly, I know we need tools to present and visualise the data findings; ultimately that might be met by an institutional SAS Visual Analytics tool.

 

Plans are worthless, but planning is everything. Dwight D. Eisenhower

I’ve always been intrigued by the difference between ‘plans’ and ‘planning’, and was taken by this quote from President Dwight D. Eisenhower.  Talking to the National Defense Executive Reserve Conference in 1957, he described how, when you are planning for an emergency, it isn’t going to happen in the way you are planning, so you throw your plans out and start again.  But, critically, planning is vital; in Eisenhower’s own words, “That is the reason it is so important to plan, to keep yourselves steeped in the character of the problem that you may one day be called upon to solve–or to help to solve.”  There’s a similar quote generally attributed to Winston Churchill (although I’ve not been able to find an actual source for it): “Plans are of little importance, but planning is essential”.

Many of these sorts of quotes seem to come from a military background, along the lines that no plan survives contact with reality.  But I think they also hold true for any project or activity.  Our plans will need to adapt to fit the circumstances and will, and must, change.  A plan is a document that outlines what you want to do, based on the state of your knowledge at a particular time, often before you have started the activity.  It might have some elements based on experience of doing the same or a similar thing before, so that you are undertaking a repeatable activity and have a greater degree of certainty about how to do X or how long Y will take.  But that often isn’t the case.  So it’s a starting point, your best guess about the activity.  You could think of a project as a journey, with the project plan as your itinerary.  You might set out with a set of times for this train or that bus, but you might find your train delayed or taking a different route, and so your plan changes.

So you may start with your destination and a worked-out plan of how to get there.  But, and this is where planning is important, you also need some ideas about contingencies, options or alternative routes in case things don’t quite work out how your plan said they should.  And this is the essence of why planning matters: it’s the process of thinking through what you are going to do in the activity.  You can think about the circumstances, the environment and the potential alternatives or contingencies in the event that something unexpected happens.

For me, I’m becoming more convinced that there’s a relationship between project length and complexity and the window, and level of detail, at which you can realistically plan, and how far in advance you can go.  At a high level you can plan where you want to get to, what you want to achieve and maybe how you’ll measure whether you’ve achieved it; you could characterise that as the destination.  But when it comes to the detail of anything that involves any level of complexity, newness or innovation, the window of certainty for a detailed project plan (the itinerary) gets shorter and shorter.  A high-level plan is valuable, but expect that the detail will change.  Shorter planning periods then seem to be more useful, becoming much more akin to the agile approach.

So when you look at your planned activity and resource at the start of the project and then compare them with the actual activity and resource, you’ll often find there’s a gap.  Things didn’t pan out how you expected at the start; well, they probably wouldn’t, and maybe shouldn’t.  Part way into the project you know much more than when you started.  As Donald Rumsfeld put it: “Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.”

As you go through your project those ‘unknown unknowns’ become known, even if in some projects and at some stages it’s akin to turning over stones only to find more stones underneath; but on your journey you build up a better picture and build better plans for the next cycle of activity.  (And if you really need to know the difference between planned and actuals, you can use MS Project to baseline your plan and then re-baseline it to track how the plan has changed over time.)

Two interesting pieces of news came out yesterday: the sale of 3M Library Systems to Bibliotheca http://www.bibliotheca.com, and then the news that ProQuest is buying Ex Libris.  For an industry take on the latter, look at http://www.sr.ithaka.org/blog/what-are-the-larger-implications-of-proquests-acquisition-of-exlibris/

From the comments on Twitter yesterday it was a big surprise to people, but it seems to make some sense.  It is a sector that has always gone through major shifts and consolidations, and library systems vendors seem to change hands frequently.  Have a look at Marshall Breeding’s graphic of the various LMS vendors over the years to see that change is pretty much a constant feature. http://librarytechnology.org/mergers/

There are some big crossovers in the product range, especially around discovery systems and the underlying knowledge bases.  Building and maintaining those vast metadata indexes must be a significant undertaking, and maybe we will see some consolidation.  Primo and Summon fed from the same knowledge base in the future, maybe?

Does it help with the conundrum of getting all the metadata in all the knowledge bases?  Maybe it puts Proquest/ExLibris in a place where they have their own metadata to trade?  But maybe it also opens up another competitive front.

It will be interesting to see what the medium-term impact will be on plans and roadmaps.  Will products start to merge, and will there be less choice in the marketplace when libraries come round to choosing future systems?

 

 

A fascinating couple of articles over the last few days about what is happening with ebook sales (in the US): two pieces from the Stratechery site (via @lorcanD and @aarontay), Disconfirming ebooks and Are ebooks declining, or just the publishers.  The first refers to an article in the NY Times reporting on ebook sales plateauing, while the second draws on a more detailed piece of work from Author Earnings analysing more data.  The latter concludes that it is less a case of ebook sales plateauing and more that the market share of the big publishers is declining (postulating that price increases might play a part).  Overall the research seems to show growth in independent and self-publishing but fairly low levels of growth overall.  The figures mostly concern market share rather than hard-and-fast sales per se.  But it is interesting nonetheless to see how market share is moving away from ‘traditional’ print publishers.

The Stratechery articles are particularly interesting on the way ebooks fit the disruptive model of new digital innovation challenging traditional industries, what is termed there ‘Aggregation theory’.  [As an aside, it’s interesting from the Author Earnings article to note that many of the new ebooks from independent or self-publishers don’t have ISBNs.  What does that imply for the longer-term tracking of this type of material?  Already I suspect they are hard for libraries to acquire and just don’t get surfaced in the library acquisitions sphere.  Does it mean these titles are likely to become much more ephemeral?]

The conclusion of the second Stratechery article I find particularly interesting: essentially, ebooks aren’t revolutionising the publishing industry in terms of the form they take.  They are simply a digital form of the printed item.  Often they add little extra by being digital; maybe they are easier to acquire and store, but in price terms they often aren’t much cheaper than the printed version.  Amazon Kindle does offer some extra features, but I’ve never been sure how far readers take them up.  Unlike music, you aren’t seeing books being disaggregated into component parts or chapters (although it’s a little ironic, considering that some of Charles Dickens’ early works, such as The Pickwick Papers, were published in instalments as part-works).  But I’d contend that the album in music isn’t quite the same as, say, the novel.  Music albums seem like a convenient packaging (and pricing) of a collection of music tracks for a physical format (with the possible exception of ‘concept’ albums), whereas most readers wouldn’t want to buy their novels in parts.  There’s probably more of a correlation between albums/tracks and journals/articles, in that tracks and articles lend themselves in a digital world to being the lowest-level, consumable package of material.

But I can’t help wondering why audiobooks don’t seem to have disrupted the industry either.  Audible offers audiobooks in a similar way to Netflix but isn’t changing the book industry in the way the TV and movie industries are being changed.  That implies to me that there’s something beyond the current ‘book’ offering (or that the ‘book’ actually is a much more consumable, durable package of content than other media).  Does a digital ‘book’ have to be something quite different that draws on the advantages of being digital, linking to or incorporating maps, images, videos or sound, or some form of social interaction that could never be incorporated in a physical form?  Or are disaggregated books essentially what a blog is (modularization, as suggested on Stratechery)?  Is the hybrid digital book the game-changer?  [There are already examples of extra material being published online to support novels; see Mark Watson’s Hotel Alpha stories building on his novel Hotel Alpha, for example.]  You could see online retailers as disrupting the bookselling industry as a first step, and we’re perhaps only in the early stages of seeing how Amazon will ultimately disrupt the publishing industry.  Perhaps the data from the Author Earnings report points to the signs of change among ebook publishers.

One of the interesting features of our new library game OpenTree, for me, is that you can engage with it in a few different ways.  At one level it’s a game, with points and badges for interacting with library content, resources and webpages.  It’s also social, so you can connect with other people and review and share resources.

But, as a user you can choose the extent that you want to share.  So you can choose to share your activity with all users in OpenTree, or restrict it so only your friends can see your activity, or choose to keep your activity private.  You can also choose whether or not things you highlight are made public.

So you’d wonder what value you’d get in using it if you make your activity entirely private.  But you can use it as a way of tracking which library resources you are using.  And you can organise them by tagging them and writing notes about them so you’ve got a record of the resources you used for a particular assignment.  You might want to keep your activity private if you’re writing a paper and don’t want to share your sources or if you aren’t so keen on social aspects.

If you share your activities with friends and maybe connect with people studying the same module as you, then you could see some value in sharing useful resources with fellow students you might not meet otherwise.  In a distance-learning institution with potentially hundreds of students studying your module, students might meet a few students in local tutorials or on module forums but might never connect with most people following the same pathway as themselves.

And some people will be happy to share, will want to get engaged with all the social aspects and the gaming aspects of OpenTree.  It will be really interesting to see how users get to grips with OpenTree and what they make of it and to hear how people are using it.

It will be particularly interesting to see how our users’ engagement might differ from the versions at the bricks-and-mortar universities of Huddersfield, Glasgow and Manchester.  OpenTree’s focus is online and digital, so it doesn’t include loans and library visits, and our users are often older, studying part-time and not campus-based.

In early feedback we’re already seeing a sense that some of the game aspects, such as the subject leaderboard, are of less interest than expected.  Maybe that reflects students being much more focused on outcomes, although research seems to suggest (Tomlinson 2014, ‘Exploring the impact of policy changes on students’ attitudes and approaches to learning in higher education’, HEA) that this isn’t just a factor for part-time and distance-learning students as a result of increased university fees and student loans.  It might also be that because we haven’t gone for an individual leaderboard there’s less personal investment, or just that users aren’t sure what it represents.

 

 

The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 11,000 times in 2014. If it were a concert at Sydney Opera House, it would take about 4 sold-out performances for that many people to see it.

Click here to see the complete report.

I’m always looking to find out about the tools and techniques that people are using to improve their websites, and particularly how they go about testing the user experience (or UX) to make sure they can make steady improvements to their site.

So I’m a regular follower of some of the work going on in academic libraries in the US (e.g. Scott Young talking about A/B testing and experiments at Montana, and Matt Reidsma talking about Holistic UX).  It was particularly useful to find out about the process that led to the three headings on the home page of the Montana State University Library website, and the stages they went through before settling on Find, Request and Services.  A step-by-step description showing the tools and techniques is a really valuable demonstration of how they went about the process and made their decision.  It’s interesting to me how frequently libraries fail to pick the right words to describe their services, words that make sense to their users.  But it’s really good to see an approach that largely gets users to decide what works by testing, rather than by asking users what they prefer.

Something else I came across the other week was the term ‘guerilla testing’ applied to testing the usability of websites (I think that probably came from the session on ‘guerilla research’ that Martin Weller and Tony Hirst ran the other week, which I caught up with via their blog posts/recording).  That led on to ‘Guerilla testing’ in the Government Service Design Manual (there’s a slight irony for me in guerilla testing being documented, in some detail, in a design manual), but the way it talks through the process and its strengths and weaknesses is really useful, and it made me think about the contrast between that approach and the fairly careful and deliberate approach we’ve been taking with our work over the last couple of months.  Some things to think about.

Reflections on our approach
It’s good to get an illustration from Montana of the value of the A/B testing approach.  It’s a well-evidenced, standard approach to web usability, but it is something we’ve found difficult to use in a live environment, as it makes our helpdesk people anxious that they aren’t clear which version of the website customers might be seeing.  So we’ve tended to use step-by-step iterations rather than straightforward A/B testing.  But something to revisit, I think.
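The arithmetic behind judging an A/B test is simple enough to sketch: a two-proportion z-test comparing task-success rates for two page variants (the counts below are invented for illustration):

```python
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic and two-sided p-value for comparing two success rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Invented counts: variant A succeeded on 60/100 tasks, variant B on 40/100.
z, p = two_proportion_z(60, 100, 40, 100)
```

A small p-value suggests the difference between the variants is unlikely to be chance.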

The piece of work that we’re concentrating on at the moment is to look at student needs from library search.  We know it’s a pain point for students, we know it’s not ‘like Google’ and isn’t as intuitive as they feel it should be.  So we’re trying to gain a better understanding of what we could do to make it a better experience (and what we shouldn’t do).  So we’re working with a panel of students who want to work with the library to help create better services.

The first round tested half a dozen typical library resource search scenarios against eight different library search tools (some ours and some from elsewhere) with around twenty different users.  We did all our testing as remote 1:1 sessions using TeamViewer software (although you could probably use Skype or a number of alternatives) and were able to record the sessions and have observers/note-takers.  We assessed the success rate for each scenario against each tool and also measured the average time it took to complete each task with each tool (or the time before people gave up).  These are giving us a good idea of what works and what doesn’t.
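The bookkeeping for that assessment is straightforward enough; a sketch in Python of how per-tool success rates and average task times might be tallied (the session records here are hypothetical, not our real panel data):

```python
from collections import defaultdict
from statistics import mean

def summarise(sessions):
    """Per-tool success rate and mean task time from (tool, ok, seconds) records."""
    by_tool = defaultdict(list)
    for tool, ok, seconds in sessions:
        by_tool[tool].append((ok, seconds))
    return {tool: {"success_rate": sum(ok for ok, _ in rows) / len(rows),
                   "mean_seconds": mean(s for _, s in rows)}
            for tool, rows in by_tool.items()}

# Invented observations: one failed task counts its give-up time.
sessions = [("tool_a", True, 30), ("tool_a", False, 90), ("tool_b", True, 40)]
stats = summarise(sessions)
```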

For the second round a series of wireframes were created using Balsamiq to test different variants of search boxes and results pages.  So we ran a further set of tests again with the panel and again remotely.  We’ve now got a pretty good idea of some of the things that look like they will work so have started to prototype a real search tool.  We’re now going to be doing a series of iterative development cycles, testing tools with students and then refining them.  That should greatly improve our understanding of what students want from library search and allow us to experiment with how we can build the features they want.

OK, so it’s the time of year to reflect back on the last year and look forward to the new one.

Blogging
I’ve definitely blogged less (24 posts in 2013, compared with 37 in 2012 and 50 in 2011).  [Mind you, the ‘death of blogging’ has been announced, and there seem to be fewer library bloggers than in the past, so maybe blogging less just reflects a general trend.]  Comments about blogging suggest that Tumblr, Twitter or Snapchat are maybe taking people’s attention (both bloggers’ and readers’) away from blogs.  But I’m not particularly ‘publishing’ through other channels, other than occasional tweets, so that isn’t the reason I blog less.  There has been a lot going on, but that’s probably not greatly different from previous years.  I think I’ve probably been to fewer conferences and seminars, particularly internal seminars, so that has been one area where I’ve not had as much to blog about.

To blog about something or not to blog about it
I’ve been more conscious of not blogging about some things that in previous years I probably would have covered.  I don’t think I blogged about the Future of Technology in Education conference this year, although I have done in the past.  Not because it wasn’t interesting (it was), but perhaps from a sense that I’ve blogged about it before and might just be repeating myself.  With the exception of posts about website search and activity data, I’ve not blogged much about the work I’ve been doing.  So I’ve blogged very little about the digital library work, although it (and the STELLAR project) was a big part of the interesting stuff that has been going on.

Thinking about the year ahead
I’ve never been someone who sets out predictions or new year resolutions.  I’ve never been convinced that you can actually predict (and plan) very far ahead in detail without too many variables fundamentally changing those plans.  There’s a quote attributed to various people along the lines that ‘no plan survives contact with the enemy’, and I’d agree with that sentiment.  However much we plan, we are always working with an imperfect view of the world.  Circumstances change, priorities vary, and you have to adapt.  Thinking back to FOTE 2013, it was certainly interesting to hear BT’s futurologist Nicola Millard describe her main interest as the near future, being more a ‘soon-ologist’ than a futurologist.

What interests (intrigues, perhaps) me more is less planning than ‘shaping’ a future, so more change management than project management, I suppose.  But I think it is more than that: how do the people who carve out a new ‘reality’ go about making that change happen?  Maybe it is about realising a ‘vision’, but assembling a ‘vision’ is very much the easy part of the process.  Getting buy-in to a vision does seem to be something that we struggle with in a library setting.

On with 2014
Change management is high on the list for this year.  We’ve done a certain amount of the ‘visioning’ to get buy-in to funding a change project.  So this year we have work to do to procure a complete suite of new library systems (the first time here for 12 years or so, I think), in a project called ‘Library Futures’ that also includes some research into student needs from library search and the construction of a ‘digital skills passport’.  I’ve also got continuing work on digital libraries/archives as we move that work from development to live, alongside work on activity data, our library website and particularly on integrating library content much more into a better student experience.  So hopefully some interesting things to blog about.  And hopefully a few new pictures to brighten up the blog (starting with a nice flower picture from Craster in the summer).

One of the differences between working in an academic library and a public library is that writing an article for a proper ‘academic’ journal becomes more likely.  It becomes something you might do, whereas in the past it wouldn’t have been something I would particularly have considered.  Articles for ‘trade’ publications, maybe, possibly in one of the library technology journals.  But not something particularly high up the list of things to do.

As an aside, I’ve felt that the importance of journals (or serials) is one of the biggest differences between public and academic libraries.  The whole journal infrastructure (both its technical and publishing aspects) was never particularly prominent on the agenda of a public library.  It’s interesting, though, to find that there’s now a pilot to provide public walk-in access to academic journals through public libraries.  I will be fascinated to see how that pilot turns out, as my experience in public libraries was that we rarely had any demand for widespread academic journal access beyond the odd inter-library loan article request.  So it will be interesting to see what demand they see and how it might be promoted to build up an audience for this material.  My suspicion has long been that the lack of demand was because library users simply didn’t expect that it might be possible.

Going through the publication process for an article (even as a co-author) has been a useful experience in helping me understand more about the publishing process that academics go through as part of their professional life.  Facing the practical decision of whether to go open access and pay an article processing charge (APC), or to publish in a subscription journal (a choice between author-pays and customer-pays), throws a sharp focus on the practical implications of Green and Gold Open Access.  Getting a copy of an early version of the document into the institutional repository was another task that had to be included.

It’s been interesting to see how the focus after publication swiftly turns to a list of things to do to promote the article, such as setting up your identity on Google Scholar and linking your publication to it (fascinating for me in that it surfaced a project report as well as a long-forgotten dissertation listed in WorldCat).  There are also things like establishing an ORCID iD (which put me in mind of a LinkedIn for academics, for some reason) and linking your publication to that.  Although I’ve long been aware of the importance of citations (and I work at one of the few UK academic libraries with a bibliometrician post), having a list of things to do to promote your article really brings home how critical citations are to an academic’s reputation, and how much their career depends on their papers being cited.
