
Website developments
Most websites go through a process of continued development and change.  I'd argue that they are never really finished; it's just that every so often they get to the stage where you want to start again from a clean (or clean'ish) slate.  Often the motivation is as much a feeling of 'we can't do what we want to do with the site now', mixed with 'there are other sites out there that look better, or have better navigation or features'.  I'd also say that there's an element of website 'fashion' that makes sites look out-of-date even when they are perfectly functional, and that can drive user perceptions of a site.  So refreshing a site at regular intervals is something I'd want to do.  In between regular site redevelopments there will always be a myriad of minor or even major changes to the site, and one of the challenges of website management is to keep track of (and manage) these changes.  To do that we've developed a process around a Website Improvement Plan.

The Website Improvement Plan
In essence the Website Improvement Plan is a document that outlines the steps you are going to take to address issues with the site, and it works to shape the development of the site over a period of time.  We only use the plan to record significant changes to the site – so we wouldn't record minor content changes, but we would record an activity like adding a completely new section of content with several pages.  There is quite a variation in the scope of the activities – they can range from implementing a new search system to adding a rolling news feature to the home page, for example.  We use the plan as a rolling document across each year, updating it as we go along.  We have an approval process to get new items into the plan, and we record the following information about each item:

  • a unique reference number for each item
  • a description of the item
  • the date it was added to the plan
  • details of the resources needed to deliver the item and a note if it forms part of a project
  • the owner of the item – the person responsible for the item
  • the date the change is required
  • a category – we split tasks into categories such as Content, Infrastructure and User Experience
  • details of when the item was approved
  • a status column – updated monthly that is used to track progress of the task

We then colour-code each item to make it easy to check which items are at which stage: pending items are unhighlighted, items approved but not yet started are highlighted yellow, in-progress items green and completed items grey, as seen below.

Website Improvement Plan
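To make the structure concrete, here's a minimal sketch in Python (my choice of language – the real plan is just a colour-coded document, and all the field and type names here are illustrative, not part of our actual system) of how a plan item and its status colour-coding could be represented:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class Status(Enum):
    """Mirrors the plan's colour-coding; pending items are unhighlighted."""
    PENDING = "unhighlighted"
    APPROVED = "yellow"        # approved but not yet started
    IN_PROGRESS = "green"
    COMPLETED = "grey"

@dataclass
class PlanItem:
    """One item in the Website Improvement Plan (field names illustrative)."""
    ref: str                         # unique reference number
    description: str
    date_added: date                 # date the item was added to the plan
    resources: str                   # resources needed; note if part of a project
    owner: str                       # person responsible for the item
    date_required: date              # date the change is required
    category: str                    # e.g. "Content", "Infrastructure", "User Experience"
    date_approved: Optional[date] = None
    status: Status = Status.PENDING  # updated monthly to track progress
```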
Pros and Cons of this approach
The processes around approving items to go into the plan and reviewing them (done monthly at a Web Quality Improvement Group meeting) help to focus people's attention on the work taking place around the website and let people spot obvious gaps in what is planned.  We feed in activities from various projects and strategic plans to take care of the higher-level activities.  It's an open document, so it is available for all library staff to view, and it's reported to various library management groups.  As a tool to bring together everything we are doing it seems to work.

On the negative side – there is some added bureaucracy around managing the plan and some potential delays in making sure that people have signed off developments before they take place.    There is a danger that you can end up managing the plan rather than the service.

Other thoughts
We keep version control over the document so we can compare different versions of it.  This means that we can track the number of tasks that are completed or in progress, and how that changes across the year, as a fairly crude metric of website development.  Within the unit we also have activity data from staff that we can use to identify the cost of website services.  For us the process seems to work reasonably well for day-to-day website management.
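Building on the illustrative PlanItem sketch above, the crude metric is essentially a count of items by status for each saved version of the plan – something like:

```python
from collections import Counter

def status_summary(items):
    """Count plan items by status – one snapshot of the 'crude metric'."""
    return Counter(item.status.name for item in items)

# Comparing the summaries for two saved versions of the plan (say the
# January and June versions) shows how many items have moved to
# IN_PROGRESS or COMPLETED across the year.
```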

I was intrigued earlier this week, when asked to report back on a few things that were going on, to find that the response from a couple of people was concern that they hadn't heard about some of these things.  As I recall, it was the news that http://data.open.ac.uk had released course materials as linked data.  That information came out via twitter from various people associated with the Lucero project late on Friday, just as the Open Bibliographic Data JISC website came out in the same way (via tweets from people, often retweeted) late last week.

I must admit to being surprised at the reaction, and thinking about it there was a subtext of 'why isn't this information being released properly, through proper channels?'.  But the more I think about how I use twitter – particularly to find out what is going on in universities, university libraries, JISC and the general HE domain – the clearer it is that this is increasingly how I find out what is happening.  People I follow tweet links to new things they think will be interesting, often because it interested them, or tweet what they are working on or have done.  Twitter gives me that information much more than emails or mailing lists these days.  I don't use Facebook (which I guess I've pigeon-holed as personal rather than professional), and although I'm on LinkedIn that's more 'professional' than informational.

But as these types of activities move to micro-blogging there are people who are put off twitter by the 'what I had for lunch' tweets (and worse).  I've always thought that a strange approach – like saying that I'll never watch TV because ITV broadcasts 'X Factor', or never use a phone because people use it to talk about trivial things.  Twitter is a communications tool used by humans, so all human life is there, just as it is everywhere else.  If it works for you, fine; if not, that's also fine.  But if you want to know what's happening now and what matters to people who are doing interesting things, then it's a very useful tool.

Two things this week made me think about how the HE library landscape might change in the next few years.  One was the SCONUL Shared Services event; the other was Lorcan Dempsey's keynote presentation from the emtacl conference in Norway, which is now on the web (http://www.ntnu.no/ub/emtacl/?programme).

The connection between the two was that both saw a potential future for libraries in the networked environment where shared services had a part to play.  One comment sometimes made in relation to shared services in HE is that things are different because HE institutions compete with each other.  I'd question how much of an institution's competitive edge the library represents, and I'd argue that the distinctive elements are more likely to lie in customer service, expertise or collection quality than in systems.

Amongst his remarks on the network effect and how much it will drive change, Lorcan pointed to examples from the commercial world where companies like Netflix buy in services from Amazon – a competitor – because Amazon's expertise represents the best available.  In the library world Lorcan saw much duplication of activity: the "redundant, complex systems apparatus will have to be simplified", with "a move to shared infrastructure in the cloud".

The SCONUL Shared Services event covered a model for how Shared Services might work for HE libraries.  The day was spent talking about the Shared Services study Business Case that was submitted to Hefce towards the end of 2009 requesting pathfinder funding. (http://helibtech.com/file/view/091204%20SCONUL%20Shared%20Service%20-%20for%20distribution.pdf)

SCONUL Shared Services domain model

In brief the report proposed a three phase set of developments:

1.  the creation of a national-level managed ERM system that would manage national-level subscriptions to resources and a national ERM licensing service – rather than each HE library having its own ERM processes

2. the creation of a Discovery to Delivery service that would combine the national ERM entitlement data with authentication and search services at a national level

3. removing duplication with library management systems, using the national-level authentication infrastructure and inter-operating with institutional systems

The expectation is that savings would come from reducing duplication in licensing and rights management, and from cheaper e-licence deals negotiated as national subscriptions (going beyond the opt-in model).  Individual HE libraries would do less rights and licensing work, and there would be savings on local ERM systems, licensing staff, and search/LMS costs and support staff.

Unfortunately Hefce have not approved the request for pathfinder money, so SCONUL are looking at what options there are to move this forward.  There was a feeling at the meeting that the ERM element could be an achievable step, although a lot of detail still needs to be sorted out.  It was suggested that progress could be made if enough HEIs were prepared to contribute a small amount to getting it off the ground.  The general view was that we should be doing this, though probably not as a single large project but as a step-by-step process.  JISC and SCONUL are keen to move it on but it isn't all that clear how it might be funded.

Thinking about the proposals, a few things strike me:

  • I wonder about the realism of the timetable – mainly in relation to whether this can happen quickly enough given the likely scenario of major cuts in funding for the sector.
  • I must admit to a slight sense of déjà-vu.  Having sat through a lot of the MLA Stock Services review in public libraries, and seen that go from proposals for shared services to something that simply ran into the ground, I'm interested to see how the HE library sector tackles something like this and whether it has any more success in instituting such a major network-scale change.
  • Others are a lot more qualified to comment on the technical practicality of some of the developments but I find it strange that while there are several examples of shared library management systems (LLC, SELMS for example), there are few in HE these days.

It will be interesting to see what happens over the next few months and what opportunities there are to get involved.

I had the opportunity to go and listen to Martin Weller (@mweller on twitter) and Nick Pearce (@drnickpearce) talking about their work on Digital Scholarship this morning.  I'd put together some thoughts last year in an earlier blog post – Digital scholarship and the challenges for libraries – so it was good to get an update on how the work is moving forward.

Digital Scholarship context
Nick Pearce set the context for Digital Scholarship with a short presentation – available on slideshare here.  Looking at technology first, he set out the view that books and language can be viewed as 'technologies'.  Books as a technology wasn't too contentious for a room full of library people.  Language as a technology is a bit more of a stretch, but if you view it as a tool to enable change in a community then it's a good analogy.  His comment that 'old technologies often persist – for good reasons' was particularly interesting; the classic example is radio continuing alongside TV.  But I'd wonder whether these two technologies fulfil exactly the same role or whether they have established different roles for themselves.

Turning to digital technologies, he pointed to the large number and wide variety of services, using Ludwig Gatzke's image of the incredible range of web 2.0 services as an illustration of how this year's favourite technology is next year's history.  Many of the services shown in the image no longer exist, and the list doesn't include services such as twitter that are currently very popular.

That points to a real risk: you choose to adopt a technology platform that turns out to be transient, or you find that 'a year later everyone has moved on'.

Nick then looked at some of the key features of the digital environment and suggested that only a small number of users were actually creating content (which gives me pause for thought given the enormous growth that sites like YouTube are experiencing with user-generated content), and that you are relying on sites that are in perpetual evolution, effectively constantly in beta-testing.

Technologies, issues and challenges
Turning to scholarship, and using Boyer's 'Scholarship Reconsidered' model, we briefly looked at what technologies, issues and challenges might present themselves for the four elements of the model:

  • Discovery
  • Integration
  • Application
  • Teaching

Ideas that came up included the ever-increasing amount of data (the data deluge), challenges in economics and funding, and issues around social networking.  Nick went on to give some examples of Open Data (e.g. datacite.org), Open Publishing (the Open Access movement), Open Engagement through blogs and twitter feeds from people such as Richard Dawkins, and Open Education (OpenLearn and OERs).

“the Open Scholar is someone who makes their intellectual projects and processes digitally visible and who invites and encourages ongoing criticism of their work and secondary uses of any or all parts of it–at any stage of its development.”  Academic Evolution

Digital Scholarship work
Martin Weller then took us through the work that is being carried out to investigate digital scholarship.  This comprises three elements:

  • promote digital scholarship
  • work on recognition 
  • research current practice

It was interesting to hear of the work to create a new digital scholarship hub, DISCO, that is being launched shortly, and good to get a brief preview of it.  Martin talked about his aim to formulate some kind of 'magical metric to measure digital scholarship', and it would be interesting to see how such a scoring system could be used – take the scorecard along to your appraisal with the results?  Aims included trying to decide what represents good public engagement and working on case studies that academics could use as part of their promotion case.

Martin briefly covered some of the issues around digital scholarship, including rights, skills, plagiarism, time and quality/depth.  We then spent a little time looking at issues, benefits and what we'd like to change.  The sorts of things our group talked about included: difficulties in getting people to engage; lack of awareness of what the technology can do; and concerns about quality when comparing peer-reviewed journals with blogs, for example.  For the library we thought there was a fit with the library's increasing focus on electronic rather than print resources, but there are challenges around managing and curating access to material in social networking environments that may be ephemeral.  The issue of persistent identifiers for this type of material is a real concern.

Finally, in an all too brief session, Martin flagged up the JISC Digital Scholarship ‘Advanced Technologies for Research’ event on 10 March 2010.

Reflections
It was interesting that the presenters had slightly different perspectives on Digital Scholarship.  It would have been good to have more time to talk through some of the discussions and gather feedback, but time was limited.  It is fascinating to hear at first hand some of the work taking place to map out equivalencies between traditional academic practice and potentially new academic practices.  It would be good to hear some of the counter-arguments as to why some people don't think that blogs and suchlike are equivalent to traditional practice.

For libraries the issues are especially around discovery and providing access to the material.  A colleague made the point that librarians can’t evaluate the content in a blog as they don’t have the subject knowledge.  At present evaluation of resources is as much down to evaluating the quality of the publishing medium, e.g. it’s in Nature or a reputable resource so it should be appropriate.  With blogs librarians don’t have that context to use.

And the other big issue for libraries is persistence of links.  A whole technology industry has grown up around these problems (e.g. SFX, OpenURLs, DOIs), and work will be needed to work out the implications of content migrating from a few hundred aggregated collections of peer-reviewed academic journals to many thousands of individual resources in the cloud.  But maybe this is where technologies such as Mendeley come in?
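As a small illustration of what a persistent identifier buys you, a DOI decouples the citation from the publisher's current URL: the doi.org resolver follows the mapping for you.  Here's a minimal sketch in Python using the third-party requests library (the DOI in the comment is a placeholder, not a real identifier):

```python
import requests  # third-party: pip install requests

def resolve_doi(doi: str) -> str:
    """Follow the doi.org redirect to wherever the content currently lives.

    The DOI stays stable even if the publisher reorganises its site;
    only the resolver's mapping needs updating, so citations don't rot.
    """
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
    resp.raise_for_status()
    return resp.url

# Placeholder DOI for illustration only:
# print(resolve_doi("10.1234/example-doi"))
```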

I recently had the opportunity to go to the European Conference on Digital Libraries.   This three-day conference, held in Corfu this year, has been running for a number of years and is of interest to a range of different disciplines, from computer scientists to librarians and archivists.  This year there were around 400 attendees, mainly from Europe but with a sprinkling of attendees from Asia and the Americas.   With two keynote speakers, 30 papers and a range of posters and demonstrations the conference covered a good deal of ground and I’m going to put down a few of my impressions.

Overall impressions
ECDL is quite scientific and academic in approach.  Although the conference sub-title was ‘Digital Societies’ most of the papers approached the subject of Digital Libraries from quite a narrow technical view, investigating areas such as DL software performance, semantic search techniques or the impact of data-loss on images.  A significant number of the papers were from Doctoral Students and many others outlined the latest state of research by groups of researchers.

‘The days are past when scholarly authority alone determines what is saved, learned and used’
The keynote papers from Gary Marchionini and Peter Buneman provided two quite different perspectives on Digital Libraries.  Marchionini's paper flagged up how social networking is likely to affect future Digital Libraries, whilst Buneman concentrated mainly on the need for Digital Libraries to reach out to research communities and offer them a way of safeguarding their research materials.

Both saw the long-term storage cost of DLs as a significant issue and identified preservation as a key challenge for the future.  Marchionini drew out the differences between curated and community DLs, whilst Buneman concentrated on the importance of curating research material.

Other conference papers
The conference also had a number of interesting sessions, including a paper on the performance of DL systems – Fedora, DSpace and Greenstone – that built large collections of material and stress-tested the systems to analyse their performance.  Some of the papers covered quite complex technical solutions, models or methods, such as using visualisation or sound cues to aid search and retrieval.

Final thoughts
Although the conference sub-title was 'Digital Societies', on reflection few papers other than the keynotes really got to grips with much beyond the technical issues surrounding Digital Libraries.  The impact of Web 2.0 was touched upon, but much of the content was still about building and managing monolithic database structures to contain vast repositories of data.  Whilst that is a challenging activity in itself, the domain perhaps needs to move to a wider discussion of the impact of 'Digital Societies'.  It is possibly with this in mind that there are moves planned to change the name of the conference from 2011.

Feedback from the JISC 'Modelling the Library Domain' workshop in June is now available on the web at http://librarydomainmodel.jiscinvolve.org/.  There are a few interesting comments amongst the feedback about the definitions.

There are also some good suggestions about next steps, particularly about providing more explanation, clarification and exemplars to make it easier for people to understand the applicability of the Model.
