
I had the opportunity to go and listen to Martin Weller (@mweller on twitter) and Nick Pearce (@drnickpearce) talking about their work on Digital Scholarship this morning.  I’d put together some thoughts last year on an earlier blog post – Digital scholarship and the challenges for libraries – so it was good to get an update on how the work is moving forward.

Digital Scholarship context
Nick Pearce set the context for Digital Scholarship with a short presentation, available on Slideshare here. Looking at technology first, he set out the view that books and language can be viewed as 'technologies'. Books as a technology wasn't too contentious for a room full of library people. Language as a technology is a bit more of a stretch, but if you view it as a tool to enable change in a community then it's a good analogy. His comment that 'old technologies often persist – for good reasons' was particularly interesting; the classic example is radio continuing alongside TV. But I'd wonder whether these two technologies are fulfilling exactly the same role, or whether they have established different roles for themselves.

Turning to digital technologies, he pointed to the large number and wide variety of services, using Ludwig Gatzke's image of the incredible range of Web 2.0 services as an illustration of how this year's favourite technology is next year's history. Many of the services shown in the image no longer exist, and the list doesn't show services such as Twitter that are currently very popular.

That points to a real risk: you choose to adopt a technology platform that turns out to be transient, or you find that 'a year later everyone has moved on'.

Nick then looked at some of the key features of the digital environment and suggested that only a small number of users were actually creating content (which gives me pause for thought given the enormous growth that sites like YouTube are experiencing with user-generated content), and that you are relying on sites that are in perpetual evolution, effectively constantly in beta-testing.

 

Technologies, issues and challenges
Turning to scholarship and using Boyer’s “Scholarship reconsidered” model we started to briefly look at what technologies, issues and challenges might present themselves for the four elements of Boyer’s model.

  • Discovery
  • Integration
  • Application
  • Teaching

Ideas that came up include the ever-increasing amount of data (data deluge), challenges in economics and funding, and issues around social networking.   Nick went on to give some examples of Open Data (e.g. datacite.org), Open Publishing (the Open access movement), Open engagement through blogs and twitter feeds from people such as Richard Dawkins, and Open education (Open Learn and OERs).

“the Open Scholar is someone who makes their intellectual projects and processes digitally visible and who invites and encourages ongoing criticism of their work and secondary uses of any or all parts of it–at any stage of its development.”  Academic Evolution

Digital Scholarship work
Martin Weller then took us through the work that is being carried out to investigate digital scholarship.  This comprises three elements:

  • promote digital scholarship
  • work on recognition 
  • research current practice

It was interesting to hear of the work to create a new digital scholarship hub, DISCO, that is being launched shortly, and good to get a brief preview of it. Martin talked about his aim to formulate some kind of 'magical metric to measure digital scholarship', and it would be interesting to see how this sort of scoring system could be used: would you take the scorecard along to your appraisal with the results? Aims included trying to decide what represents good public engagement and working on case studies that academics could use as part of their promotion case.

Martin briefly covered some of the issues around digital scholarship, including rights, skills, plagiarism, time and quality/depth. We then spent a little time looking at issues, benefits and what we'd like to change. The sorts of things our group talked about included the difficulty of getting people to engage, the lack of awareness of what the technology can do, and concerns about quality when comparing peer-reviewed journals with blogs, for example. For the library we thought there was a fit with the library increasingly focusing on electronic rather than print resources, but there are challenges around managing and curating access to material in social networking environments that may be ephemeral. The issue of persistent identifiers for this type of material is a real concern.

Finally, in an all too brief session, Martin flagged up the JISC Digital Scholarship ‘Advanced Technologies for Research’ event on 10 March 2010.

Reflections
It was interesting that the presenters had slightly different perspectives on Digital Scholarship.   It would have been good to have a bit more time to talk through some of the discussions and have more feedback, but time was a bit limited.  It is fascinating to hear at first hand some of the work that is taking place to map out equivalencies between traditional academic practice and potentially new academic practices.  It would be good to get some of the counter-arguments as to why some people don’t think that blogs and suchlike are equivalent to traditional practice.

For libraries the issues are especially around discovery and providing access to the material. A colleague made the point that librarians can't evaluate the content in a blog as they don't have the subject knowledge. At present, evaluation of resources comes down as much to evaluating the quality of the publishing medium: it's in Nature or another reputable source, so it should be appropriate. With blogs, librarians don't have that context to use.

And the other big issue for libraries is persistence of links. A whole technology industry has grown up around these problems (SFX, OpenURLs, DOIs and so on), and work is going to be needed to work out the implications of content migrating from a few hundred aggregated collections of peer-reviewed academic journals to many thousands of individual resources in the cloud. But maybe this is where technologies such as Mendeley come in?
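As a small illustration of what a persistent identifier buys you, the Python sketch below resolves a DOI through the doi.org resolver to whatever URL the publisher currently maintains. The user-agent string is made up, the example DOI is simply the one for the DOI Handbook, and real link checking would need proper error handling and would hit publisher access restrictions that this ignores.

```python
import urllib.request

def resolve_doi(doi: str) -> str:
    """Follow the doi.org redirect to find the current landing page for a DOI."""
    request = urllib.request.Request(
        f"https://doi.org/{doi}",
        method="HEAD",  # we only want the redirect target, not the page body
        headers={"User-Agent": "link-persistence-sketch/0.1"},  # placeholder UA
    )
    with urllib.request.urlopen(request) as response:
        # urllib follows redirects, so geturl() is wherever the DOI points today
        return response.geturl()

if __name__ == "__main__":
    # Example DOI (the DOI Handbook); swap in any identifier you want to check.
    print(resolve_doi("10.1000/182"))
```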

Lorcan Dempsey recently shared a link to a presentation from University College Dublin (http://ow.ly/pyM0) about their experiences of using a consultancy to help develop a vision for their library website. The presentation is an interesting reflection on the process and some of the motivations behind why many libraries are grappling with the basic question ‘What should we do about our website?’

UCD identify feedback from users, the growth of their site, Web 2.0 and the increasingly wide range of library online presences as leading them to the realisation that they needed a strategic view of the library's online platform. The reasons behind using consultancy services include fairly common ones such as gaining access to expertise and a fresh perspective. It would be particularly interesting to know, though, whether the consultants had any prior library website experience and what selection criteria were used. For anything like this the choice of selection criteria is critical for the tender: you have to make sure the criteria reflect your priorities and that there is sufficient granularity and weighting to let you shortlist and select appropriate suppliers (a rough sketch of weighted scoring follows below).
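By way of a toy illustration of the granularity and weighting point, this sketch computes weighted totals for two invented suppliers against invented criteria; the criteria, weights and scores are arbitrary, and real tender evaluation frameworks are rather more involved.

```python
# Invented criteria and weights purely to show the arithmetic of weighted scoring.
criteria_weights = {
    "library website experience": 0.35,
    "methodology": 0.25,
    "cost": 0.25,
    "references": 0.15,
}

# Invented scores out of 5 for two hypothetical suppliers.
supplier_scores = {
    "Supplier A": {"library website experience": 4, "methodology": 3, "cost": 5, "references": 4},
    "Supplier B": {"library website experience": 2, "methodology": 5, "cost": 4, "references": 3},
}

def weighted_total(scores: dict, weights: dict) -> float:
    """Multiply each criterion score by its weight and sum the results."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

for supplier, scores in supplier_scores.items():
    print(supplier, round(weighted_total(scores, criteria_weights), 2))
```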

The approach by the consultants was a fairly typical one with surveys, workshops and stakeholder interviews. It is quite intriguing that there was a lack of desktop research as that is often a strength of consultancy services. It is curious that feedback from the user survey was given more visibility than workshops and stakeholder interviews. There can be a tendency for stakeholder views to be given more credence.

The presentation identifies quite how inter-connected the library website is with the whole university website strategy; if that relationship isn't clear, delivering a roadmap for a library website becomes very difficult.

The comments from the surveys are particularly informative. It is fascinating that users aren't that interested in Web 2.0. It seems that there's a small group of enthusiastic adopters, but most students aren't yet convinced that it has any great appeal or relevance to them.

There's also the often-reported comment that users want a simplified 'Google-type' search box. That's a quest that has been exercising librarians and library systems people for quite some time, and many are looking at the likes of Summon as a solution. I'd comment here that Google has only recently started to bring back results that are more than just a list of webpages, and it doesn't really have the challenge of sorting out the range of different content types and access permissions that libraries have to cope with. Creating a simple search box to search all our content is only part of the answer; the key is more around presenting the search results in a meaningful way that distinguishes between websites, databases, books, ejournals, full text, abstracts, ebooks and an increasing amount of multimedia content.
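To make the presentation point concrete, here is a minimal sketch of the kind of grouping a discovery layer has to do before mixed results mean anything to a user. The result records and type labels are invented for illustration and bear no relation to Summon's actual data model.

```python
from collections import defaultdict

# Hypothetical search results; real discovery systems return much richer records.
results = [
    {"title": "Nature article on gene expression", "type": "ejournal"},
    {"title": "Introduction to Genetics", "type": "book"},
    {"title": "PubMed", "type": "database"},
    {"title": "Lecture recording: CRISPR basics", "type": "multimedia"},
]

def group_by_type(items):
    """Group a flat result list into facets so each content type is presented separately."""
    facets = defaultdict(list)
    for item in items:
        facets[item["type"]].append(item)
    return facets

for content_type, items in group_by_type(results).items():
    print(content_type)
    for item in items:
        print("  -", item["title"])
```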

Although UCD feel that the process could have achieved more, there are some really good reflections picked up in their presentation. Key points around the importance of communication, providing support and training, understanding user priorities, and integrating with the university web presence are valuable insights for university library websites.

I saw an interesting presentation today on web personalisation and profiling, delivered by Dr Nikolaos Nanas from LiSys in Greece. He talked about the way the Internet is moving away from being a digital library of content towards more user-generated (and user-'broadcast') content. He described the increasing number of tweets, status updates and shares as 'the real-time web', leading to a broadcast web that might challenge traditional mass media.

With the development of ‘the real-time web’ he sees a key feature being the need to personalise the information stream through a profile.  He suggested that profiles need to be:

  • Media-independent
  • Multi-modal
  • Multi-functional
  • Scalable
  • Dynamic
  • Variable

Dr Nanas described an information filtering system that has been inspired by how biological systems, in particular the immune system, function.  This autopoietic view describes how organisms distinguish between self and non-self as a way of determining essentially what belongs and what doesn’t.  He went on to draw an analogy with Adaptive Information Filtering.  The work led to the development of the Nootropia system.

One of the demonstration systems from the work is available at http://noo.lisys.gr/demoNoo3/noo.jsp. It pulls data from RSS feeds and ranks the items according to the profile you build up by clicking on news items. You can create an account on the system and try it for yourself.

[Screenshot: the Noo demo interface]

Demo systems have also been developed for search, using widgets (http://observatories.cereteth.gr), and as a news reader for iPhones.

The profiling system uses an analysis of the text in selected articles to build a profile of your interests. It appears to use some statistical analysis of word frequency, the distance between repeated words or phrases, and a hierarchical weighting system to calculate the relevance of articles; that relevance algorithm is then used to rank them. Dr Nanas did suggest that there was some in-built bias within the system towards the sources that you choose, but as this is learnt by the profile as part of the process it probably reflects the user's preference for particular sources.
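As a rough illustration of the general idea (and emphatically not the Nootropia algorithm itself), here is a minimal Python sketch of a click-trained profile that scores unseen articles by how strongly their words overlap with the vocabulary of articles the user has already clicked on. The class, tokeniser and toy articles are all invented for the example.

```python
import re
from collections import Counter

def tokenise(text: str) -> list[str]:
    """Lower-case the text and pull out alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

class Profile:
    """Very rough sketch of a click-trained interest profile.

    Terms from articles the user clicks are counted; unseen articles are scored
    by how strongly their terms overlap with that weighted vocabulary. This only
    illustrates the general idea of learning relevance from clicks.
    """

    def __init__(self):
        self.term_weights = Counter()

    def learn(self, clicked_article: str) -> None:
        self.term_weights.update(tokenise(clicked_article))

    def score(self, article: str) -> float:
        terms = tokenise(article)
        if not terms:
            return 0.0
        return sum(self.term_weights[t] for t in terms) / len(terms)

profile = Profile()
profile.learn("Open access journals and institutional repositories")

articles = [
    "New open access mandate for research councils",
    "Football transfer rumours round-up",
]
for article in sorted(articles, key=profile.score, reverse=True):
    print(f"{profile.score(article):.2f}  {article}")
```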

A fascinating session that illustrated how technologists are building search systems that can learn from user behaviour to improve the relevance of search results. Comparing these sorts of developments with the simple search and catalogue systems that libraries currently rely upon, it is evident quite how much scope there is to improve library search. In many cases library search hasn't really scratched the surface of personalisation. Even the new harvested-data discovery services such as Summon (http://www.serialssolutions.com/summon) don't yet have this sort of in-built profiling. And even where organisations do have data about their users (such as which course they are on), this personal data isn't being used to inform their interactions with library search systems.
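As a purely hypothetical sketch of what using that course data might look like, the snippet below boosts search results whose subject tags overlap with the subjects of a user's courses. The course codes, subject mappings, boost value and result format are all invented; no library system I know of works exactly this way.

```python
# Hypothetical mapping from course codes to broad subjects (invented for illustration).
COURSE_SUBJECTS = {
    "B204": {"biology", "genetics"},
    "H101": {"history"},
}

def personalised_rank(results, user_courses):
    """Re-rank results so items tagged with the user's course subjects float upward."""
    subjects = set().union(*(COURSE_SUBJECTS.get(code, set()) for code in user_courses))

    def boosted_score(result):
        # Simple additive boost when a result's subject tags overlap the user's courses.
        overlap = subjects & set(result["subjects"])
        return result["score"] + (0.5 if overlap else 0.0)

    return sorted(results, key=boosted_score, reverse=True)

results = [
    {"title": "A History of Europe", "subjects": ["history"], "score": 0.9},
    {"title": "Principles of Genetics", "subjects": ["genetics"], "score": 0.8},
]
# A student on course B204 sees the genetics title promoted above the history one.
print(personalised_rank(results, ["B204"]))
```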
