
Latest project
From February I’m going to be involved in a new project, STELLAR (Semantic Technologies Enhancing the Lifecycle of LeArning Resources), funded by JISC. In some ways the project connects with previous work I’ve been involved with: like the Lucero project, it will be employing linked data, and it will be working with learning materials, where I’ve had some involvement with our production and presentation learning systems through the VLE. But STELLAR will be dealing with a different area for me, in that we’ll be looking at my institution’s store of legacy learning materials. So it’s a good opportunity to learn more about curation, preservation and digital lifecycles.

STELLAR is particularly going to look at understanding the value of those legacy learning materials by talking to the academics who were involved in creating them. There are quite a few reasons why older course materials may still have value: they might be reusable in new courses, on the basis that reusing old materials can be less costly than creating new ones; they might have value in being transformable into Open Educational Resources; or they might have value as good historical examples of styles of teaching and learning. So STELLAR will be exploring different types and models of expressing the value of those materials.

Finding out about the value placed on these materials can also be an important factor in deciding which materials to preserve as a priority, or where to expend your resources, and we’d hope that STELLAR will help to inform HE policies as institutions build up increasing amounts of digital learning materials.

As part of STELLAR we will be taking some digital legacy learning material and transforming it into linked data (with some help from our friends in KMi). This gives us the opportunity to connect old course materials into the OU’s ecosystem by linking to existing datasets on current courses and OER material in OpenLearn. By transforming the content in this way we can then explore whether making it more discoverable changes the value proposition, makes the content more likely to be reused, or opens up other possibilities. It should be an interesting project and one that I’m looking forward to, as there are going to be a lot of opportunities to build up my understanding of these issues and aspects.

I’ve been to a couple of presentations in the last month about the Lucero linked data project, a JISC-funded project run by the OU’s Knowledge Media Institute that has been working to publish a fairly wide range of university material as linked data. One presentation, by the Project Director Mathieu d’Aquin, covered the wider project aspects for a university-wide audience; the other, by the Project Manager, Owen Stephens, was for a library audience.

It’s a project I’ve been fortunate enough to have some involvement with, and it has some impressive achievements for a short project: establishing the first university-wide linked data repository; releasing a range of different datasets, from institutional repositories to course data; and, not least, going some way towards getting the concepts of linked data out of the laboratory and into an area where they can start to be discussed as a practical technology.

Linked data
For anyone who isn’t familiar with Linked Data, it’s described by its proposer Tim Berners-Lee on his website thus:

‘The Semantic Web isn’t just about putting data on the web. It is about making links, so that a person or machine can explore the web of data.  With linked data, when you have some of it, you can find other, related, data’ 

[If you are interested in finding out more about Linked Data then is a reasonable starting place to explore].

I always find it interesting with new technologies how people describe them to other people. Mathieu described Linked Data as essentially publishing a raw database onto the web as RDF, with the data addressable using URIs, and talked of creating ‘a very big distributed dataspace’. That’s certainly something that is well illustrated by the ‘traditional’ linked data cloud image (without which no linked data presentation is complete). From more of a library perspective, Owen used the example of Charlotte Bronte as the creator of Jane Eyre to illustrate the subject, predicate and object ‘triple’.

Libraries and Linked Data
What has been particularly interesting from a library point of view is the way that linked data allows systems to extract data in new ways. For example, publishing course materials in RDF has made it possible to write queries that list all courses available in a particular country, something you can’t easily do from current websites. And you start to see all kinds of possibilities for libraries and search systems: you are potentially less constrained by having to decide in advance what types of query users can make of your data. I was interested in a comment made by Mathieu that the art of exploiting linked data is to build many small applications rather than a few big applications.
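The ‘courses by country’ query works because triples can be filtered by pattern rather than by whatever views a website happens to offer. A toy sketch in plain Python (the course codes and predicate names are invented for illustration — the real Lucero data is queried with SPARQL over RDF):

```python
# Toy triple store: a list of (subject, predicate, object) tuples.
# All identifiers here are hypothetical examples.
triples = [
    ("course:M101", "offeredIn", "France"),
    ("course:M101", "title", "Introductory maths"),
    ("course:A200", "offeredIn", "Germany"),
    ("course:A200", "offeredIn", "France"),
]

def match(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "All courses available in France" — a query nobody had to anticipate
# when the data was published.
courses_in_france = [s for s, _, _ in match(triples, p="offeredIn", o="France")]
print(courses_in_france)  # -> ['course:M101', 'course:A200']
```

The point is that the query is composed by the consumer of the data, not baked into the publisher’s website — which is also why many small applications become cheaper to build than a few big ones.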

Also last month there was the news that the Archives Hub, through the LOCAH project, has released some of its content as linked data as a proof of concept. So it seems to me that we are at an early stage for libraries in thinking about how Linked Data can be of use. Certainly one of the things we have to think about is whether we need to start to change our cataloguing practice; it’s clear that the way we catalogue isn’t ideal if we want to convert our catalogue data to Linked Data.

The process of deciding how you are going to express your data as Linked Data is quite a time-consuming one, and very much an on-the-fly activity, which I think is where libraries may start to feel a bit uncomfortable, without the safety net of some clear frameworks.

I think we’ve a way to go before this type of activity becomes commonplace, and maybe we need some tools that help us present our resources as Linked Data more easily. The obvious analogy is the early days of the web, when the first websites we built were in raw HTML; it wasn’t long before tools such as FrontPage and Dreamweaver came along that meant you could build sites without knowing too much HTML.

But I still think that there is massive potential within the Linked Data world, and libraries need to engage with it and start to build prototypes that can show the benefits. Certainly I’m hopeful that we’ll have the chance to do some further work in this area with our Digital Library.
