
It was Lorcan Dempsey who, I believe, coined the term ‘full library discovery’ in a blog post last year. As a stage beyond ‘full collection discovery’, ‘full library discovery’ adds in results drawn from LibGuides or library websites, alongside resource material from collections. So, for example, a search for psychology might include psychology resources, as well as help materials for those psychology resources and contact details for the subject librarian who covers psychology. Stanford and Michigan are two examples of that approach, combining lists of relevant resources with website results.

Princeton’s new All search feature offers a similar approach, discussed in detail in their FAQ. It combines results from their Books+, Articles+, Databases, Library Website and Library Guides into a ‘bento box’ style results display. Princeton’s approach is similar to the search from North Carolina State University, who I think were about the first to come up with this style.

[Screenshot: Princeton’s All search results page]

Although in most of these cases I suspect the underlying systems are quite different, the approach is very similar. It is ‘loosely-coupled’: the search results page is drawn together, ‘federated’ search fashion, by pushing your search terms to several different systems through their APIs and then displaying the results in a dashboard-style layout. That means changes to any of the underlying systems can be accommodated relatively easily, while the display to your users stays consistent.
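
As a rough illustration (and not anyone’s actual implementation), a bento-box search along these lines might fan the query out to each system’s API in parallel and keep the result sets separate, one per box. Everything below, the endpoint URLs, the response fields, the limit parameter, is a hypothetical placeholder:

```python
# A minimal sketch of the 'loosely-coupled' bento-box approach described
# above. All endpoints and response shapes are hypothetical; each real
# system (Summon, LibGuides, a site search, etc.) has its own API.
from concurrent.futures import ThreadPoolExecutor
import requests

# Hypothetical search backends, one per bento box.
BACKENDS = {
    "Books+":          "https://library.example.edu/api/catalogue/search",
    "Articles+":       "https://library.example.edu/api/articles/search",
    "Databases":       "https://library.example.edu/api/databases/search",
    "Library Website": "https://library.example.edu/api/site/search",
    "Library Guides":  "https://library.example.edu/api/guides/search",
}

def search_one(name, url, query):
    """Push the user's terms to a single backend; return its top hits."""
    try:
        resp = requests.get(url, params={"q": query, "limit": 3}, timeout=5)
        resp.raise_for_status()
        return name, resp.json().get("results", [])
    except requests.RequestException:
        return name, []  # a failing backend just leaves its box empty

def bento_search(query):
    """Query every backend in parallel; keep results in separate 'boxes'
    rather than merging them into one ranked list."""
    with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
        futures = [pool.submit(search_one, name, url, query)
                   for name, url in BACKENDS.items()]
        return dict(f.result() for f in futures)

for box, hits in bento_search("psychology").items():
    print(box, "->", len(hits), "results")
```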

For me the disadvantages are the lack of any overriding relevancy ranking across the material, and the way it perpetuates the ‘siloing’ of content (Books, Articles, Databases etc.), driven largely by the underlying silos of systems we rely on to manage that content. I’ve never been entirely convinced that users understand the distinction, what a ‘database’ is as opposed to anything else we offer. But the approach is probably as good as we can get until we have truly unified resource management and more control over relevancy ranking.
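
To make the ranking gap concrete, here is a hedged sketch of what an ‘overriding’ relevancy ranking might look like if you did merge the boxes: normalise each backend’s scores and interleave. It assumes every backend returns a numeric score field, which is itself an assumption; in practice most of these systems expose no comparable relevance score, which is partly why the bento-box layout sidesteps the problem.

```python
# A sketch, not a recipe: merge bento-box results into one ranked list by
# normalising each backend's (assumed) scores to a 0-1 range first.
def merge_ranked(boxes):
    """boxes maps a source name to hits like {'title': ..., 'score': ...};
    returns a single list, best first."""
    merged = []
    for source, hits in boxes.items():
        if not hits:
            continue
        top = max(h["score"] for h in hits)
        for h in hits:
            merged.append({**h, "source": source,
                           "score": h["score"] / top if top else 0.0})
    return sorted(merged, key=lambda h: h["score"], reverse=True)

boxes = {"Books+":    [{"title": "Psychology", "score": 8.2}],
         "Articles+": [{"title": "On memory", "score": 0.91},
                       {"title": "On habit",  "score": 0.40}]}
for hit in merge_ranked(boxes):
    print(hit["source"], hit["title"], round(hit["score"], 2))
```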

Going beyond ‘full library discovery’
But ‘full library discovery’ is still very much a ‘passive’ search tool, by which I mean it isn’t personalised or ‘active’. At some stage, to use those resources, a student will log in to the system, and that opens up an important question for me. Once you know who the user is, how far should you go in providing a personalised search experience? You know who they are, so you could provide recommendations based on what other students studying their course have looked at (or borrowed); you might even stray into ‘learning analytics’ territory and know which resources the highest-achieving students looked at.

You might know what resources are on the reading list for the course the student is studying, so do you search those resources first and offer them up as the most likely to be relevant? You might even know what stage a student has reached in their studies, what assignment they have to do, and what resources they need to be looking at. Do you ‘push’ those to the student?
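
As a minimal sketch of that first idea, assuming you have the module’s reading list as a set of identifiers and that search results carry a relevance score, one approach is a re-ranking step that boosts any hit that also appears on the list. The reading-list data, identifiers and boost factor below are all invented for illustration:

```python
# Hypothetical reading-list data: module code -> identifiers of list items.
READING_LISTS = {
    "PSY101": {"doi:10.1000/example", "isbn:9780140280197"},
}

def personalise(results, module_code, boost=2.0):
    """Re-rank results so items on the module's reading list float to the
    top, without hiding everything else from the student."""
    on_list = READING_LISTS.get(module_code, set())
    for r in results:
        if r["id"] in on_list:
            r["score"] *= boost  # promote reading-list matches
    return sorted(results, key=lambda r: r["score"], reverse=True)

hits = [{"id": "isbn:9999999999999", "title": "Other book", "score": 0.6},
        {"id": "doi:10.1000/example", "title": "Core text", "score": 0.4}]
print(personalise(hits, "PSY101")[0]["title"])  # -> Core text
```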

How far do you go in assembling a profile of what might be ‘recommended’ for a course, module or assignment, and of what other students in the cohort might be looking at, or looked at the last time the course ran? Do you look at a student’s previous search behaviour? How much of this might you do to build, and then search, some form of ‘knowledge base’, with the aim of surfacing the material that is likely to be of most relevance to a student? A search for psychology in NCSU’s Search All box gives you the top three articles out of 2,543,911 articles in Summon, and likely behaviour is not to look much beyond the first page of results. So shouldn’t we be making sure those three are the ones most likely to be relevant?

But then there’s serendipity: finding the different things you haven’t looked for, or read, before, because they are new or different. One of the issues with recommendations is the tendency for them to be circular, ‘what gets recommended gets read’, to corrupt the performance-indicator mantra. So how far do you go? ‘Mind-reading search’, anyone?

A couple of reflections on two aspects of data came to mind this week: firstly, tools to manipulate data from the growing range of datasets; and secondly, some thoughts on data for decision-making.

Data wrangling
Stanford University have released an alpha version of a nifty tool for manipulating data, the Data Wrangler. It allows you to take some data, paste it into the tool and then play around with reformatting it. It seems really powerful: when you select data in a cell, it uses what it knows about that data to offer suggested transformations. They demonstrate selecting a US state name, and the tool highlights all the state names and shows them in a separate column.

[Screenshot: the Stanford Data Wrangler interface]

When you play around with the tool you see that selecting something within the data prompts a series of suggestions for extracting it into a separate column, and you can step through a sequence of transformations until the data is much more usable. It looks like a really good device for tidying up a dataset into something that can be used far more easily in a spreadsheet or turned into a visualisation.

[Screenshot: a sequence of transformation steps in Data Wrangler]
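
To give a flavour of the kind of transformation Data Wrangler suggests, here is a rough hand-rolled equivalent of the state-name demo using Python and pandas. This is not Data Wrangler’s own code, and the sample rows and truncated state list are invented:

```python
# The same sort of transformation Data Wrangler proposes, done by hand:
# pull a US state name (and a year) out of a messy text column into
# columns of their own.
import pandas as pd

raw = pd.DataFrame({"line": [
    "Reported crime in Alabama 2004",
    "Reported crime in Alaska 2004",
    "Reported crime in Arizona 2004",
]})

STATES = ["Alabama", "Alaska", "Arizona"]  # truncated list for the sketch
state_pattern = "(" + "|".join(STATES) + ")"

raw["state"] = raw["line"].str.extract(state_pattern, expand=False)
raw["year"] = raw["line"].str.extract(r"(\d{4})", expand=False)
print(raw)
```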

There’s a really good introductory video here. At the moment you can only use the tool by going to their website and pasting in some text, but the intention is ultimately to release it as open source. Definitely one to keep an eye on.

Data and decision making
Just before the summer we had our annual library awayday (a sort of stay-awayday, as it was on campus this year), where we split up into teams and spent the day playing a business management game. We each ran a series of leisure/fitness centres, made decisions on fees and charges, staffing, marketing and so on, and went through a series of six-monthly business cycles to see the effect of our decisions.

The key to the game was that we were all presented with a series of sheets of data covering balance sheets, market intelligence, profit and loss accounts and the like, and each round we had an updated set of data to use in the next. The game was interesting, and by the end of several business cycles there was a wide variation in how successful each of the businesses had been: if I remember correctly, anything from a million-pound profit to a million-pound loss.

That made me think about data. There’s a business mantra that points to the importance of ‘facts and data’, and yes, facts and data are important when managing any business or service. But they are only half the story. The data has to be relevant, accurate and meaningful, and even accurate data has to be interpreted properly and the correct business decisions made on the back of it. In our game everyone started with the same facts and data but made different decisions, leading to radically different outcomes. Which suggests to me that the critical thing is making sure the right decisions are being made on the data.
