
The end of 2015 and the start of 2016 seem to have delivered a number of interesting reports and presentations relevant to the library technology sphere.  Ken Chad’s latest paper ‘Rethinking the Library Services Platform‘ picks up on the lack of interoperability between library systems, as does the new BiblioCommons report on the public library sector, ‘Essential Digital Infrastructure for Public Libraries in England‘, which comments that “In retail, digital platforms with modular design have enabled quickly-evolving omnichannel user experiences. In libraries, however, the reliance on monolithic, locally-installed library IT has deadened innovation”.

As ‘Rethinking the Library Services Platform‘ notes, in many ways the term ‘platform’ doesn’t really match the reality of the current generation of library systems.  They aren’t a platform in the way that an operating system such as Windows or Android is: third parties can’t build applications to run on top of them.  Yes, they offer integration with financial, student and reference management systems, but essentially they are the traditional library management system reimagined for the cloud.  Many of the changes are a consequence of what becomes possible with a cloud-based solution, so their features are shared knowledge bases and multi-tenanted applications shared by many users, as opposed to local databases and locally installed applications.  The approach from the dwindling number of suppliers is to try to build as many products as possible to meet library needs, sometimes by developing these products in-house (e.g. Leganto) and sometimes by acquiring companies whose products can be brought into the supplier’s ecosystem.  The acquisition model is exactly the same as that practised by both traditional and new technology companies as a way of building their reach.

I’m starting to view the platform as much more in line with the approach that a company like Google takes: a broad range of products aiming to secure customer loyalty to their ecosystem rather than to a competitor’s.  So it may not be so surprising that technology innovation, which to my mind is largely driven by vendors innovating to deliver what they see as library needs, shaped by what they perceive as commercial opportunity, isn’t delivering the sort of platform that is suggested.  As Ken notes, Jisc’s LMS Change work discussed back in 2012 the sort of loosely-coupled, component-based approach to library systems that would give libraries the ability to integrate different elements from a range of options to best fit their needs.  But in my view the options have very much narrowed since 2012/13.

The BiblioCommons report I find particularly interesting, as it includes an assessment of how the format silos between print and electronic lead to a poor experience for users: in this case, how ebook access simply doesn’t integrate into OPACs, with applications such as OverDrive being used that are separate from the OPAC (Buckinghamshire library service’s separate ebooks platform and library catalogue are typical).  Few if any public libraries will have invested in the class of discovery systems now common in Higher Education (and essentially being proposed in this report), but even with discovery systems the integration of ebooks isn’t as seamless as we’d want, with users ending up in a variety of different platforms, each with its own interface and restrictions on what can be done with the ebook.  In some ways, though, the public library ebook offer, which does integrate to some extent with the consumer ebook world of Kindle ebooks, is better than the HE world of ebooks, even if the integration through discovery platforms in HE is better.  What did intrigue me about the BiblioCommons proposal is the plan to build some form of middleware system using ‘shared data standards and APIs’, and that leads me to wonder whether this can be part of the impetus for changing the way that library technology interoperates.  The plan includes, in section 10.3, the proposal to ‘deliver middleware, aggregation services and an initial complement of modular applications as a foundation for the ecosystem, to provide a viable pathway from the status quo towards open alternatives‘, so maybe this might start to make that sort of component-based platform and ecosystem a reality.

Discovery is the challenge that Oxford’s ‘Resource Discovery @ The University of Oxford‘ report is tackling.  The report, by the consultancy Athenaeum 21, looks at discovery from the perspective of a world-leading research institution with large collections of digital content, and looks at connecting not just resources but researchers, with visualisation tools for research networks and advanced search tools such as Elasticsearch.  The recommendations include activities described as ‘Mapping the landscape of things’, ‘Mapping the landscape of people’ and ‘Supporting researchers’ established practices’.  In some ways the problems being described echo the challenge faced by public libraries of finding better ways to connect users with content, but on a different scale, and extending to other cultural sector institutions such as museums.

I also noticed a presentation from Keith Webster of Carnegie Mellon University, ‘Leading the library of the future: W(h)ither technical services?’.  This slide deck takes you through a great summary of where academic libraries are now and the challenges they face with open access, pressure on library budgets and changes in scholarly practice.  A wide-ranging presentation, it covers the changes that led to the demise of chains like Borders and Tower Records and sets the library in the context of changing models of media consumption.  Of particular interest to me were the later slides about areas for development that, like the other reports, had improving discovery as part of the challenge.  The slides clearly articulate the need for innovation as an essential element of work in libraries (measured, for example, as a percentage of time spent compared with routine activities) and also the value of metrics around impact, something of particular interest in our current library data project.

Four different reports, across different types of libraries and cultural institutions, but all of them seem to me to be grappling with one issue: how do libraries reinvent themselves to maintain a role in the lives of their users when their traditional role is being eroded or other challengers are out-competing them – whether by improving discovery, by changing to stay relevant, or by doing something different that users will value.

data.path Ryoji.Ikeda – 3 by r2hox
https://flic.kr/p/gdMrKi

One of the pieces of work we’re just starting off in the team this year is to do some in-depth work on library data.  In the past we’ve looked at activity data and how it can be used for personalised services (e.g. to build recommendations in the RISE project or more recently to support the OpenTree system), but in the last year we’ve been turning our attention to what the data can start to tell us about library use.

There have been a couple of activities that we’ve undertaken so far.  We’ve provided some data to an institutional Learning Analytics project on the breakdown of library use of online resources for a dozen or so target modules.  We’ve been able to take data from the EZproxy logfiles and show the breakdown by student ID, by week and by resource over the nine-month life of the different modules.  That has put library data alongside other data, such as use of the Virtual Learning Environment, and allowed module teams to look at how library use might relate to the other data.

Pattern of week by week library use of eresources – first level science course
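
To make the shape of that analysis concrete, here is a minimal sketch of the kind of aggregation involved, assuming an EZproxy log in a common Apache-style format where the third field holds a (pseudonymised) user ID.  The field positions and date format are assumptions; EZproxy log formats are configurable.

```python
import re
from collections import Counter
from datetime import datetime
from urllib.parse import urlparse

# Matches lines like: 1.2.3.4 - user123 [01/Feb/2016:10:00:00 +0000] "GET http://... HTTP/1.0" ...
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<date>[^:]+)[^\]]*\] "(?:GET|POST) (?P<url>\S+)'
)

def weekly_usage(log_path):
    """Count accesses per (user, ISO week, resource host) from an EZproxy log."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            m = LOG_LINE.match(line)
            if not m or m.group("user") == "-":
                continue  # skip unauthenticated or unparseable lines
            day = datetime.strptime(m.group("date"), "%d/%b/%Y").date()
            week = day.isocalendar()[1]  # ISO week number
            host = urlparse(m.group("url")).netloc or m.group("url")
            counts[(m.group("user"), week, host)] += 1
    return counts
```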

A colleague has also been able to make use of some data combining library use and satisfaction survey data for a small number of modules, to shed a little light on whether satisfied students were making more use of the library than unsatisfied ones (obviously not a causal relationship, but initial indications seem to be that for some modules there does seem to be a pattern there).
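
The sort of module-by-module comparison involved might look something like the sketch below; the file and column names are hypothetical, and as noted above this shows association rather than causation.

```python
import pandas as pd
from scipy.stats import spearmanr

usage = pd.read_csv("module_usage.csv")    # columns: student_id, module, accesses
survey = pd.read_csv("satisfaction.csv")   # columns: student_id, module, satisfied (0/1)

# Join the two datasets on student and module, then test each module separately
merged = usage.merge(survey, on=["student_id", "module"])
for module, group in merged.groupby("module"):
    rho, p = spearmanr(group["accesses"], group["satisfied"])
    print(f"{module}: rho={rho:.2f}, p={p:.3f}, n={len(group)}")
```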

Library Analytics roadmap
But these have been really early exploratory steps, so during last year we started to plan out a Library Analytics Roadmap to scope out the range of work we need to do.  This covers not just data analysis, but also some infrastructural developments to help with improving access to data and some effort to build skills in the library.  It is backed up with engagement with our institutional learning analytics projects and some work to articulate a strategy around library analytics.  The idea being that the roadmap activities will help us change how we approach data, so we have the necessary skills and processes to be able to provide evidence of how library use relates to vital aspects such as student retention and achievement.

Library data project
We’re working on a definition of Library analytics as being about:

Using data about student engagement with library services and content to help institutions and students understand and improve library services to learners

Part of the roadmap activity this year is to start to carry out a more systematic investigation into library data, to match it against student achievement and retention data.  The aim is to build an evidence base of case studies, based on quantitative data and some qualitative work we hope to do.  Ideally we’d like to be able to follow the paths mapped out by the likes of Minnesota, Wollongong and Huddersfield in their various projects and demonstrate that there is a correlation between library use, student success and retention.

Challenges to address
We know that we’re going to need more data analysis skills, and some expertise from a statistician.  We also have some challenges because of the nature of our institution.  We won’t have library management system book loans, or details of visits to the library; we will mainly have to concentrate on use of online resources.  So in some ways that simplifies things.  But our model of study also throws up some challenges.  At a traditional campus institution students study a degree over three or four years.  There is a cohort of students that follows through year 1, 2, 3 etc., and at the end of that period they do their exams and get their degree classification.  So it is relatively straightforward to see retention as being about students that return in year 2 and year 3, or don’t drop out during the year, and to see success measured as their final degree classification.  But with part-time distance learning, although students sign up to a qualification, they follow their own pattern of modules, many taking longer than six years to complete, often with one or more ‘breaks’ in study, so following a cohort across modules might be difficult.  So we might have to concentrate on analysis at the ‘module’ level… but then that raises another question for us.  Our students could be studying more than one module at a time, so how do you easily know whether their library use relates to module A or module B?  Lots of things to think about as we get into the detail.
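
As an illustration of that attribution problem, one naive option would be to split each access event evenly across whatever modules the student was registered on at the time.  A rough sketch, with hypothetical data structures:

```python
def attribute_accesses(accesses, registrations):
    """
    accesses: iterable of (student_id, date) access events
    registrations: dict of student_id -> list of (module, start_date, end_date)
    Returns a dict of module -> fractional access count.
    """
    totals = {}
    for student, day in accesses:
        # modules the student was actively studying on that day
        live = [m for (m, start, end) in registrations.get(student, [])
                if start <= day <= end]
        for module in live:
            # a crude assumption: share the event equally between modules
            totals[module] = totals.get(module, 0) + 1 / len(live)
    return totals
```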

We’ve been running Primo as our new Library Search discovery system since the end of April so it’s been ‘live’ for just over four months.  Although it’s been a quieter time of year over the summer I thought it would be interesting to start to see what the analytics are saying about how Library Search is being used.

Internal click-throughs
Some analytics are provided by the supplier in the form of click-through statistics, and there are some interesting figures that come out of those.  The majority of searches, some 85%, are ‘Basic’ searches; only about 11% of searches use Advanced search.  Advanced search isn’t offered against the Library Search box embedded in the home page of the library website, but is offered next to the search box on the results page and on any subsequent search.  That’s probably slightly less use than I might have expected, as Advanced search seemed to be fairly frequently mentioned as being used regularly on our previous search tool.

About 17% of searches lead to users refining their search using the facets.  Refining the search using facets is something we are encouraging users to do, so that’s a figure we might want to see going up.  Interestingly, only 13% navigated to the next page in a set of search results using the forward arrow, suggesting that users overwhelmingly expect to see what they want on the first page of results.  (I have a slight suspicion about this figure, as the interface presents links to pages 2-5 as well as the arrow – which goes to page 6 onwards – and I wonder if pages 2-5 are taken into account in the click-through figure.)

Very few searches (0.5%) led users to the bX recommendations, despite these being in a prominent place on the page.  The ‘Did you mean’ prompt also seemed to have been used in only 1% of searches.  The bookshelf feature, ‘add to e-shelf’, is used in about 2% of searches.

Web analytics
Browsers used (pie chart)

Looking at web analytics shows that Chrome is the most popular browser, followed by Internet Explorer, Safari and Firefox.

75% of traffic comes from Windows computers, with 15% from Macintoshes.  Tablet traffic, running at about 6.6%, is similar to what we see on our main library website, but mobile traffic is a bit lower at just under 4%.

Overall impressions
Devices using Library Search seem pretty much in line with traffic to other library websites.  There’s less mobile phone use, but possibly that is because Primo isn’t particularly well-optimised for mobile devices; it’s also something to test with users – whether they are all that interested in searching library discovery systems through mobile phones.

I’m not so surprised that basic search is used much more than advanced search.  It matches the expectation from the student research of a ‘Google-like’ simple search box.  The data seems to suggest that users expect to find relevant results on page one and not go much further – again something to test with users: are they getting what they want?  Perhaps I’m not too surprised that the ‘recommender’ suggestions are not being used, but it implies that having them at the top of the page might be taking up important space that could be used for something more useful to users.  Some interesting pointers about things to follow up in research and testing with users.

 

At the end of November I was at a different sort of conference to the ones I normally get to attend.  This one, Design4learning, was held at the OU in Milton Keynes but was a more general education conference.  Described as aiming “to advance the understanding and application of blended learning, design4learning and learning analytics”, it covered topics such as MOOCs, elearning, learning design and learning analytics.

There was a useful series of presentations at the conference, and several of them are available from the conference website.  We’d put together a poster for the conference talking about the work we’ve started to do in the library on ‘library analytics’, entitled ‘Learning Analytics – exploring the value of library data’, and it was good to talk to a few non-library people about the wealth of data that libraries capture and how that can contribute to the institutional picture of learning analytics.

Our poster covered some of the exploration that we’ve been doing, mainly with online resource usage from our EZProxy logfiles.  In some cases we’ve been able to join that data with demographic and other data from surveys to start to look in a very small way at patterns of online library use.

Design4learning conference poster

The poster also highlighted the range of data that libraries capture and the sorts of questions that could be asked and potentially answered.  It also flagged up the leading-edge work by projects such as Huddersfield’s Library Impact Data Project and the work of the Jisc LAMP project.

An interesting conference and an opportunity to talk with a different group of people about the potential of library data.

Photograph of office buildings at Holborn Circus

Holborn Circus – I was struck by the different angles of the buildings

Themes

For me two big themes came to mind after this year’s Future of Technology in Education Conference (FOTE): the first around creativity, innovation and co-creation; the second about how fundamental data and analytics are becoming.

Creativity, innovation and co-creation

Several of the speakers talked about innovation and creativity.  Dave Coplin talked of the value of Minecraft and Project Spark and the need to create space for creativity, while Bethany Koby showed us examples of some of the maker kits ‘Technology Will Save Us’ are creating.

Others talked of ‘flipping the classroom’ and learning from students as well as co-creation and it was interesting in the Tech start-up pitchfest that a lot of the ideas were student-created tools, some working in the area of collaborative learning.

Data and analytics

The second big trend for me was about analytics and data.  I was particularly interested to see how many of the tools and apps being pitched at the conference had an underlying layer of analytics.  Evaloop, working in the area of student feedback; Knodium, a space for student collaboration; Reframed.tv, offering interaction and sharing tools for video content; Unitu, an issues-tracking tool; and MyCQs, a learning tool – all seemed to make extensive use of data and analytics, while Fluency included teaching analytics skills.  It is interesting to see how many app developers have learnt the lessons of Amazon and Google about the value of the underlying data.

Final thoughts and what didn’t come up at the conference

I didn’t hear the acronym MOOC at all – slightly surprising, as it was certainly a big theme of last year’s conference.  Has the MOOC bubble passed, or is it just embedded into the mainstream of education?  Similarly Learning Analytics (as a specific theme): certainly analytics and data were mentioned (as I’ve noted above), but of Learning Analytics there was not a mention – maybe it’s embedded into HE practice now?

Final thoughts on FOTE.  A different focus to previous years but still with some really good sessions and the usual parallel social media back-channels full of interesting conversations. Given that most people arrived with at least one mobile device, power sockets to recharge them were in rather short supply.

To Birmingham at the start of last week for the latest Jisc Library Analytics and Metrics Project (http://jisclamp.mimas.ac.uk/) Community Advisory and Planning group meeting.  This was a chance to catch up with both the latest progress and the latest thinking about how this library analytics and metrics work will develop.

At a time when learning analytics is a hot topic it’s highly relevant for libraries to consider how they might respond to its challenges.  The 2014 Horizon report has learning analytics in the category of one year or less to adoption and describes it as ‘data analysis to inform decisions made on every tier of the education system, leveraging student data to deliver personalized learning, enable adaptive pedagogies and practices, and identify learning issues in time for them to be solved.’

LAMP is looking at library usage data of the sort that libraries collect routinely (loans, gate counts, eresource usage), but combines it with course, demographic and achievement data to allow libraries to start to analyse and identify trends and themes in the data.

LAMP will build a tool to store and analyse data and is already working with some pilot institutions to design and fine-tune the tool.  We got to see some of the work so far and input into some of the wireframes and concepts, as well as hear about some of the plans for the next few months.

The day was also the chance to hear from the developers of a reference management tool called RefMe (www.refme.com).  This referencing tool is aimed at students, who often struggle with the typically complex requirements of referencing styles and tools.  To hear about one-click referencing, with thousands of styles and with features to integrate with MS Word, or to scan a book’s barcode and reference it, was impressive.  RefMe is available as an iOS or Android app and as a desktop version.  As someone who’s spent a fair amount of time wrestling with the complexities of referencing in projects that have tried to get simple referencing tools in front of students, it is really good to see a start-up tackling this area.

I picked up over the weekend via the No Shelf Required blog that EBSCO Discovery usage data is now being added into Plum Analytics.  EBSCO’s press release talks about providing “researchers with a much more comprehensive view of the overall impact of a particular article”.  Plum Analytics was fairly recently taken over by EBSCO, so it’s not so surprising that they’d be looking at how EBSCO’s data could enhance the metrics available through Plum Analytics.

It’s interesting to see the different uses that activity data in this sphere can be put to.  There are examples of it being used to drive recommendations, such as hot articles, or Automated Contextual Research Assistance, and LAMP is talking of using activity data for benchmarking purposes.  So you’re starting to see a clutch of services being driven by activity data, just as the likes of Amazon drive so much of what appears on their sales site with data derived from customer activity.

Infographics and data visualisations seem to be very popular at the moment, and for a while I’ve been keeping an eye on visual.ly as they have some great infographics and data visualisations.  One of the good things about the visual.ly infographics is that there is some scope to customise them.  So, for example, there is one about the ‘Life of a hashtag’ that you can customise, and several others around Facebook and Twitter that you can use.

I picked up on Twitter the other week that they had just brought out a Google Analytics infographic.  That immediately got my interest, as we make a lot of use of GA.  You just point it at your site through your Google Analytics account and then get a weekly email, ‘Your weekly insights’, created dynamically from your Google Analytics data.

It’s a very neat idea and quite a useful promotional tool to give people a quick snapshot of what is going on.  So you get Pageviews over the past three weeks, the trends for New and Returning Visitors, and reports on Pages per visit and Time on site and how they have changed in the past week.

It’s quite useful for social media traffic, showing how Facebook and Twitter traffic has changed over the past week, and as these are the sorts of channels you often want quick feedback on, it is a nice visual way of showing what difference a particular activity might have had.

Obviously, as a free tool, there’s a limit to the customisation you can do.  It might be nice to have visits or unique visitors to measure change in use of the site, or your top referrals, or the particular pages that have been used most frequently.  The time period is something that possibly makes it less useful for me, in that I’m more likely to want to compare against the previous month (or even this month last year).  But no doubt visual.ly would build a custom version for you if you wanted something particular.

But as a freely available tool it’s a useful thing to have.  The infographic is nicely presented and gives a visually appealing presentation of analytics data that can often be difficult to present to audiences who don’t necessarily understand the intricacies of web analytics.

The Google Analytics Visual.ly infographic is at https://create.visual.ly/graphic/google-analytics/

Encouraged by some thinking about what sort of prototype resource usage tools we want to build to test with users in a forthcoming ‘New tools’ section, I’ve been starting to think about what sort of features you could offer library users to let them take advantage of library data.

Early steps
For a few months we’ve been offering users of our mobile search interface (which just searches our EBSCO discovery system) a list of their recently viewed items and their recent searches. The idea behind testing it on a mobile device was that giving people a link to their recent searches or items viewed would make it easier for them to get back to things they had accessed on their mobile device by clicking single links, rather than having to bookmark them or type in fiddly links. At the moment the tool just lists the resources and searches you’ve used through the mobile interface.

But our next step is to make a similar tool available through our main library website as a prototype of ‘articles I’ve viewed’. And that’s where we start to wonder whether the mobile version of the searches/results should be kept separate from the rest of your activities, or whether users would expect that, like a Kindle ebook that you can sync across multiple devices, your searches and activity should be consistent across all platforms.

At the moment our desktop version has all your viewed articles, regardless of the platform you used. But users might want to know in future which device they used to access the material, perhaps because some material isn’t easily accessible through a mobile device. That opens up another question: the mobile version and the desktop version may be at different URLs, so you might want them pulled together as one resource, with automatic detection of your device when you go to access it.
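
A device-agnostic ‘recent activity’ store of the kind implied above could be as simple as one bounded history per user, with the device recorded on each entry so a single synced history can still say where an item was viewed. A minimal sketch, with hypothetical names:

```python
from collections import defaultdict, deque

class RecentActivity:
    """One bounded activity history per user, shared across devices."""
    def __init__(self, max_items=50):
        self._history = defaultdict(lambda: deque(maxlen=max_items))

    def record(self, user_id, item_url, device):
        # newest entries first; the device is kept so the history
        # can still show where an item was viewed
        self._history[user_id].appendleft({"url": item_url, "device": device})

    def recent(self, user_id, n=10):
        return list(self._history[user_id])[:n]

store = RecentActivity()
store.record("u123", "http://example.org/article/42", device="mobile")
store.record("u123", "http://example.org/article/99", device="desktop")
print(store.recent("u123"))  # newest first, regardless of device
```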

Next steps
With data about which resources and which library web pages are being accessed, it starts to open up the possibility of some more user-centred uses of library activity and analytics data.

So you could conceive of spotting a spike of users accessing the Athens problems FAQ page and tying it to users trying to access Athens-authenticated resources. Being able to match activity with students being on a particular module could allow you to automatically push some more targeted help material, maybe into the VLE website for relevant modules, as well as flag up a potential issue to the technical and helpdesk teams.
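
Spotting that spike could start from something very simple, like flagging days well above a trailing average of page views. A sketch, with an arbitrary window and threshold:

```python
def find_spikes(daily_views, window=28, threshold=3.0):
    """Return indices of days more than `threshold` standard deviations
    above the mean of the preceding `window` days."""
    spikes = []
    for i in range(window, len(daily_views)):
        recent = daily_views[i - window:i]
        mean = sum(recent) / window
        sd = (sum((x - mean) ** 2 for x in recent) / window) ** 0.5
        if sd > 0 and daily_views[i] > mean + threshold * sd:
            spikes.append(i)
    return spikes
```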

You could also contemplate mining reading lists and course schedules to predict when particular activities are scheduled, and automatically push relevant help, support or online tutorials to students. Some of the most interesting areas seem to me to be around building skills and using activity (or lack of activity) to trigger promotion of targeted skills-building activities. So, knowing that students on module X should be doing an activity that involves looking at a particular set of resources, you could detect the students who haven’t accessed those resources and offer them some specific help material, or even contact from a librarian. Realistically those sorts of interventions simply couldn’t be managed manually and would have to rely on some form of learning analytics-type trigger system.
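
The trigger itself could be as simple as a set difference between the students on the module and the students who have touched the expected resources. A sketch, with hypothetical names:

```python
def students_needing_help(module_students, expected_resources, access_log):
    """
    module_students: set of student IDs registered on the module
    expected_resources: set of resource IDs the activity requires
    access_log: iterable of (student_id, resource_id) access events
    Returns the students who haven't used any of the expected resources.
    """
    engaged = {s for s, r in access_log if r in expected_resources}
    return module_students - engaged
```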

One of the areas that would be useful to look at would be some form of student dashboard for library engagement. This could give students some data about the engagement they have had with the library, e.g. resources accessed, library skills activities completed, library badges gained, library visits, books/ebooks borrowed etc., maybe set against averages for their course, and perhaps with some metrics about what high-achieving students did on the last presentation of their course. Add to that a bookmarking feature, lists of recent searches and resources used, and lists of loans/holds, finished off with useful library contacts and some suggested activities that might help them with their course, based on what is known about the level of library skills needed in the course.
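
The data behind such a dashboard might look something like the sketch below; the fields are hypothetical, and the course average is the obvious first comparison point.

```python
from dataclasses import dataclass

@dataclass
class LibraryEngagement:
    student_id: str
    resources_accessed: int
    skills_activities_completed: int
    badges_gained: int
    items_borrowed: int

def dashboard_row(student, cohort):
    """One student's figures set against their course average."""
    avg = sum(s.resources_accessed for s in cohort) / len(cohort)
    return {
        "resources accessed (you)": student.resources_accessed,
        "resources accessed (course average)": round(avg, 1),
        "skills activities completed": student.skills_activities_completed,
        "badges gained": student.badges_gained,
        "items borrowed": student.items_borrowed,
    }
```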

Before you can do some of the more sophisticated learning analytics-type activities, I suspect it would be necessary to have a better understanding of the impact that library activities/skills/resources have on student retention and achievement. And that seems to me to argue for some really detailed work to understand library impact at a ‘pedagogic’ level.

A quick trip to Manchester yesterday to take part in a symposium at ALT-C on ‘Big Data and Learning Analytics’ with colleagues from the OU (Simon Buckingham Shum, Rebecca Ferguson, Naomi Jeffrey and Kevin Mayles) and Sheila MacNeill from JISC CETIS (who has blogged about the session here).

It was the first time I’d been to ALT-C and it was just a flying visit on the last morning of the conference so I didn’t get the full ALT-C experience.  But I got the impression of a really big conference, well-organised and with lots of different types of sessions going on.  There were 10 sessions taking place at the time we were on, including talks from invited speakers.  So lots of choice of what to see.

But we had a good attendance at the session, with a good mix of people and a good debate and questions during the symposium.  Trying both to summarise an area like Learning Analytics and to give people an idea of the range of activities going on is tricky in a one-hour symposium, but hopefully it gave enough of an idea of some of the work taking place and some of the issues and concerns that there are.

Cross-over with other areas
Sheila had a slide pointing out the overlaps between the Customer Relationship Management world, Business Intelligence and Learning Analytics, and it struck me that there’s also another group in the Activity Data world that crosses over.  Much of the work I mentioned (RISE and Huddersfield’s fantastic work on library impact) came out of JISC’s Activity Data funding stream, and some of the synthesis project work has been ‘synthesised’ into the website ‘Exploiting activity data in the academic environment’ (http://www.activitydata.org/).  Many of the lessons learnt listed there, particularly around what you can do with the data, are equally relevant to Learning Analytics.  JISC are also producing an Activity Data report in the near future.

Interesting questions
A lot of the questions in the session were as much around the ethics as the practicality.  Particularly interesting was the idea that there are risks in Learning Analytics encouraging a view that so much can be boiled down to a set of statistics, which sounded a bit like norms to me.  The sense-making element seems to be really key, as with so much data and statistics work.

I’d also talked a bit about being able to use the data to make recommendations, something we had experimented with in the RISE project.  It was interesting to hear views about the danger of recommendations reducing rather than expanding choice: as people are encouraged to select from a list of recommendations, the choices they make reinforce the recommendations, leading to a loop.  If you are making recommendations based on what people on a course looked at then I’d agree that it is a risk, especially as I think there’s a huge probability that people are often going to be looking at resources that they have to look at for their course anyway.

When it comes to other types of recommendations (such as ‘people looking at this article also viewed this other article’, or ‘people searching for this term looked at these items’) there is still some chance of recommendations reinforcing a narrow range of content, but I’d suggest there is also some chance of serendipitous discovery of material that you might not ordinarily have seen.  I’m aware that we’ve very much scratched the surface with recommendations, using simple algorithms designed around the idea that the more people who shared a viewing pattern, the better the recommendation.  But it may be that more complex algorithms that throw in some ‘randomness’ might be useful.
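
As an illustration of that last point, here is a sketch of a simple co-occurrence recommender with a little randomness mixed in, so a random long-tail item occasionally replaces one of the top co-viewed items.  This is hypothetical rather than what RISE actually did.

```python
import random
from collections import Counter

def recommend(item, views_by_user, k=5, epsilon=0.2):
    """views_by_user: dict of user ID -> set of item IDs they viewed."""
    co_views = Counter()
    for viewed in views_by_user.values():
        if item in viewed:
            co_views.update(viewed - {item})  # count co-viewed items
    top = [i for i, _ in co_views.most_common(k)]
    # occasionally swap the last slot for a random item outside the top list
    all_items = {i for viewed in views_by_user.values() for i in viewed}
    pool = list(all_items - set(top) - {item})
    if pool and top and random.random() < epsilon:
        top[-1] = random.choice(pool)
    return top
```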

One of the elements I think is useful about the concept of recommendations is that people largely accept them (and perhaps expect them), as they are ubiquitous on sites like Amazon.  And I wonder if you could almost consider them a personalisation feature that signals that your service is modern, up to date and engaging with users.  For many library systems that still look old-fashioned and ‘librarian’-orientated, perhaps it is equally important to be seen to have these types of features as standard.

Update: Slides from the introductory presentation are here
