The other week I was at a gathering of developers, librarians and researchers with an interest in institutional data repositories. Amongst other things, we spent some time brainstorming the requirements for such a repository, but there was one minor-sounding one that caught my imagination.

It boiled down to this question: given only the DOI for a published article (or other artefact), how do you find an open access copy archived in an institutional repository? Some (rather cursory) Googling didn’t come up with an obvious solution, so I thought “How hard can it be to implement?”.

All that’s required is a database mapping DOIs onto URLs, and a spot of glue to make it accessible over the web. The data that you need is freely available in machine-readable format from most repositories via OAI-PMH, so you can fill up the database using that as a data source.
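To make the idea concrete, here's a minimal Ruby sketch of the core DOI-to-URL mapping. The record structure and helper names here are my own invention for illustration, not taken from the actual source on GitHub: it assumes each harvested Dublin Core record yields a list of `dc:identifier` strings.

```ruby
# Each repository record harvested over OAI-PMH (as Dublin Core) typically
# carries several dc:identifier values; pick out the DOI and the repository URL.
def extract_mapping(identifiers)
  doi = identifiers.find { |id| id =~ %r{\A(doi:|https?://(dx\.)?doi\.org/)} }
  url = identifiers.find { |id| id =~ %r{\Ahttps?://} && id !~ /doi\.org/ }
  return nil unless doi && url
  # Normalise the DOI to its bare form, e.g. "10.1234/abcd"
  [doi.sub(%r{\A(doi:|https?://(dx\.)?doi\.org/)}, ""), url]
end

# Build the DOI => URL lookup table from a batch of harvested records
def build_index(records)
  records.each_with_object({}) do |identifiers, index|
    doi, url = extract_mapping(identifiers)
    index[doi] = url if doi
  end
end
```

A lookup is then just `index[doi]`, and the web glue only needs a single route that redirects to the stored URL.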

So, without further ado here it is:

A few caveats:

  1. I don’t get much chance to write code at work at the moment, so this was an opportunity to exercise under-used brain muscles and learn some new stuff. It could probably be done better (and the source code is on github, so feel free to fork it and add improvements). It’s written in Ruby using the awesome Sinatra web framework.
  2. It’s currently hosted on Heroku’s free starter-level service, so there’s very little capacity. It therefore only includes DOIs from the University of Bath’s Opus repository, and the database is full.

Go try it out and let me know what you think. If it’s useful, I’ll look into how I can make it more robust and resolve more DOIs.


It’s taken a while for me to realise it, but I’m a bit of a list-maker. Some years ago I read David Allen’s Getting Things Done (often abbreviated as GTD) and found some useful tips that have had a big impact in how I manage my tasks and my time.

There are heaps of apps to help you Get your Things Done, but I generally seem to oscillate between two: Omni Group’s OmniFocus and Cultured Code’s Things. The choice between the two is closely balanced in my head, and I seem to end up switching every 12-18 months. Until recently, Things’ lightning-fast cloud sync had won me over, but now OmniFocus has tempted me back with its general feature-richness.

Some key factors for me:

  • One system for everything:
    • One system that syncs across computers and mobile devices, so I always have it with me;
    • One system for work stuff and personal stuff, because sometimes I need to phone my bank while at work and sometimes the solution to a work problem comes to me while watching TV;
  • Multiple ways of structuring and viewing tasks:
    • When I need to check that I’ve captured all my tasks, I need to view them by project to see what’s missing;
    • When I need to actually do things, I need to see my tasks by context, i.e. what equipment/location is required to do them.

Aside: switching is not inefficient

You might think that it’s a waste of time laboriously transferring all my projects and tasks from one system to another, but it’s really not. This only happens once every 12-18 months, and it’s a great way to do a full audit of everything I want to achieve, spot what’s missing and cull the dead wood.

Even if you have one task management system that works for you, I suggest you try occasionally printing the whole lot off (on real dead trees) and re-entering the important stuff. Because it takes more effort, it makes you more ruthless in what stuff you allow onto your todo list and sharpens your focus on what’s important.

OmniFocus vs. Things

OmniFocus’ strength is its flexibility. Each task has not only a title and a checkbox, but a project, a context, a start date, a due date, an expected amount of effort and, if that’s not enough, a freeform note field. It has a rich, hierarchical structure for projects and tasks, and the ability to create customised views of the system, or “perspectives”.

Things, on the other hand, strives for simplicity. It lacks much of the complexity of OmniFocus and replaces it with tags. Tags can be hierarchical, which is handy, and because you can assign more than one to a task, you can actually use them to replicate a number of OmniFocus’ detail fields.

Things is pretty good…

That simplicity means that there’s very little effort involved in using Things — just throw in your tasks and get started. You can assign one or more tags to each task and then filter on those, and that allows you to replicate quite a lot of what OmniFocus offers.

The other area where Things beats OmniFocus is in synchronisation. Every time you make a change in Things it’s synced up to the cloud, and updating another app takes moments. There’s no need to manually initiate a sync, so everything is always available everywhere.

…but OmniFocus is winning

Sooner or later, though, the lack of expressiveness in Things gets to me. OmniFocus panders to my desire for structure: I can have tasks from any project (or any part of a project) appear one at a time or all at once. That all takes a little more time to set up (though it soon becomes second nature), but it means when I actually want to get on with work I see only the tasks I need to see and no more.

OmniFocus’ perspectives are another example of where the extra power is useful. It’s trivial to set up one-click views that only show a certain set of projects (such as work stuff) or a particular set of tasks (such as things I can do offline), or even just group tasks differently (such as by due date or age).

Finally, the iPad app for OmniFocus has a killer feature: Review mode. This makes it trivial for me to sit down at the end of each week with a cup of tea and go through the entire system, finishing off loose ends and capturing next actions. This is central to the GTD way, and is the part of my routine that guarantees everything is in order and nothing gets missed.

Of course there are plenty of situations where you don’t need all of this complexity, and that’s fine too. OmniFocus doesn’t force you to use every feature to have a functioning system: you only have to use what you need for the current situation.

What about you?

So there you have it. I’d be interested in finding out how you use OmniFocus or Things, or if you have your own preferred system. There are even people who implement GTD using a biro, a binder clip and a stack of 6x4” index cards.


I quite often favourite tweets that I want to come back and refer to. Unfortunately, I rarely actually get round to going back over my favourite tweets, so I wanted a way to get them into an inbox that I check regularly (à la Getting Things Done).

I finally got round to figuring this out the other day, so here’s my recipe:

  1. You can get an RSS feed of your favourites using a URL of the form, though obviously you should replace “jezcope” with your own Twitter handle.
  2. Once you’ve checked that’s working, copy it and feed it to a daily email digest generator. I’m currently trying blogtrottr which seems to be working well and gives you the option of checking at a range of frequencies from 1 to 24 hours.
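If you want to double-check the feed parses before handing it over, Ruby's bundled `rss` library makes that easy. The XML below is a stand-in for a real favourites feed (I've deliberately not hard-coded the feed URL pattern here); in practice you'd parse the content fetched from your own feed URL instead.

```ruby
require "rss"

# Stand-in for the content of a real favourites feed.
feed_xml = <<~XML
  <?xml version="1.0"?>
  <rss version="2.0">
    <channel>
      <title>Favourite tweets</title>
      <link>http://example.com/</link>
      <description>Tweets I favourited</description>
      <item><title>An interesting link</title></item>
    </channel>
  </rss>
XML

# Second argument false: parse leniently rather than strictly validating
feed = RSS::Parser.parse(feed_xml, false)
titles = feed.items.map(&:title)
```

If `titles` comes back non-empty, the feed is good to go.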

That’s it — pretty simple, huh? You’ll probably get an email containing all of your favourites to start with, and then future emails will contain just the latest favourites.


Here are a few links to things that I mentioned (and maybe a few that I didn’t) in today’s briefing session for University of Bath researchers. Please feel free to leave your own suggestions, or links to your own blog, in the comments at the bottom.

Reading blogs

Once you start following more than two or three blogs, you might find it easier to use a piece of software called a “feed reader” or “news aggregator” (or other similar terms) to do the hard work and collect all the new posts in one place. Here are a few options:

  • Google Reader — web-based (discontinued — see below)
  • FeedDemon — Windows (optional sync with Google Reader)
  • Reeder — Mac, iOS (Google Reader account required)
  • Feedly — Browser plugin, iOS, Android (Google Reader account required)
  • All major web browsers now have some sort of feed reader built in too

Technorati and Google Blog Search are good ways to find something to read.

Update (2 April 2014)

As a few people are coming back here after today’s social media conversation, I’d better mention that Google Reader closed its doors on 1 July 2013, but the other options mentioned above are still good. A couple of other, relatively new, options are:

Ways to blog


The simplest way to start a blog is to use a service (free or paid-for) which handles everything for you. Here are some examples:


If you’re a bit more technical and/or demanding, you may prefer to host your own blog on a server. Here are some examples of software that will help:

Other tips


Finally, the moment you’ve all been waiting for: day 3 of ALT-C 2012!

First up, Professor Mark Stubbs (Head of Learning and Research Technologies at Manchester Metropolitan University) gave an interesting talk on the MMU curriculum redesign. This isn’t my primary interest, but there were some useful nuggets in there about change management. The key lessons they learned from a complete redesign of the undergraduate curriculum in a very short time were:

  1. Engage people; and
  2. Keep it simple.

I particularly liked how they revamped the forms for approving new modules to keep them short, focused and aligned with the desired outcomes of the project (rather than gathering huge amounts of spurious info and getting loads of irrelevant people to sign off). This approach has important lessons for us at Bath as we introduce Data Management Planning to our researchers.

Next up was JISC Head of Innovation Sarah Porter, talking about the ongoing reshaping of JISC in the wake of the HEFCE review.

My second session of the day was James Clay’s “Pilot mentality” symposium. This was based on James’s observation that although “pilot” usually implies something that will be tried out then reported on and scaled up, there seem to be a lot of so-called “pilots” which end up being one-offs. More worryingly, we see the same “pilots” being run across the sector.

I actually ended up writing a whole lot about this session here originally, without feeling like I’d done the topic justice, so I’ve scooped all of that out into its own post, to appear in the near future.

So, onto the final session of the conference, entitled “TEL1 Research: Who needs it?” from the London Knowledge Lab’s Richard Noss. My reaction to this was mixed, I have to say, but overall there were some good points.

Eighty years after the invention of the printing press, it was still only being used to print the Bible, and we’ve been using computers in education for fewer than 50 years, so I agree that we probably don’t have a clue what ed. tech. will eventually end up looking like. We’re very good at using new technology to reproduce existing practices and processes, but it takes a while to realise its true potential.

He also wheeled out the old argument that you have to understand how a technology works to use it effectively. Granted, his example of senior managers in investment banks failing to understand basic statistics is compelling, but I don’t think it’s fully generalisable. After all, people have been making pretty good bread and cheese for centuries without understanding microbiology.

Understanding a technology means we can be more effective (and more subtle) about its use, but I don’t think complete understanding is a requirement for some level of effectiveness: part of being human is being very good at getting by.

I did like his comments about studying extremes of human behaviour to learn about the norm: I find in my work, sometimes, that I’m drawn to techies and luddites!

Anyway, it was quite a thought provoking conference again, the more so because I’m more focused on research technology at the moment and attending helped me cross-fertilise a bit. I’m not sure if I’ll be going again next year: Digital Research is looking very interesting and tends to clash, so we’ll see.

  1. For those not involved in this area, TEL is the acronym for technology-enhanced learning.


It’s October, which means the autumn TV season has started, which means that Strictly Come Dancing is back on for another year, which means it’s time for a flurry of blog posts as I leave my wonderful other half to shout at the TV on weekend evenings.

I’ve decided to have another go at joining in with another MOOC to give me some blog fuel, and this time round it’s Current & Future State of Higher Education 2012.

My last MOOC attempt, IOE12, sort of fizzled out (my participation, not the course itself) as I didn’t really have the time to keep it going. Hopefully I’ll do better this time, but if not I’m sure I’ll learn something anyway.

So, hello fellow MOOCers and watch this space!


It’s been a little while since ALT-C 2012 now, so I thought I’d better write up the rest of my notes. Here’s day 2 in all its glory.

My day started off with James Clay’s workshop entitled “A few of my favourite things” — just an opportunity for gadget lovers to share some of their favourite apps (mostly iPad/iPhone, but a few Androids in there too).

There were a lot of popular apps in there, like the ever-present Evernote and Instagram, but there were a few interesting ones I hadn’t come across, or was able to see in a new light:

Lets you take a photo of a page and semi-automatically straightens it and enhances it so you get a flat, high-contrast version — a scanner in your pocket. Looks like this is abandonware, but instead I discovered Genius Scan, which has many more features.
One for lovers of traditional music: search for info on and dots for a traditional tune by playing a bit of it into your phone.

Next followed an interesting session introducing some tools from projects on the JISC Digital Literacies programme. I particularly liked the digital literacies lens on the SCONUL Seven Pillars of Information Literacy. There’s a lot of (perhaps true but not very helpful) talk going round at the moment about “everyone having a different definition of digital literacy”, so it’s good to see a fairly concise high-level view of what we’re actually talking about on that subject.

As a recovering mathematician, I found Natasa Milic-Frayling’s keynote on network analysis fascinating. Her team at Microsoft Research have developed an Excel plugin, NodeXL, for analysing networks (and obtaining data from social networks to analyse).

She described some interesting work analysing voting patterns of US senators, and correlating connections in social networks with geographic distribution.

Students introduced to NodeXL were able to get straight into playing with network data, and quickly took on board the basic concepts (various ideas of the importance of a network node) without needing to grasp the underlying maths (such as the various equations for centrality).

My last session of the day was from Clive Young of University College London, talking about “blended” roles in e-learning. These are typically people who provide general admin support to lecturers, and are increasingly expected to manage VLE modules and other online elements of courses on behalf of the lecturers.

At UCL, these teaching administrators with blended roles had self-organised into a support network, as they were getting no targeted support on how to use Moodle from the e-learning team. This was, of course, rectified, and in the end 10% of the staff identified in blended roles went on to achieve CMALT status.

All interesting stuff, and I’ll be back to post my thoughts on day 3 soon.


So today was day 1 of ALT-C 2012. Here are a few thoughts from the day.

The conference kicked off with an inspiring keynote from Eric Mazur. Eric is a physicist at Harvard, and when he’s not doing photonics research, he brings the scientific method to bear on his teaching practice.

He gave three examples that were interesting in their own right, but the key takeaway message was this: data is essential to improving teaching practice. Rather than coming up with anecdotes that go “well, my students seem to like it when I blah blah blah”, why not set up a simple experiment to actually test what helps those students learn?

After lunch, Cathy and I did a workshop on using research data for teaching, as part of the Research360 project. I won’t go into too much detail (it did what it said on the tin), other than to say that I felt like it went pretty well — all the attendees got into the exercises and some really productive discussions took place.

Take a look at the session page to see the slides and exercises.

After that, I saw a couple of demonstrations of some cool stuff (NoobLab, curatr), and caught up with a few of the JISC digital literacy projects.

So far, then, another interesting conference. The catering’s been pretty good too. A lot of carbohydrate, though: lunch was served with couscous, chips and boiled potatoes (and bread rolls if you wanted) and dinner was equally carbalicious. Perhaps it’ll help me run faster in the morning.

It’s late. I’m wittering. Bye for now!


On Wednesday 22 August 2012, I gave an invited presentation at the August meeting of Oxford Open Science, hosted at the Oxford e-Research Centre. The theme of the evening was “How do we prepare postgraduate research students for the era of big data?”

There were some interesting presentations around that subject:

  • Juliet Ralph and Oliver Bridle from the Bodleian discussed information seeking behaviour amongst students;
  • Open Knowledge Foundation’s Laura Newman told us about the School of Data, a project to produce learning resources for those working with data;
  • Anna Collins from DSpace Cambridge talked about “long tail in the shadow of big data”.

My own presentation discussed some of the work I’ve done providing social media and data management training for PGRs, and the slides can be viewed here:

As an experiment, the LaTeX source of the slides is also available on github. Let me know if they’re any use.