Someone asked me the other day whether it was possible to measure the impact of online subscription resources on levels of achievement, and I must admit that I stumbled a bit in trying to reply. I have methods of measuring the success (or otherwise) of marketing strategies based on increased usage, but how can we prove whether online collections such as Infotrac, the EBSCO resources, Issues Online, etc. are helping our pupils achieve and maybe even increasing their grades?
My first thought is: why do we assess online resources in a different manner to print resources? They are effectively the same thing and aimed at the same task; the only discernible difference is that the online resources can seem to be “hidden” away, almost invisible or non-existent, and have a different access method.
In libraries it is possible to identify “top readers”, i.e. those who have a high borrowing rate and are regulars in the library. In many cases this will be the same studious pupil who gets excellent grades and is up there with the highest achievers. Knowing how much they have read whilst in school leads to the conclusion that it must have contributed to their achievement, even if the degree to which it has done so cannot be accurately measured. Now, how can we find the “top readers” of our online resources? Assessing impact in this manner is tricky and time-consuming unless an authentication service such as Athens or Shibboleth is utilised, both of which can provide usage statistics per individual. There is no way of identifying the individual with IP authentication or referring URL authentication, so other techniques must be adopted.
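If your authentication service will let you export a per-user usage report, ranking pupils by their use of the online resources then only takes a few lines of scripting. Here is a rough sketch in Python; the file name and column names are invented purely for illustration, so you would need to adapt it to whatever report you can actually export:

```python
import csv
from collections import Counter

# Rank pupils by how often they have accessed the online resources,
# assuming the authentication service (e.g. Athens or Shibboleth) can
# export a per-user usage report as CSV. The file name and column
# names here are invented for illustration only.
usage = Counter()
with open("usage_report.csv", newline="") as f:
    for row in csv.DictReader(f):
        usage[row["username"]] += int(row["accesses"])

# The ten heaviest users of the online resources -- our "top online readers".
for username, accesses in usage.most_common(10):
    print(f"{username}: {accesses} accesses")
```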
But what else is there? If using referring URL authentication, the link to the resource must be embedded elsewhere, behind another form of identification provided by the school. You could also do this for those resources which are IP authenticated. So, although I haven’t yet tried it, it should be possible to at least identify which pupils regularly visit the page containing the links to your online resources and, from this, identify your top online readers. A bit convoluted, I know, but it should still be possible.
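Again, I haven’t tried this yet, but if your intranet records which pupil is logged in when a page is requested, something as simple as the following Python sketch could count visits to the page that holds the links. The log format and page path here are made up for illustration, so treat it as a starting point rather than a recipe:

```python
import re
from collections import Counter

# Count how often each pupil visits the intranet page that holds the
# links to the online resources. Assumes the intranet (or VLE) writes
# an access log containing the authenticated username and the requested
# path -- the log format and page path below are invented for illustration.
LOG_LINE = re.compile(r"^(?P<user>\S+) .* GET (?P<path>\S+)")
RESOURCES_PAGE = "/library/online-resources"

visits = Counter()
with open("intranet_access.log") as f:
    for line in f:
        m = LOG_LINE.match(line)
        if m and m.group("path").startswith(RESOURCES_PAGE):
            visits[m.group("user")] += 1

# The pupils who visit the online resources page most often.
for user, count in visits.most_common(10):
    print(f"{user}: {count} visits")
```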
I’m sure there are many other techniques out there for measuring the impact of subscription resources on levels of achievement. Pupil questionnaires, focus groups, observations and teacher feedback all spring to mind. Any other suggestions?
Back in October I blogged about how we are using QR codes to encourage the use of our online subscription resources and promised that I would blog about other strategies that we are using to promote these resources. Well, as you’ve probably noticed, I haven’t! That’s mainly because, up until now, promoting these resources has been low on my list of priorities. Silly really, considering how much they cost, but it’s easy just to hope that they’ll take off once they have been added to the school intranet and promoted via a single email to teaching staff. That does work to some degree, as the statistics I get from the publishers show, but the resources are not really embedded across the school and there’s a lot more to do.
So, as you may already be aware from a previous post, we have iPads available in our libraries and they are proving extremely popular. We’ve bought a considerable number of apps for them, covering all subject areas of the school. These apps are arranged in folders by subject area, which got me thinking: the majority of our online resources can also be categorised in this manner, so why not add them alongside the apps on the iPads?
Therefore, this half-term our ICT Support team has done just that: added a separate link to each of our online subscription resources on the iPads. All I need to do now is test them to see whether they work (remember, anything Flash-based won’t) and monitor the usage statistics to see whether they increase.
So how will I measure the success of this strategy? At the end of the academic year I’ll plot a graph of the usage statistics per resource and mark today’s date (along with other key dates, e.g. INSET days) to see whether there is a noticeable difference in usage. Obviously this isn’t a scientific method and other factors will be involved, but since we have no other way of monitoring it, this seems the best approach. Let’s see what happens.
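For what it’s worth, the graph itself needn’t be drawn by hand. Here is a rough Python/matplotlib sketch of the sort of plot I have in mind, with monthly usage per resource and vertical lines for the key dates; the figures are made up purely to show the shape of it and would be replaced with the statistics the publishers supply:

```python
import matplotlib.pyplot as plt

# Plot monthly usage per resource and mark key dates (e.g. when the iPad
# links went live, INSET days) as vertical lines. All numbers below are
# invented for illustration -- swap in the publishers' real statistics.
months = ["Sep", "Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Apr", "May", "Jun"]
usage = {
    "Infotrac": [12, 15, 14, 9, 11, 18, 25, 22, 27, 30],
    "EBSCO": [8, 10, 9, 7, 6, 12, 16, 15, 19, 21],
}
key_dates = {"iPad links added": 5, "INSET": 7}  # positions in the months list

for resource, counts in usage.items():
    plt.plot(months, counts, marker="o", label=resource)

for label, idx in key_dates.items():
    plt.axvline(idx, linestyle="--", color="grey")
    plt.text(idx, plt.ylim()[1] * 0.95, label, rotation=90, va="top")

plt.ylabel("Sessions per month")
plt.title("Online resource usage")
plt.legend()
plt.show()
```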