It’s not that long since I became a dedicated citizen of cyberspace. A bit like an apathetic voter, I’d dabbled for years because I thought I should, but I was always wary, partly due to privacy concerns, partly because I’m shockingly busy with work and family, and partly because I had no time for the drivel that seemed to occupy 99.99% of the interweb.

That’s all changed. Facebook and Twitter are increasingly essential tools for an academic who wants to have any kind of impact on the outside world (I understand, even sympathise with, those who don’t – they often have very good, public reasons for being introspective). Then come one’s departmental profile, a profile on academia.edu, a personal blog and website. All of these have to be kept updated regularly for them to be of any use, so even a basic online presence can suck up an enormous amount of time.

But now, I’m discovering there are online profiles of me that I knew nothing about. Not only that, but because they are built on automatic data gathering they are sometimes wildly inaccurate and sometimes easily manipulable, and the time spent monitoring, massaging and correcting them could spin out of control.

I came across my Google Scholar profile completely by accident after clicking on a link that took me to the profile page of another John Parkinson, a psychologist at Bangor University. That wasn’t such a surprise – I use Google Scholar sometimes (although my heart will always lie with the Web of Knowledge, with fond memories of trawling through the original, heavy print volumes of the Social Sciences Citation Index). It is fairly easy to update and control what is public and what is not.

What was a bigger and less pleasant surprise was Microsoft’s effort, a new ‘service’ still in its Beta phase called Microsoft Academic Search. I’ll put to one side the monstrous inaccuracies on that site – no, I am not at the University of York any more and I do not publish on the inner workings of the amygdala. What’s more troubling is its default view, which lists academics ranked according to their ‘h-index’, a highly controversial and easily manipulable measure of how widely cited you are. It is controversial because, among other reasons, authorship and citation practices vary widely between disciplines, yet its single measure gives promotions committees the illusion of being able to compare apples with oranges. It is manipulable because it encourages academics to form ‘citation circles’, artificially bumping up one another’s citations in order to make themselves look better.
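
For readers unfamiliar with the metric: an author’s h-index is the largest number h such that h of their papers have each been cited at least h times. Here is a minimal sketch of the calculation (in Python; the citation counts are made up for illustration), which also shows how little a ‘citation circle’ needs to do to move the number:

```python
# Minimal sketch: compute an h-index from a list of per-paper citation counts.
# The counts below are invented for illustration only.
def h_index(citations):
    # Rank papers by citations (highest first); h is the largest rank at which
    # the paper in that position still has at least that many citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

papers = [25, 18, 9, 6, 5, 4, 1]
print(h_index(papers))               # 5: five papers with at least 5 citations each

# A citation circle only needs to nudge the borderline papers upwards:
papers_after_circle = [25, 18, 9, 7, 6, 6, 1]
print(h_index(papers_after_circle))  # 6: a few well-placed citations, a better-looking h
```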

It’s a classic case of an evaluation technique that sounds good in principle, but which only works so long as those who are being evaluated do not know that it’s being used. As soon as a group learns it is being measured – and academics are an inquisitive bunch – its members will try to ‘game’ the system, which means they start engaging in artificial activities designed to boost their scores instead of concentrating on what they were supposed to be doing. In psychology there is a similar idea, the Hawthorne Effect: people change their behaviour simply because they know they are being observed.

So, not only do I have to keep up with all my own profiles, I now have to keep up with automatically-generated ones; and what’s worse, I will be increasingly expected to keep tabs on my fellow academics’ profiles too, keeping up with Jones et al.

I hope I’ll find some time to think, write and teach in amongst all the updating.

But maybe I won’t correct the Microsoft profile – those amygdala papers garner an awful lot of citations…

Further reading

For a comparison of the three main citation-tracking services, four scholars at Yale Medical School published a 2006 paper, available online via open access.

For some stories on the joys of the h-Index, check out:

On evaluation, the Hawthorne Effect and ‘gaming’, try:
