As we approach the REF2013 census date, it’s enlightening to look at how other European countries are approaching the problem of ‘measuring’ the excellence of their research base. The UK has for many years relied on the dual-funding system, where a proportion of the funds that we spend on research derive from the so-called ‘QR’ stream. In effect this is a block grant system, where HEFCE allocates public funds according to the number and quality of staff returned in the REF exercise. These funds are awarded to universities annually, typically using the same or a similar funding model each year, until the next research exercise. This is why universities are so concerned about the outcome of the REF – the block funding you receive will be apportioned according to your overall REF score for a number of years following the REF, which amounts to a considerable sum of money. The second public income stream is of course the money that is distributed to the research councils, which is awarded by funding panels (composed of peer-group academics) in response to research grant applications. Alas, success rates are not very high – typically 20% or so.
I was interested to read how Germany proposes to assess research quality. As noted in the THE this week, the ‘Forschungsrating’ will not be tied to funding at all. Apparently the German Council of Science and Humanities did not believe it sensible to identify institutions that historically have been relatively strong and to put the lion’s share of money into those. On the face of it, this sounds like the ‘funding excellence wherever it is found’ mantra prior to RAE2008. The view of the German Council is that research is very much a question of luck and diversity, so funding on an historic basis does not make sense to them.
Instead, the expectation is that university leaders will use the ratings to better assess the standing of research in each area and to develop strategies accordingly, without the ‘pain’ or ‘gain’ of REF. Interestingly, the parameters used to assess universities and institutes included research quality, promotion of young researchers and transfer of research into society. The latter is presumably akin to the ‘impact’ scorings that we will receive post REF2013, although these are seen by Germany as leading to uncertainty and increased workloads. The German system will develop detailed criteria for the activities that are relevant in specific subject areas.
Would this system work better for the UK? – maybe. Interestingly, however, the League of European Research Universities apparently views the UK’s success in increasing research quality through RAE/REF with envious eyes. Many countries, it seems, would like to move to a system similar to ours, but they aren’t there yet. However, as one who bears the scars of all seven RAE/REF exercises (including many hundreds of hours of meetings), I can’t help thinking that the bureaucratic load, and thus the cost, of REF is becoming out of all proportion to its original aims. What would be wrong, for example, with passing all funding to the Research Councils, and distributing it using the conventional peer review process? Sure, it would make planning more difficult because we would not be guaranteed a block grant every year, but the turmoil of the last couple of years has hardly been helpful in that regard. Nonetheless, we have managed to adapt and survive.