Star Quality

In SAgE we are about to begin the next internal quality exercise (IQA) to update the REF returnability of academic staff. At my previous institution, similar exercises for the RAE were always tortuous, complex, and caused great angst for a number of staff. Given the rules of REF, it is difficult to see how we can avoid these difficulties. Nonetheless, we must take this exercise very seriously. For those unfamiliar with the process, we are in effect endeavouring to balance two outcomes – the best possible position in the ubiquitous league tables, and the optimum financial return to the institution. The latter is a very measurable quantity known as ‘QR’ income, an annual block grant awarded to the institution to support research activity – SAgE received ~£11M of QR income this year, so it is clearly critically important for us. The former is more difficult to measure, but there is adequate evidence to suggest that league table position affects the ability to attract high-quality students and to win research grants. Moreover, to know that one’s institution is in, say, the ‘top 10’ is a significant boost to morale.

So why the angst? In these difficult financial times, only the best research gets funded, so it is pointless submitting work that is below par. But the bar is very high – as part of the REF process, each eligible member of academic staff needs to submit four ‘outputs’ over the ‘REF period’, and each of these is given a star rating – 4*, 3*, 2* and 1*. Importantly, only 4* and 3* work will attract any funding, and 4* activity is worth three times as much as 3*. Thus, as a faculty (and indeed institution) we need to judge whether a piece of work (typically a research paper or peer-reviewed conference proceeding) is 4* or 3* and thus worthy of inclusion. Now here’s the rub – how do we measure quality across such a broad spectrum of activity? In these times of sound bites and limited time for anything, there is a tendency to use quantitative metrics. These might include ‘journal impact factor’ or ‘number of citations’. The danger of using such measures in isolation is so obvious that it need not be stated.
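
To make the weighting concrete, here is a minimal sketch (in Python) of how one might tally the fundable weight of a set of outputs under the 3:1 rule described above. The function name and the example star profile are hypothetical, and the actual QR allocation involves factors beyond the scope of this post.

    # A minimal sketch, not the official QR formula: only 4* and 3* outputs
    # attract funding, with 4* weighted three times as heavily as 3*.
    WEIGHTS = {4: 3.0, 3: 1.0, 2: 0.0, 1: 0.0}  # star rating -> relative funding weight

    def fundable_weight(star_ratings):
        """Total relative funding weight of a list of output star ratings."""
        return sum(WEIGHTS[s] for s in star_ratings)

    # Hypothetical example: one researcher's four submitted outputs
    print(fundable_weight([4, 3, 3, 2]))  # 3.0 + 1.0 + 1.0 + 0.0 = 5.0

On this view a single 4* output contributes as much as three 3* outputs, which is why the borderline judgement between 4* and 3* carries so much financial weight.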

The SAgE Faculty Executive Board has discussed how to measure quality for the next exercise. The consensus view is not particularly imaginative – rather than using quantitative metrics as the primary tool, we feel that the panel criteria published in January (which were not available for our first IQA) will be a much more rigorous guide. For example, summarising some of the criteria for main panel B (where the majority of our work will likely be returned), one can find the following statements (page 46 of the REF 01.2012 document):

4*: world-leading in terms of originality, significance and rigour –

  • research that is leading or at the forefront of the research area
  • great novelty in developing new thinking, new techniques or novel results
  • developing new paradigms or fundamental new concepts for research

3*: internationally excellent in terms of originality…falls short of the highest standards of excellence –

  • makes important contributions to the field at an international standard
  • contributes important knowledge…likely to have a lasting influence, but not necessarily leading to fundamental new concepts

2*: recognised internationally in terms of originality, significance and rigour –

  • provides useful knowledge and influences the field
  • involves incremental advances, which might include new knowledge that conforms with existing ideas and paradigms…

With these definitions, I suggest it should be straightforward to assess star quality in the majority of cases. At the risk of sounding glib, I would characterise them as ‘paradigm shifting’, ‘influential’ and ‘derivative’ respectively. However, quantitative metrics might have a secondary role in cases where there is some doubt. Let me give an example – imagine a journal article that was published in 2008 and that is judged to be 4* – is it conceivable that such an article would have received no citations?

As a final comment, note that the definition of 2* does not give the impression of poor quality work. Indeed, it is recognised internationally. I suspect we have all regularly published 2* work, and the fact that this does not qualify for REF submission is a sign of the very high standards to which we must aspire in these challenging times.
