Wisdom of Crowds

I have recently returned from one of the Faculty open meetings that we have organised to explain our Vision 2021 strategy to staff, and to take questions on it. One question/comment that arose is that much of the strategic thinking seems to be ‘top-down’ rather than ‘bottom-up’. This is an interesting point. In one sense a strategy needs to be top-down to ensure that the organisation moves forward in a coherent manner – this would be difficult with a multitude of locally driven strategies, and NU is too large and complex to have any hope of deriving a strategy from scratch by incorporating every view. Nonetheless, we are on shaky ground if we are unable to explain effectively what our strategy is, why we need it, and what we need to do to deliver it. This is one reason why we organised the series of Faculty open meetings. Moreover, none of us knows all the answers (especially me!), and the faculty management team would welcome your thoughts and ideas. Indeed, I am reminded of an excellent book that I read a while ago by James Surowiecki entitled ‘The Wisdom of Crowds’ – I can highly recommend it, and a précis here would not do it justice.

So, how can everyone help? Interestingly, we are looking at a ‘problem’ associated with the formulation of KPIs and targets to ‘measure’ our research performance. If you are unfamiliar with KPIs, I have written about them in previous blog posts. Typical measures of research performance widely used in the sector include ‘research income per FTE’, ‘value of new awards per FTE’ and ‘number of postgrads per FTE’. These are all fine KPIs which will help measure the buoyancy of our research but, as we discussed at Faculty Executive Board recently, they are input measures. What we also need are output measures since, to take an extreme example, £1M of research income and 5 PDRAs count for little if no publications result, or if any resulting publications are of questionable quality. Herein lies the problem – in order to measure outputs we need to count the number of publications produced (to take just one example of an output) and to assess their quality. We have recently done just this in preparation for REF2014 through our internal ‘IQA2’ process. Those involved in this process (for which very many thanks, by the way) will know that it takes a great deal of time and energy. I doubt that anyone would willingly do this annually for KPI purposes.
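
For concreteness, the per-FTE ‘input’ KPIs mentioned above are just simple ratios. A minimal sketch of how they might be tallied is below; all of the staff and income figures in it are invented purely for illustration, not real Faculty numbers.

```python
# Minimal sketch of the per-FTE "input" KPIs described above.
# All figures are invented purely for illustration.

staff_fte = 142.5             # research-active staff (full-time equivalent), illustrative
research_income = 8_400_000   # research income for the year in pounds, illustrative
new_awards_value = 6_100_000  # value of new awards won in the year, illustrative
postgrads = 310               # registered postgraduate researchers, illustrative

kpis = {
    "research income per FTE (£)": research_income / staff_fte,
    "value of new awards per FTE (£)": new_awards_value / staff_fte,
    "postgrads per FTE": postgrads / staff_fte,
}

for name, value in kpis.items():
    print(f"{name}: {value:,.1f}")
```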

It will be critically important to measure the state of health of our research endeavours not only in the run-up to REF2014, but also in the years immediately after this exercise, as we lay the foundations for the next one. So, here’s the challenge – what is the simplest, most effective way to measure our research outputs annually for KPI purposes? Please post your answers here and let’s see ‘crowd intelligence’ in action!


About stevehomans

Professor Steve Homans is a structural biologist with an international reputation in the study of biomolecular interactions. He obtained his first degree and DPhil in Biochemistry at Oxford University, and secured his first academic position as Lecturer at the University of Dundee. In 1998 he received the Zeneca award from the Biochemical Society and was elected Fellow of the Royal Society of Edinburgh. Prior to his current appointment he was Dean of the Faculty of Biological Sciences at the University of Leeds. Professor Homans brings extensive expertise in academic leadership and management, with a particular emphasis on organisational change.

5 Responses to Wisdom of Crowds

  1. For publications, why not just start with the Australian Research Council’s ranking list from their 2010 ERA? I point my students at it to give them a sense of the “standing” of various computer science publication outlets. We know it’s not perfect, but it’s not bad, and averaged over all the members of a school and the hundreds (or even thousands) of publications produced by a school annually it will give a good year-on-year impression of publication output numbers and quality (a rough sketch of that kind of averaging appears after the comments below). It would not be a big effort to produce once a year. If people are not happy with the ARC rankings of particular publications then individual rankings that people disagree with can be discussed and agreed within a school and the faculty. But the ARC ERA rankings are a comprehensive starting point (settled on as the result of discussions by many academics). Citations work better (and really mean something) of course, but the latency makes them of no use for a KPI.

  2. Bryn Jones says:

    For PGRs, measure the number of doctoral students who qualify promptly per staff FTE rather than numbers recruited or registered.

    • …maybe also (or even preferably) the quality of the venues that PhD students are publishing in. Qualifying promptly is one thing (I can’t remember how that affects us – maybe you could clarify?), but I think where and how much PhD students are publishing is an outcome measure that is more closely related to what we are trying to achieve – that they are producing knowledge of value to the community and disseminating it. Having a good set of publications on completion is also more valuable to the PhD graduate who is looking to move into industrial or academic research.

  3. Jon Warwick says:

    One measurable output from research funding could be further research funding. Individuals / groups / schools that are continually successful are demonstrating the quality of their publications etc. at the proposal stage. If success is breeding success in funding then it’s also a measure of positive external evaluation of outputs.

  4. Nick Polunin says:

    I consider citation-based indices such as H’ to be a valuable, verifiable measure of the ‘usefulness’ of a particular published work. But like all such metrics, this needs to be used with care. For example, some disciplines presumably tend to get more hits than others, so comparisons should not be blandly made across disciplines. There is also the above-stated ‘latency issue’, but the Web of Knowledge and others now have a range of metrics which I believe could be applied to individual papers. At the same time, I am somewhat wary of ranking journals by Impact Factor, which I understand to be the basis used by bodies like the Australian ARC, because this is an artificial market (everyone spending huge amounts of time trying to get into the same small number of journals), and let’s face it, the esteem in which many (though certainly not all) of the papers published in the ‘discovery’ journals are held is not, shall we say, automatically high. At least in theory, it should matter less and less in future where you publish, because with almost everything online it should be relatively easy to track down any paper.
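
As a footnote to the last comment: a citation-based index such as the h-index is straightforward to compute once per-paper citation counts are in hand. A minimal sketch follows; the citation counts in the example are made up.

```python
# Minimal sketch of an h-index calculation from per-paper citation counts.
# The citation counts in the example are invented for illustration.

def h_index(citations):
    """Return the largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, count in enumerate(ranked, start=1):
        if count >= position:
            h = position
        else:
            break
    return h

print(h_index([25, 17, 12, 9, 8, 5, 5, 3, 1, 0]))  # prints 5
```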
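
And, picking up the ARC ERA suggestion from the first comment, the year-on-year averaging it describes could look something like the sketch below. The numeric mapping of ERA ranks to scores and the paper counts are purely illustrative assumptions, not part of the ERA scheme itself.

```python
# Rough sketch of averaging ERA-style venue ranks over a school's annual output.
# The rank-to-score mapping and the paper counts are illustrative assumptions.

RANK_SCORE = {"A*": 4, "A": 3, "B": 2, "C": 1}  # assumed mapping, for illustration only

# (venue rank, number of papers the school published in venues of that rank this year)
school_output = [("A*", 12), ("A", 45), ("B", 80), ("C", 30)]

total_papers = sum(count for _, count in school_output)
mean_score = sum(RANK_SCORE[rank] * count for rank, count in school_output) / total_papers

print(f"{total_papers} papers, mean venue score {mean_score:.2f}")
```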
