I have recently returned from one of the Faculty open meetings that we have organised to explain our Vision 2021 strategy to staff, and to take questions on it. One question/comment that arose is that much of the strategic thinking seems to be ‘top-down’ rather than ‘bottom-up’. This is an interesting point. In one sense a strategy needs to be top-down to ensure that the organisation moves forward in a coherent manner – this would be difficult with a multitude of locally driven strategies, and NU is too large and complex to have any hope of deriving a strategy from scratch by incorporating every view. Nonetheless, we are on shaky ground if we are unable to explain effectively what our strategy is, why we need it, and what we need to do to deliver it. This is one reason why we organised the series of Faculty open meetings. Moreover, none of us knows all the answers (especially me!), and the Faculty management team would welcome your thoughts and ideas. Indeed, I am reminded of an excellent book that I read a while ago by James Surowiecki entitled ‘The Wisdom of Crowds’ – I can highly recommend it, and a precis here would not do it justice.
So, how can everyone help? Interestingly, we are looking at a ‘problem’ associated with the formulation of KPIs and targets to ‘measure’ our research performance. If you are unfamiliar with KPIs, I have written about them in previous blogs. Typical measures of research performance widely used in the sector include ‘research income per fte’, ‘value of new awards per fte’ and ‘number of postgrads per fte’. These are all fine KPIs that will help measure the buoyancy of our research but, as we discussed at Faculty Executive Board recently, they are input measures. What we also need are output measures since, to take an extreme example, £1M of research income and 5 PDRAs count for little if no publications result, or if any resulting publications are of questionable quality. Herein lies the problem – in order to measure outputs we need to count the number of publications produced (to take just one example of an output), and to assess their quality. We have recently done just this in preparation for REF2014 through our internal ‘IQA2’ process. Those involved in this process (for which very many thanks, by the way) will know that it takes a great deal of time and energy. I doubt that anyone would willingly do this annually for KPI purposes.
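To make the input/output distinction concrete, here is a minimal sketch of how both kinds of per-fte measure might sit side by side. All field names and figures are invented for illustration – they are not Faculty data, and the ‘high-quality share’ is only a crude stand-in for the kind of quality assessment that IQA2 actually involves:

```python
def per_fte(total, fte):
    """Normalise a raw total by full-time-equivalent staff count."""
    return total / fte if fte else 0.0

# Illustrative unit-level figures (entirely invented for this example)
unit = {
    "fte": 40.0,
    "research_income": 1_000_000,    # input measure (£)
    "new_awards": 750_000,           # input measure (£)
    "postgrads": 60,                 # input measure (headcount)
    "publications": 120,             # output measure (count)
    "publications_high_quality": 45, # output measure (e.g. internally rated)
}

kpis = {
    # Input measures of the kind already common in the sector
    "income_per_fte": per_fte(unit["research_income"], unit["fte"]),
    "awards_per_fte": per_fte(unit["new_awards"], unit["fte"]),
    "postgrads_per_fte": per_fte(unit["postgrads"], unit["fte"]),
    # Output measures: publication volume plus a crude quality proxy
    "publications_per_fte": per_fte(unit["publications"], unit["fte"]),
    "high_quality_share": unit["publications_high_quality"] / unit["publications"],
}

for name, value in kpis.items():
    print(f"{name}: {value:,.2f}")
```

The arithmetic is trivial – the hard part, as the paragraph above notes, is filling in the ‘publications_high_quality’ number without repeating an IQA2-scale assessment every year.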
It will be critically important to measure the state of health of our research endeavours not only in the run-up to REF2014, but also in the years immediately after that exercise, as we lay the foundations for the next one. So, here’s the challenge – what is the simplest, most effective way to measure our research outputs annually for KPI purposes? Please post your answers here and let’s see ‘crowd intelligence’ in action!