Forget submission rates, funding is the issue
by Luke Georghiou
So, it’s all over bar the money. A day spent in an ‘operations room’: first waiting for the download, then the team dissecting the results and letting people know what they got. Not so different from our students hanging on their degree results, though the collective nature of the occasion makes it more like an election night. Who will be singing "Things can only get better" tonight?
While the funding councils would like to think of the RAE as an absolute measure of quality, the reality is that it is all about relative performance. It could hardly be otherwise when the primary purpose is to drive a funding allocation.
There are also many reputational spillovers, as other funders, research students and collaborators see the outcome as a convenient signpost of quality. Foreigners might raise an eyebrow at the claims, which we are likely to see made, that the great majority of UK research is of "international quality". But this is the national Premiership, not the European Champions League, so that does not really matter. What it comes down to are rankings.
Until the funding formula is announced (of which more later), the focus is likely to be on league tables. Since these do not exist officially, it has become the prerogative of the press to fill the gap. Research Fortnight’s online service gets the 4* this time with its extensive analyses and do-it-yourself weightings facility.
An interesting feature has been the positioning and lobbying in the past few weeks as universities try to influence the presentation in ways that they believe will favour them.
Perhaps the most irrelevant argument has been the one about submission rates. Encouraged by some vice-chancellors, some newspapers claim there has been a plot to stop them using HESA data to resurrect a figure that was available in previous RAEs: the proportion of research-active staff submitted. Heated language has suggested that this undermines the results, because profiles can be massaged to omit staff who are likely to lower the ratings.
To some extent this is true, though a price has to be paid in reduced volume (and a lower position in research power tables). The fact is that when such data were available in 2001, the press and most academics largely ignored them. Furthermore, at the level of an institution, comparisons are so unreliable as to be useless.
Without going into the technicalities, there are at least two reasons why such data would tell you little or nothing. Firstly, large numbers of research-only staff who met the criterion of ‘independent researcher’ were submitted. For this group, inclusion or exclusion is largely discretionary, and could easily obscure the numbers of omitted academics.
Secondly, exclusions are largely concentrated in subjects where a proportion of staff are unlikely to perform research, for example because they are professional trainers in vocational subjects, or are clinicians. At an institutional level, the proportion of staff submitted is more likely to reflect the mix of units of assessment than any major policy difference. I would be surprised if the top 20 institutions had very different submission rates. It would be a pity if some misguided attempt to second-guess these data were to distort the presentation of the results.
That leaves the data on profiles and FTEs, provided by the funding councils, as the basis for the league tables.
The most likely basis for the construction of tables is the Grade Point Average (GPA). When the funding councils chose to give the quality categories numeric labels, it was an easy jump to assume that these could be read as a scale and arithmetically averaged. That assumption is no better justified than any other weighting. Past funding formulas have weighted the grades geometrically rather than linearly, which suggests that GPAs almost certainly will not reflect the eventual distribution of funding, so why use them to construct tables?
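To see why the choice of weighting matters, here is a minimal sketch with all profiles and weights invented for illustration (the real QR weights were not known when this was written): the same two quality profiles can swap places depending on whether the numeric labels are read as a linear scale or weighted geometrically.

```python
# A minimal sketch of why the weighting matters. The profiles and the
# geometric weights below are invented; they are not the actual QR weights.

def score(profile, weights):
    """Score a quality profile (percentages at 4*, 3*, 2*, 1*, unclassified)."""
    return sum(w * p for w, p in zip(weights, profile)) / 100

gpa_weights = [4, 3, 2, 1, 0]   # the numeric labels read as a linear scale
geometric   = [7, 3, 1, 0, 0]   # a hypothetical geometric funding weighting

dept_a = [20, 30, 35, 15, 0]    # invented profiles for two departments
dept_b = [30, 15, 30, 25, 0]

print(score(dept_a, gpa_weights), score(dept_b, gpa_weights))  # 2.55 vs 2.5: A ahead
print(score(dept_a, geometric), score(dept_b, geometric))      # 2.65 vs 2.85: B ahead
```

The ranking of the two hypothetical departments reverses between the two weightings, which is exactly why a GPA table may say little about where the money will eventually go.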
The most fundamental divide in the analysis of results is one of scale–the question here is whether it is better to have a high average or a high total amount of quality in a given subject in an institution.
This is the difference between GPAs and research power indicators. Both approaches have their strengths and weaknesses. The difference hinges on the question of critical mass: does it matter how many researchers of quality there are in that subject at a given institution? In theory, the highest GPA could go to an institution with only a handful of researchers who have very little impact in total.
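To make that contrast concrete, here is a hedged sketch in which research power is taken as GPA multiplied by submitted FTE staff, the construction the press tables typically use; both departments and all figures are invented.

```python
# A minimal sketch of the GPA-versus-research-power contrast. Research power
# is taken here as GPA x submitted FTE staff; both departments are invented.

def research_power(gpa, fte):
    return gpa * fte

boutique = {"gpa": 3.2, "fte": 8}    # a handful of excellent researchers
big_dept = {"gpa": 2.6, "fte": 90}   # a large, solid department

print(research_power(**boutique))    # 25.6  -- tops a GPA table, small in total
print(research_power(**big_dept))    # 234.0 -- far more quality in aggregate
```

The boutique department wins any GPA table while contributing a fraction of the aggregate quality; which table you trust depends on which question you are asking.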
The smart reader of league tables will separate specialist institutions from the rest and consider them in the context only of their own subjects.
Research power tells you that there is a concentration of excellence but, equally, that there may be, in the same place, a concentration of the less excellent. Critical mass is important but it can often lie in the ability to configure teams across different disciplines rather than within a disciplinary block. You pays your money and you takes your choice.
What next? More detailed profiles and some stylised feedback in January, but the really important outcome is the announcement of the distribution of QR funds in March. The RAE is an answer to a question that is finalised after the results are known.
The challenge is to meet the funding needs of cash-strapped institutions and still have enough change to reward success and dynamism, but not so much as to destabilise large parts of the sector. A lot will come down to how many policy goals QR is intended to meet.
To remain at the international forefront, the top institutions need serious funding support. The previous skewed distribution of UK research funding has recognised this need and is largely supported by similar distributions being found in competitive grant funding. However, the clear blue water of research power that exists in the upper parts of the table–first between Oxbridge and the rest, then between Manchester plus UCL and the Russell Group pack, and also for the outstanding specialists such as LSE–is offset by a system which potentially rewards excellence found in small quantities virtually all the way down the table. It will be very difficult to design a formula that meets all needs and demands.
Could we see a proliferation of special objective funds to target some QR at institutions that would otherwise suffer losses surely unintended by the funding councils? This would depart from the stated objectives, but it has happened before.
And beyond that? A few months ago, the RAE appeared to be a dinosaur heading for extinction but its intended replacement, the Research Excellence Framework, is proving to be an even less flexible and agile creature whose survival may depend upon it becoming more and more like the exercise it was intended to replace.
Peer review is like democracy–the worst solution except for all the others. So, we may well see the RAE survive under another name. The biggest risk is that squabbles about the relative merits of different assessment systems could prompt politicians to pull the plug on QR–and that would be a tragic ending to the story for UK research.
Luke Georghiou is Professor of Science and Technology Policy and Management in the Manchester Institute of Innovation Research at Manchester Business School.