Impact pilot recommends only minor tweaks to HEFCE plans
The pilot exercise to test the Higher Education Funding Council for England's plans to assess economic and social impact in the 2014 Research Excellence Framework has reported back, and on the whole the people who took part—at least those who chaired the evaluation panels—are satisfied that it works.
This will certainly be a relief to HEFCE, which has faced harsh criticism over the past year from academics implacably opposed to impact assessment, though in some cases that ire would have been better directed at the research councils.
The pilot panel chairs did recommend some changes to make the system work better. They want the weighting given to the impact element to be reduced from the planned 25 per cent, at least for the first go-round, until everyone gets used to it. This is a sensible suggestion, which universities and learned societies have also been pushing for, and one that HEFCE is sympathetic to. It will almost certainly be heeded.
The panel chairs also made one suggestion that would make their jobs a bit easier, by shifting some of the burden over to the people assessing the 'research environment' element. They want the overarching "impact statements", which set out a department’s strategic approach to impact and how the institution supports its researchers in achieving impact, to be part of the 'environment' assessment. This makes sense as well; as the report says, "a high quality research environment should underpin high quality research outputs and support impact".
What I find most interesting is the suggestion that only impact arising from high quality research, equivalent to 2* or greater, should be assessed. This will require some way for the universities to show that the submitted impacts are based on good enough research, though the report doesn't say how this should be done, only that "panel members should not be expected to review significant numbers of outputs to assure this".
So if you do some work that really isn't thought of very highly by your peers, but proves to be incredibly useful, don't bother submitting it—it won't get a look in. Or, if you don't do any original research at all, but find a new use for something someone else has done, it won't count. This shows that the REF really is about assessing the quality and impact of a university's research, not its researchers. I wonder what David Willetts, with his ideas about absorptive capacity, thinks of this.
There are also some interesting points about how the criteria and scoring of case studies worked, such as how the two criteria of "significance" and "reach" should interact. Interestingly, the panels felt that there should not be a strict geographical hierarchy for reach. So perhaps you could also get high marks for a large temporal reach.
What do you think? Did the pilot outcomes assuage your fears? Are the recommendations on the money, or way off base? Is the REF moving in the right direction? Let us know in the comments section below.