Exquisite Life

December 18, 2008

Forget submission rates, funding is the issue

by Luke Georghiou

So, it’s all over bar the money. A day spent in an ‘operations room’, first waiting for the download, then the team dissecting the results and letting people know what they got–not so different from our students hanging on their degree results, though the collective nature of the occasion makes it more like an election night. Who, then, will be singing "Things can only get better" tonight?

While the funding councils would like to think of the RAE as an absolute measure of quality, the reality is that it is all about relative performance. It could hardly be otherwise when the primary purpose is to drive a funding allocation.

There are also many reputational spillovers, as other funders, research students and collaborators treat the outcome as a convenient signpost of quality. Foreigners might raise an eyebrow at the claims, which we are likely to see made, that the great majority of UK research is of "international quality". But this is the national Premiership, not the European Champions League, so that does not really matter. What it comes down to is rankings.

Until the funding formula is announced (of which more later), the focus is likely to be on league tables. Since these do not exist officially, it has become the prerogative of the press to fill the gap. Research Fortnight’s online service gets the 4* this time with its extensive analyses and do-it-yourself weightings facility.

An interesting feature has been the positioning and lobbying in the past few weeks as universities try to influence the presentation in ways that they believe will favour them.

Perhaps the most irrelevant argument has been the one about submission rates. Encouraged by some vice-chancellors, some newspapers have claimed a plot to prevent them from using HESA data to resurrect a figure available in previous RAEs: the proportion of research-active staff submitted. Heated language has suggested that this undermines the results, because profiles can be massaged to omit staff who are likely to lower the ratings.

To some extent this is true, though a price has to be paid in reduced volume (and a lower position in research power tables). The fact is that when such data were available in 2001, the press and most academics largely ignored them. Furthermore, at the level of an institution, comparisons are so unreliable as to be useless.

Without going into the technicalities, there are at least two reasons why such data would tell you little or nothing. Firstly, large numbers of research-only staff who met the criterion of ‘independent researcher’ were submitted. For this group, inclusion or exclusion is largely discretionary, and could easily obscure the numbers of omitted academics.

Secondly, exclusions are largely concentrated in subjects where a proportion of staff are unlikely to perform research, for example because they are professional trainers in vocational subjects, or are clinicians. At an institutional level, the proportion of staff submitted is more likely to reflect the mix of units of assessment than major policy differences. I would be surprised if the top 20 institutions had very different submission rates. It would be a pity if some misguided attempt to second-guess these data were to distort the presentation of results.

That leaves the data on profiles and FTEs, provided by the funding councils, as the basis for the league tables.

The most likely basis for the construction of tables is the Grade Point Average (GPA). When the funding councils chose to give the categories numeric labels, it was an easy jump to assume that these could be read as a scale and arithmetically averaged. That assumption is no more justified than any other weighting would be. The previous history of geometric distributions between grades suggests that GPAs almost certainly will not reflect the eventual distribution of funding, so why use them to construct tables?
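To make the point concrete, here is a minimal sketch in Python, with invented quality profiles and a purely hypothetical 7:3:1:0:0 weighting (the actual funding weights are not yet known), showing how two submissions with identical GPAs can attract quite different funding scores:

```python
# Illustrative only. An RAE quality profile gives the percentage of a
# submission judged at each level: 4*, 3*, 2*, 1* and unclassified.
# The 7:3:1:0:0 funding weights below are hypothetical, not announced ones.

def gpa(profile):
    """Linear GPA: read the star labels as a 4..0 scale and average."""
    return sum(stars * share for stars, share in zip((4, 3, 2, 1, 0), profile)) / 100

def funding_score(profile, weights=(7, 3, 1, 0, 0)):
    """Score under a steeply weighted, geometric-style funding formula."""
    return sum(w * share for w, share in zip(weights, profile)) / 100

a = (20, 40, 30, 10, 0)  # % at 4*, 3*, 2*, 1*, unclassified
b = (30, 20, 40, 10, 0)

print(gpa(a), gpa(b))                      # 2.7 and 2.7: tied on GPA
print(funding_score(a), funding_score(b))  # 2.9 versus 3.1: b's 4* share wins
```

Under any sufficiently steep weighting the share at 4* dominates, which is exactly why a linear GPA is a poor predictor of the funding outcome.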

The most fundamental divide in the analysis of results is one of scale–the question here is whether it is better to have a high average or a high total amount of quality in a given subject in an institution.

This is the difference between GPAs and research power indicators. Both approaches have their strengths and weaknesses. The difference hinges on the question of critical mass. Does it matter how many researchers of quality are in that subject in a given institution? In theory, the highest GPA could go to an institution with only a handful of researchers that have very little impact in total.
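As a further hedged illustration, again with invented figures and taking research power in one common construction as average quality multiplied by the volume submitted, the contrast looks like this:

```python
# Invented figures: a small, uniformly excellent unit versus a large unit
# with a lower average but far more quality in total. Research power is
# taken here as GPA multiplied by submitted full-time-equivalent staff.

def gpa(profile):
    return sum(stars * share for stars, share in zip((4, 3, 2, 1, 0), profile)) / 100

units = {
    "small specialist": ((50, 40, 10, 0, 0), 6),    # quality profile, FTE
    "large department": ((25, 35, 30, 10, 0), 80),
}

for name, (profile, fte) in units.items():
    print(f"{name}: GPA {gpa(profile):.2f}, power {gpa(profile) * fte:.1f}")
# small specialist: GPA 3.40, power 20.4  -- tops the GPA table
# large department: GPA 2.75, power 220.0 -- dominates on research power
```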

The smart reader of league tables will separate specialist institutions from the rest and consider them in the context only of their own subjects.

Research power tells you that there is a concentration of excellence but, equally, that there may be, in the same place, a concentration of the less excellent. Critical mass is important but it can often lie in the ability to configure teams across different disciplines rather than within a disciplinary block. You pays your money and you takes your choice.

What next? More detailed profiles and some stylised feedback in January, but the really important outcome is the announcement of the distribution of QR funds in March. The RAE is an answer to a question that is finalised after the results are known.

The challenge is to meet the funding needs of cash-strapped institutions and still have enough change to reward success and dynamism, but not so much as to destabilise large parts of the sector. A lot will come down to how many policy goals QR is intended to meet.

To remain at the international forefront, the top institutions need serious funding support. The skewed distribution of UK research funding has long recognised this need, and is broadly corroborated by similar distributions in competitive grant funding. However, the clear blue water of research power that exists in the upper parts of the table–first between Oxbridge and the rest, then between Manchester plus UCL and the Russell Group pack, and also for the outstanding specialists such as LSE–is offset by a system which potentially rewards excellence found in small quantities virtually all the way down the table. It will be very difficult to design a formula which meets all needs and demands.

Could we see a proliferation of special objective funds to target some QR at institutions that would otherwise suffer losses surely unintended by the funding councils? This would depart from the stated objectives, but it has happened before.

And beyond that? A few months ago, the RAE appeared to be a dinosaur heading for extinction but its intended replacement, the Research Excellence Framework, is proving to be an even less flexible and agile creature whose survival may depend upon it becoming more and more like the exercise it was intended to replace.

Peer review is like democracy–the worst solution except for all the others. So, we may well see the RAE survive under another name. The biggest risk is that squabbles about the relative merits of different assessment systems could prompt politicians to pull the plug on QR–and that would be a tragic ending to the story for UK research.

Luke Georghiou is Professor of Science and Technology Policy and Management in the Manchester Institute of Innovation Research at Manchester Business School.

October 10, 2007

Another small step in the right direction

by Luke Georghiou

During David Sainsbury’s long tenure as the minister responsible for science and innovation a number of landmark reports were published reviewing the state of science, technology and innovation policy, notably the Innovation Report, the Lambert Report and the Ten-year Investment Framework for Science and Technology. However, there was always a sense that these exercises were carried out in isolation from one another despite the multi-ministry endorsements that they carried.

In this post-ministerial review, Sainsbury has aimed to achieve a synthesis of policy, particularly in the areas where its predecessors overlapped and sometimes muddied the waters. The result contains no surprises, but subtly shifts the emphasis still further towards innovation and knowledge transfer as the justification and driving force behind government support for science. Nonetheless, all stakeholders will be relieved at the headline recommendation that the increase for basic science funding, foreseen in the Ten Year Framework, should continue, along with (regrettably, in diminishing order of probability) more money for the Technology Strategy Board and a call for other government departments to increase their performance and improve the quality of policymaking by investing more in R&D.

This is a report that aims to be well grounded in the latest thinking on innovation policy—a chapter is devoted to describing the UK’s innovation ecosystem and there is extensive analysis of the reasons for the UK’s apparently low R&D intensity, including the high service component in the economy. The issue of innovation in services is much discussed, though no new ideas are offered beyond an endorsement of the current approach in the Department for Innovation, Universities and Skills that was stimulated by NESTA’s Hidden Innovation report [RF 20/06/07 p16].

Sainsbury has been keen to avoid the accusations of ‘linear model’ thinking that were attached to the early part of his ministerial tenure, when innovation policy seemed synonymous with the promotion of spin-off companies. That is not to say this issue has gone away, but the treatment here is more mature: it focuses on hi-tech clusters around universities and complex funding arrangements, identifies insufficient proof-of-concept funds as the main deficiency in the supply of venture capital, and recommends that regional development agencies establish such funds.

Knowledge transfer is another focus, with research councils set to come under still more pressure to improve performance in this area and to extend KT partnerships substantially. Sainsbury has struggled to find a way of recognising that different types of universities have different roles to play in KT and, in particular, to find a role for those that are less research-intensive. However, the result is obscured by an unfortunate attempt at a new nomenclature. This assumes that if a university is not “research intensive” then it is “business-facing” and, presumably, vice versa—an assumption contradicted by the evidence on links with business.

The first group are defined as focusing on “curiosity-driven research, teaching and KT”, and the second on “the equally important economic mission of professional teaching, user-driven research, and problem solving with local and regional companies”. Reference to “regional universities” would have given a clearer signal and perhaps opened the way to more innovative funding models for this kind of work than the adjustments to HEIF funding that are proposed.

A clear winner in this report is the Technology Strategy Board, which, apart from a ringing endorsement of its present activities such as Innovation Platforms, is put forward for a broader leadership role in defragmenting innovation support. This fragmentation is the cumulative result of the large number of micro-initiatives in this domain over the past decade, with ministers, it must be said, as key culprits in their search for positive announcements to make.

The targets for the new joined-up approach are research councils, RDAs and government departments. One area of fragmentation needing early attention will be the effect of the recent division of the late Department of Trade and Industry! An interesting new role for the TSB is as the repository for information about the competitive strategies of industries—the closest government has come to coordinating a business sector in three decades.

This report brings demand-side innovation policy fully into the mainstream, using public procurement and regulation to stimulate innovation rather than stifle it. As well as rolling these measures into the Innovation Platforms—a process already under way—the report urges government departments to adopt innovative procurement practices. The challenge is not one of knowing what should be done but rather of doing it.

A more focused proposal is a fundamental restructuring of the UK’s Small Business Research Initiative, which requires government departments to spend 2.5 per cent of their R&D budgets on small businesses. In practice this spending has tended to disappear into consultancy rather than promote innovative S&T solutions to policy problems. The TSB is once again proposed as the agent of this transformation. A negative suggestion is the exclusion of the social sciences and humanities from qualifying, at a time when other parts of the review clearly recognise the need for interdisciplinary approaches to innovation in a knowledge- and service-based economy.

In sum, this is a thoughtful and comprehensive document but, not surprisingly from the man responsible for the present set of policies and institutions, the movement forward is incremental rather than radical. The steady rise of innovation and KT as drivers was already under way, but the research community has yet to absorb its full implications.

Luke Georghiou is professor of science and technology policy and management at the University of Manchester