Code of practice needed to prevent degree-course mis-selling
The National Student Survey does not provide the valid, reliable data needed to compare higher education institutions. Such differences as it reveals are statistically and practically insignificant. Yet the media use the data to compile league tables of best-performing higher education institutions, and the tables in turn are being misused by institutions that should know better, says John Holmwood.
The Browne Review recommended a withdrawal of public funding and its replacement by differential fees paid by student ‘consumers’. The overall direction of higher education would be determined by their choices in a market for higher education. A modified version of this approach has been adopted by the Government in their proposed reforms, albeit with a cap on the fees that can be charged—a cap that some universities are already anticipating will be removed in the future. As the Browne Review emphasised, the functioning of a market in higher education depends on the information available to inform choices.[i]
A number of reports commissioned by the Higher Education Funding Council for England have identified the information that students would find most useful.[ii] This includes, in order of priority: the proportion of students at an institution who are satisfied with their course, employment outcomes (salary was less important, although it may be presumed that this will be different under the new fee regime and repayment proposals), professional recognition of the course, as well as information about such other costs as halls of residence, courses, contact hours, and so on.
Prospective students, their parents and prospective employers may regard this information as useful, but they do not necessarily then seek it out. If they do, they glean it from newspaper league tables such as those provided by the Times Higher Education and the Guardian, and from institutions.
There is concern that information from institutions is presented partially and ‘spun’ to local advantage. HEFCE is currently consulting about a Key Information Set of about 16 items. To overcome the problem of misrepresentation or partial representation, it proposes that the KIS be provided on course websites, though universities should be allowed to provide ‘contextual’ information alongside.
Clearly, certain information, such as cost of halls of residence, information about contact hours, seminar sizes, and bursaries should be straightforward—though even here there are serious problems.
However, information about student satisfaction with different aspects of their programmes of study and their universities is much more problematic, as is information about employment and incomes.
The obvious questions must be: Is the information readily available? And is the information HEFCE intends to use valid and reliable? Given that students are seeking to make comparisons across courses and institutions, any data must be able to bear the weight of comparison. This is fundamental both to the efficacy of the market reforms the Government is seeking to introduce and to the very standards of evidence and argument that universities seek to embody.
Most of the information on student satisfaction has been gathered through the National Student Survey. As HEFCE acknowledges, the gathering of income data about graduates from specific courses has not yet begun, nor been tested. Although it is much more difficult to gather data from graduates than from third-year undergraduates (as is done with the NSS), most of the issues with the validity and reliability of the data when used to make comparisons will be the same.
Evaluations of the NSS are unequivocal. According to the Report for HEFCE on Enhancing and Developing the Student Survey, “The design of the NSS means that there are limitations on its use for comparative purposes ... In particular, its validity in comparing results from different subject areas is very restricted, as is its use in drawing conclusions about different aspects of the student experience. One issue to be borne in mind is that, in most cases, the differences between whole institutions are so small as to be statistically and practically insignificant” (Enhancing and Developing the National Student Survey: Executive Summary, point 7).
Cheng and Marsh reach a similar conclusion: “at the university level, there are relatively few universities that differ significantly from the mean across all universities and, at the course level, there is even a smaller portion of differences that are statistically significant. This suggests the inappropriateness of these ratings for the construction of league tables” (page 708).[iii] In other words, differences between students' mean ratings of courses are in most cases smaller than the differences that would arise from random variation in individual students' assessments of the same course, given the number of students assessing each course.
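The statistical point can be made with a rough back-of-envelope calculation. The figures below are purely illustrative assumptions, not NSS data: a satisfaction rating on a 1–5 scale with a standard deviation of about 0.8, and around 50 respondents per course.

```python
import math

def ci_halfwidth(sd, n, z=1.96):
    """Approximate 95% confidence half-width for a mean rating,
    using the normal approximation: z * sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

# Hypothetical round numbers, chosen only for illustration.
sd, n = 0.8, 50
hw = ci_halfwidth(sd, n)
print(f"95% CI half-width for one course mean: +/- {hw:.2f} on a 1-5 scale")

# For the gap between two independent course means, the sampling noise is
# roughly sqrt(2) times larger, so a difference smaller than this is
# indistinguishable from random variation.
print(f"Smallest clearly meaningful gap between two courses: ~{math.sqrt(2) * hw:.2f}")
```

On these assumed figures the half-width comes to roughly 0.22 points, so two courses whose mean ratings differ by, say, 0.1 on a five-point scale cannot be reliably ranked against each other, which is precisely why rank-order league tables built from such means are unsafe.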
The Report for HEFCE exposes a paradox that it does not properly address. Initially, the NSS was expected to have little impact on quality enhancement within universities; this, it was believed, was better addressed by internal instruments. However, the inappropriate use of the NSS to construct league tables has given rise to internal measures to improve scores and league-table position, notwithstanding the fact that position within the league table does not reflect those measures, and that the stability of the scores across years indicates they are unlikely to improve rank-order position.
The National Union of Students strongly supports the NSS, and there are good reasons for it to do so. It is likely that universities have paid more attention to teaching as a consequence of its introduction. With students paying increased fees, this will be of increased importance. However, the NSS does not provide information to guide student choices. The Report to HEFCE on Information Needs of Users found that it was students considering applications to the most selective institutions who were most active in seeking out information. Since these are the universities likely to seek to charge premium fees, it is deeply worrying that they should be among those that most misuse the NSS.
For example, the Russell Group announces on its website that, “Russell Group universities have once again provided their students with an overwhelmingly positive experience, with an average of 86 per cent ‘definitely’ or ‘mostly’ agreeing that they were satisfied with the quality of their course – significantly higher than the sector average of 82 per cent”.[iv] The degree of satisfaction is not significantly higher for Russell Group members, neither in the statistical sense nor in standard usage. The main conclusion from the NSS is that nearly all British universities provide a positive experience for students, with very little difference among them.
Any market requires the regulation of ‘mis-selling’. To its credit, HEFCE seems aware of the issue, although it treads gingerly and is ‘toothless’ in its recommendation that “comparisons can be made (with appropriate vigilance) between responses about the same subject area in different institutions, but it is not valid to compare different subject areas within institutions or to construct league tables of institutions.” It offers no suggestions as to how appropriate vigilance is to be maintained, and leaves it to individual institutions to provide their own contextual information, which current experience suggests is frequently in breach of appropriate standards.
There is a clear public interest in proper standards for the presentation of information to prospective students. The changes to higher education funding are of such far-reaching importance that the presentation of information should be subject to scrutiny by the UK Statistics Authority. A first step might be for Universities UK (and the separate university mission groups, such as the Russell Group, the 1994 Group, and Million+) to agree a Code of Practice among their members not to use statements of rank-order position in claims about their own institutions and courses. It is a matter of shame for universities that this is necessary in the presentation of evidence, appropriate standards for which are intrinsic to their raison d’être.
[i] I do not address the deeper philosophical problems raised for the idea of university education by organising it around the figure of the student as consumer. See Stefan Collini, ‘Browne’s Gamble’, London Review of Books.
[ii] For example, ‘Understanding the information needs of users of public information about higher education’ Report to HEFCE by Oakleigh Consulting and Staffordshire University, August 2010.
[iii] Jacqueline H. S. Cheng and Herbert W. Marsh (2010) 'National Student Survey: are differences between universities and courses reliable and meaningful?', Oxford Review of Education, 36(6), 693-712.
[iv] The Russell Group takes a similarly cavalier approach to presenting income data that purports to show a 10 per cent income ‘premium’ for attending a Russell Group university. The paper cited is based on the 1995 student cohort, has a response rate of around 27 per cent, and is subject to similar problems to the NSS concerning the differential distribution of subjects (with different returns to their graduates) across universities.