Higher education indicators are of limited use, says EUA

University group says too much attention is given to funding and student enrolment

The indicators used to evaluate and rank higher education institutions are of limited use because they are narrow in scope and are applied inconsistently, an analysis by the European University Association has concluded.

“There is a limit to what indicators can tell,” authors Tia Loukkola, Helene Peterbauer and Anna Gover said in the analysis report, which the EUA published on 25 May. “They cannot replace more qualitative or descriptive tools such as qualification frameworks, peer reviews or performance contracts, but need to be used in combination with them.”

Elaborating on some of the problems, the authors said there were no “universally agreed on” indicators to gauge the quality of higher education across systems and countries.

“We need as a sector to move toward having reliable ways of measuring our performance,” Loukkola, the lead author and director of institutional development at the EUA, told Research Professional News.

A set of universal standards for quality may be possible, Loukkola said, but “whether it would be fit for purpose is another matter”. She called for “a proper debate” on the subject.

Some measures are overly simplistic, the authors found. For example, “most rankings still see research excellence as a proxy for overall quality”, even though some rankings are aimed at students, for whom this may be less important.

Among the eight rankings analysed, the authors found that “only a limited number of indicators [were] linked, even tenuously, to the quality of education”. The quality of teaching plays a small role in most national excellence initiatives, they said, citing the UK’s Teaching Excellence Framework as a rare example of teaching being prioritised.

Based on data from 27 surveyed institutions, the analysis also found an outsize focus in national rankings on indicators such as the number of students enrolled and the amount of external funding obtained. Only four institutions said they were ranked based on community relations, while two said that rankings included their student-to-staff ratio.

These figures have held steady since a 2015 survey by the EUA, despite a third of systems planning or implementing “significant changes in their models or indicators”.

The authors cautioned against “recycling” data and indicators “for purposes other than the one for which they were originally intended”, and urged use of “transparent and comparable” information in assessment. They also called for “periodic review and adjustment” of indicators, to make sure they are “fit for purpose”.

In a separate comment article for the EUA, Loukkola wrote that the coronavirus pandemic may have disrupted some well-established quality assurance measures. “Moving the operations online in such an abrupt manner may…have meant that these quality assurance measures were not promptly applied in all occasions due to a lack of time,” she said.