Rankings: Methodology, momentum and geopolitics


Lists are reassuring. They are, by their nature, ordered and thereby able to convey a sense of authority. The authority of the world university rankings is bolstered by blanket media coverage, attention to methodological refinements, and the subsequent incorporation of the results into institutional marketing.

The media coverage gives an undue sense of seismic shifts in the jostling for position between institutions. Current students at the California Institute of Technology might think themselves luckier than the unfortunate cohort who graduated in 2010-11, because they now attend a better university than Harvard.

The more thoughtful contributions to the rankings websites focus on methodology and request clarifications. The three big lists – the Times Higher World University Rankings, the QS World University Rankings and the Shanghai Rankings – expend varying levels of effort in explaining how they are assembled. But do the answers provide more reassurance and make a persuasive case for objectivity and neutrality? It depends on the context: the wider the frame of reference, the less reassurance.

The Times Higher is the most assiduous in spelling out its approach and in responding to critics. It lists 13 criteria in its scoring scheme, including research citations (30% of the score), research reputation (18%), teaching reputation (15%), research income (6%), income from industry (2.5%), income per academic (2.25%), research output per academic (6%), staff-to-student ratio – a proxy for teaching quality – (4.5%) and the ratio of international staff and students to domestic (5%). The two reputational, or ‘perceived prestige’, scores, based on 17,500 survey returns from academics worldwide and compiled by Thomson Reuters, make up one-third of the total.

The QS methodology is simpler and less detailed. Six criteria are indicated, each ‘weighted to an appropriate level’. These are: academic reputation (40% of the score), reputation with employers (10%), faculty-student ratio (20%), citations per faculty (20%), proportion of international faculty members (5%) and proportion of international students (5%). Academic reputation is based on returns from some 33,000 academics and employer reputation on returns from 16,000 employers.
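
For readers curious about the mechanics, the arithmetic behind both the Times Higher and QS scores is essentially a weighted sum of normalised indicator scores. The sketch below uses the QS weights quoted above; the indicator scores for the fictitious ‘University X’ are invented purely for illustration, and the real exercises normalise and standardise their data in ways not reproduced here.

```python
# Minimal sketch of how a weighted composite ranking score is assembled.
# The QS weights are those quoted in the article; the indicator scores for
# the hypothetical "University X" are invented for illustration only.

QS_WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

# Hypothetical indicator scores, each assumed already normalised to 0-100.
university_x = {
    "academic_reputation": 82.0,
    "employer_reputation": 75.0,
    "faculty_student_ratio": 68.0,
    "citations_per_faculty": 90.0,
    "international_faculty": 55.0,
    "international_students": 60.0,
}

def composite_score(scores, weights):
    """Weighted sum of normalised indicator scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 100%"
    return sum(weights[k] * scores[k] for k in weights)

print(f"Composite score: {composite_score(university_x, QS_WEIGHTS):.1f}")
# Prints approximately: Composite score: 77.7
```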

The Shanghai ranking – now published by ShanghaiRanking Consultancy and also known as the Academic Ranking of World Universities – released its results in August. It opts for what is most easily quantified: scientific research output. Its indicators seem truly idiosyncratic: the number of staff and alumni who have won Nobel Prizes and Fields Medals (20% for staff, 10% for alumni); the number of ‘highly cited’ researchers, in 21 disciplines, selected by Thomson Scientific (20%); the number of articles published in Nature and Science (20%); the number of articles indexed in the Science Citation Index and Social Sciences Citation Index (20%); and per capita performance relative to the size of the institution (10%). More than 1,000 universities are ranked by Shanghai – more than the number of Nobel Laureates since 1901 – and the best 500 are listed.

In an apparent nod to ‘balance’, for institutions that specialise in humanities and social sciences, the Nature and Science scores are disregarded and the weighting reallocated to other indicators. Teaching is ignored.
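
How that reallocation is performed is not spelled out. One plausible reading, sketched below, is that the 20% Nature and Science weighting is simply redistributed across the remaining indicators in proportion to their existing weights; the weights are those quoted above, but the proportional redistribution is an assumption rather than ShanghaiRanking’s documented procedure.

```python
# Sketch of the Shanghai (ARWU) weighting and of one plausible way the
# Nature/Science weight might be reallocated for humanities- and social-
# science-focused institutions. The proportional redistribution below is
# an assumption, not ARWU's documented procedure.

ARWU_WEIGHTS = {
    "alumni_nobel_fields": 0.10,
    "staff_nobel_fields": 0.20,
    "highly_cited_researchers": 0.20,
    "nature_science_papers": 0.20,
    "sci_ssci_papers": 0.20,
    "per_capita_performance": 0.10,
}

def reallocate(weights, dropped):
    """Drop one indicator and rescale the rest so the weights again sum to 1."""
    remaining = {k: w for k, w in weights.items() if k != dropped}
    total = sum(remaining.values())
    return {k: w / total for k, w in remaining.items()}

humanities_weights = reallocate(ARWU_WEIGHTS, "nature_science_papers")
for indicator, weight in humanities_weights.items():
    print(f"{indicator:26s} {weight:.3f}")
# Under this assumption the 20% indicators rise to 0.25, the 10% indicators
# to 0.125, and the total remains 1.0.
```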

One assumption that underpins these exercises – made explicit in the QS rankings, which are addressed directly to prospective students – is that students use rankings to select their universities. It seems a reasonable assumption, but no information is offered on how important the rankings are, or to which students. A useful steer can be had, though, from 2010-11 data from i-graduate’s International Student Barometer. It shows that 15% of students in the UK (from a sample of 70,500) said that rankings helped them choose a university – well below other influences such as friends, the institution’s website, parents, a visit to the institution and teachers. As to how important, 11 other factors outranked the rankings. The most important was teaching quality, followed by cost.

The Economist speculated last year that self-financed (or parent-financed) students ‘seem to mind more about league-table rankings than those who receive state support’. Again, this is plausible but we are not aware of the data.

The focus on methodology might lead one to conclude that the Times Higher list carries greater authority. It is more painstaking in its approach and uses financial indicators. But QS employs a larger pool for the reputational portion and is the only one to consider post-graduation employability.

But in the world inhabited by university senior management, the methodology hardly matters. When there are many to choose from in the global marketplace, methodology takes a distant second place to running with the one that portrays the most favourable result. This year, the best list for Caltech is the Times Higher. The best for Cambridge and Lausanne is QS. The best list for Harvard and Geneva is Shanghai.

Looking at it through the national lens, the most advantageous list for Australia and Japan is QS. Canadians should prefer the Times Higher: it is the only list in which their country has more entries in the top 50 than it does in the others. (There are also the many domestic lists; in the US, the National Science Foundation and the Best Colleges Guide from US News might carry more resonance than the big three international lists, though the process has its critics there too.)

This is all dubious. One of the most questionable – and eagerly grasped – aspects of the rankings is the extrapolation of their institution-based methodologies to country league tables. Every year the UK is said to have the second-best HE system in the world, and this year the Irish press lament a crisis in higher education indicated by diminished standings. Such considerations maintain the interest, demand and momentum.

Philip Altbach at Boston College made a crucial point earlier this year when he noted that rankings ‘presume a non-existent zero-sum game’. When an Asian university moves up in the top 100, another will have to move down because there can only be 100 in the top 100. He argued, ‘As fewer American and British universities will inevitably appear in the top 100 in the future, this does not mean that their universities are in decline. Instead, improvement is taking place elsewhere. This is a cause for celebration and not hand-wringing.’

After last year’s wholesale methodology changes, the Times Higher advises that this year’s exercise involves just a few further tweaks to the indicators: for research income, the proportion of that income originating in the public sector was dropped in favour of a new measure of international research collaboration. The former, it said, lacked internationally comparable data.

On its own terms this works, and next year at least one of the lists will again be new and improved. But the overall exercise takes no account of vast regions of the world. The rankings industry has not yet responded to the (occasional) demands that it consider its impact on political power within countries and on the broader geopolitical landscape.

A hand-wringing article in The Hindu last weekend pointed out that the number of Nobel Prizes won by academics based in India is zero. In a widely re-tweeted piece on the THE website, Adam Habib, deputy vice-chancellor at the University of Johannesburg, wrote that the results are followed with bemusement and that he hoped African universities would resist allowing league tables to ‘determine their strategic decision-making’.

He provided a number of specific examples to show how the reality of his university is not captured by the rankings: almost half its teaching staff teach diploma students; citation indicators disfavour small national systems; and a focus on the proportion of PhD candidates disadvantages universities in the developing world that train professionals to first-degree and master’s level.

He concluded that if comparisons are not ‘coupled to contextual specificity’, they become odious. He said the most benign consequence of rankings would be increasing uniformity across the global higher education system. The most dangerous would be the ‘derailment of the development agenda and the continued reproduction of poverty, inequality and marginalisation in the developing world’.

Numerous international conferences are built around the obligation of universities to respond to cross-border issues. Many top universities that benefit from the rankings also engage in international activities with at least partly altruistic goals, such as capacity-building through multilateral partnerships with counterparts in developing countries. But the geopolitical influence of rankings pulls in the other direction.

THE argues that the ‘usefulness of rankings outweighs the concerns’. That is easier to argue in the developed world. Elsewhere, governments may be more inclined to spend precious resources in an effort to ‘catch up’ with the best. This is beneficial at least in the sense that it is a better use of a nation’s wealth than weapons of mass destruction. But the terms of engagement have been set by others, elsewhere.

The excellent GlobalHigherEd blog addresses the broader issues of power in its sceptical postings on rankings. It argues the need for, inter alia, an independent oversight body to regulate the industry. It suggests that the International Observatory on Academic Ranking and Excellence (IREG), established to monitor the quality of rankings, is compromised by the presence of ranking-industry representatives on its executive. Although IREG has adopted audit rules for its mandate, it might also be added that the longer-term global implications touched on here do not figure in its terms of reference.

Voices urging caution are few and scattered, and the rankings backlash appears still to be a long way off. It will remain distant until such time as the rhetoric about the role of the academy in addressing global issues becomes more of a reality. This is the less reassuring side of these lists.

WL