This article was originally published on The World Bank’s Education for Global Development blog.
Rankings have undoubtedly become a significant part of the tertiary education landscape, both globally and locally.
In this landscape, rankings have proliferated and risen in importance in once unimaginable ways. The business of ranking has become commercialized, and with it the companies and organizations that rank colleges and universities have grown more sophisticated. Rankings now play a major role in shaping the opinions of current and prospective students, parents, employers, and governments about the quality of tertiary education institutions.
This obsession with rankings is, at the same time, a legitimate source of concern about their misuse, especially when they are used solely for promotional purposes or, even worse, when they become the main driver of policy decisions for governments and tertiary education institutions. Nowadays, it is common to see entire government policies and programs apparently more concerned with position in the rankings than with the relevance of their tertiary education institutions. Sometimes this results in diverting significant resources to some institutions while limiting support for others. If rankings become the end rather than the means toward better tertiary education, that should be a matter of concern. The excessive importance that institutional and government decision-makers attach to rankings may be both disturbing and alarming.
Rankings clearly have value as a reference and as a basis for comparison. However, they do not always serve as the best proxy for the quality and relevance of tertiary education institutions. Let’s keep in mind that any ranking is ultimately an arbitrary arrangement of indicators intended to label what the ranker has pre-defined as a “good” educational institution.
Those in favor of rankings, especially rankers themselves, may argue that in the absence of sound and comparable information, rankings are the best option for determining the quality of colleges and universities. However, as the saying goes, the devil is in the details. This pre-defined vision of an ideal institution does not always take significant contextual differences into consideration. It tends to impose a one-sided vision of an institution, mostly a traditional, research-oriented and highly selective university, which is not necessarily the most responsive to the varied needs of the communities where these institutions are located.
Most well-known rankings tend to equate institutional quality with research productivity, measured either by the number and impact of publications in peer-reviewed journals or by the selectivity of admission processes. Such a proxy for quality downgrades institutions that place greater emphasis on teaching and perpetuates the “publish or perish” principle. In the pursuit of a better position in the rankings, internal and external funding will most probably tend to favor academic programs or research units more inclined toward research and publishing. Finally, it diminishes other important functions of tertiary education institutions, such as teaching and public service.
Another dimension of rankings attempts to measure “reputation” by gathering opinions (which are unfortunately not always competent or objective) from employers, field experts, and/or alumni. Quite expectedly, people tend to favor certain institutions, regardless of the quality of their academic programs, simply because of the fame or recognition that precedes them. Meanwhile, institutions and programs that may not have a famous name, but that make meaningful contributions to society by producing the graduates their local and regional economies require, fall by the wayside.
This also means that an institution that is not highly selective and tends to serve students from lower socio-economic and academic backgrounds is likely to be left out of the rankings, even though the “value added” it provides to its students may be proportionally higher than that of a highly selective institution that has already had the chance to attract better-off students.
Similarly, the appropriateness of measuring the reputation of a tertiary education institution by its alumni’s job profile is not exempt from criticism. Jenny Martin, a biology professor at the University of Queensland in Australia, puts it very well: “International rankings are meant to identify the best workplaces, yet none of the rankings evaluate important indicators like job satisfaction, work-life balance, and equal opportunity”.
An alternative approach being explored by a number of tertiary education systems encourages institutions to “benchmark” against peers in a less disruptive and more proactive way than rankings. Benchmarking allows for a meaningful comparison of institutions based on their own needs. It includes some elements already incorporated in rankings, but allows institutions to customize comparisons of their performance vis-à-vis the best, average, or lowest-performing institutions of their type. This approach makes it possible for institutions to define their own niche and reduces the pressure to blindly follow a unilateral definition of a “good institution.”
A good example is the University Governance Screening Card Project that brings together more than 100 tertiary education institutions from seven countries in the Middle East and North Africa (MENA) region. Sponsored by the World Bank and the Centre for Mediterranean Integration, this initiative is aimed at enhancing institutional governance and accountability through capacity building measures based on an evidence-based and inclusive approach.
Participating institutions can benchmark against peers on matters related to governance, quality, and management. A number of them have developed detailed action plans and related capacity-building measures to improve their performance. Similar initiatives are being established in other countries as well, including benchmarking projects in Africa and India.
It would be naive to assume that rankings will lose their importance in the future. However, while recognizing that they are here to stay, we must be aware of their many limitations, their intended and unintended biases, and their convenience-based usage by institutions and even national governments.
Publication does not imply endorsement of views by the World Economic Forum.
Author: Francisco Marmolejo is the World Bank’s Lead Tertiary Education Specialist and Coordinator of its Network of Higher Education Specialists.
Image: People walk into the quadrant of Clare College at Cambridge University in eastern England. REUTERS/Paul Hackett.