Despite the criticisms associated with university rankings, they have evolved over time and become a phenomenon that attracts more and more attention every year from a diverse range of groups – whether that be governments, employers, university authorities, or young people keen to choose the university that can best provide them with the most value for their future professional development.
Higher Education – Origin of University Rankings
The high interest in nationally and internationally ranked lists of Higher Education Institutions (HEIs), ordered by teaching quality and other criteria, reflects the internationalisation of higher education. However, as Felipe Martínez Rizo (2011)1 notes in his review of the history of university rankings, the idea of classifying institutions based on the perception of their quality was established in 1888, as he quotes from Webster (1986).
At the time, psychologist James McKeen Cattell made such a proposal (supported by the institutional affiliation of leading scientists), and as early as 1910 the Commission of the U.S. Bureau of Education began publishing an annual report of statistical data for the purpose of classifying institutions, as reported in the literature review by Salmi & Saroyan (2007)2.
Recent History of University Rankings
A more recent history of university rankings shows that such listings were published by influential economic media based in the United States from the early 1980s, and in the United Kingdom from the early 1990s. Among these media entities were the US News & World Report, the Financial Times, The Economist, and the Wall Street Journal.
During the “early infancy” of these listings, the focus was primarily on attracting the attention of governments, ministries of education, and university authorities. They offered useful information when it came to applying for funding, grants, and other resources, or for academic purposes to improve strategic plans for teaching and research. Nowadays, the internet and globalisation have made it possible for such information to be massively distributed and made use of at different levels of society.
Prospective students, their parents, and employers are currently the main consumers of information related to university rankings.
The phenomenon of university rankings has been the subject of ongoing research since the late 1980s and has generated heated debate in governments. In 2004, this led to great interest in convening groups of experts3 to promote more reliable ways of measuring university performance while at the same time encouraging quality teaching. The motivation was to relieve some of the pressure that Higher Education Institutions (HEIs) experience from being forced to maintain an image of competitiveness, which the rankings may reflect unfairly4.
Internationalisation of Higher Education Institutions Rankings
At the beginning of the 21st century, the internationalisation of university rankings became apparent in 2003 with the first globalised publication of The Academic Ranking of World Universities (ARWU) by Shanghai Jiao Tong University.
Subsequently, other rankings publications have also stood out worldwide, such as those published by: QS World University Rankings Portfolio (since 2004); THE World University Ranking (since 2004); Webometrics Ranking of World Universities (since 2004); UniRank University Ranking (since 2005); SCImago Institutions Ranking (since 2009); RUR World University Ranking (since 2010); URAP-University Ranking by Academic Performance (since 2010); The Global Employability Ranking and Survey (since 2011); CWUR-World University Ranking (since 2012); U-Multirank Top Performing Universities (since 2014); and, among others, the most striking version from Reuters: The World’s 100 Most Innovative Universities (since 2015).
If you want to know more about the 13 most popular organisations that produce World University Rankings, simply follow this link.
Criticism of University Rankings
Throughout their existence, rankings have been subject to deep criticism5, which has forced them to develop and evolve.
On the one hand, many of the studies have focused on the shortcomings of the methodologies used by the companies or publishers that compile the rankings in order to make their selections in the listings:
- The assessment criteria for considering the quality of universities6.
- The weighting given to each criterion, which has historically been heavily influenced by the impact of articles7 published in indexed scientific journals and by patent citations.
- The algorithmic formulae underpinning the benchmarking, among others.
- The source of the data for the analyses.
Other scientific assessments have focused on the unfairness of comparing different types of universities as if they all shared the same functions and educational offerings in the first place. In other words, small and large, public and private universities8, with more or less research activity9, have all been evaluated alongside those that are more focused on teaching or entrepreneurship.
After taking these criticisms of university rankings on board, it would be fair to say that analysing Higher Education Institutions within their particular sector would be more appropriate than creating false competition between different types of education providers.
Another recurring criticism of most university rankings, especially prior to 2000, is that they excluded any measurement of the quality of teaching, valuing it merely as an addition to research. This mistake effectively made it seem as though having good researchers necessarily meant that a university’s professors also gave high-quality classes and that its facilities were excellent, when in fact there may be no connection between these factors.
According to the report of the Association of European Universities (2011)10, the existing indicators on teaching at that time were at best indirectly related to measuring the quality of teaching. Therefore, they would not be the most useful to reflect a comparison of this critical aspect of a university’s quality.
Thanks to the pointed criticisms expressed by scientists and numerous institutions, the agencies and bodies that produce university rankings have adapted parts of their methodology. Some have updated their selection of ranking indicators, as well as their algorithms, to reduce bias and the potential to mislead, with the aim of offering more reliable and useful ranking categories. These allow readers to use an interactive tool to compare universities according to whichever parameters interest them – one such example is U-Multirank, launched on the initiative of the European Commission in 2014. Another example is the CWTS Leiden Ranking, although this publisher looks exclusively at elements related to research activity.
In this context, it is also worth highlighting the research efforts carried out by the INORMS research group in response to the problems they have detected with regard to the methodology, validity, and real significance of university rankings11. They synthesised feedback from various community discussion panels open to academics, research-support professionals, and related groups. From this, they developed 20 principles of best practice concerning the governance of rankers (such as the declaration of financial conflicts of interest), transparency (of aims, methods, and data), measuring what matters (in line with a university’s mission), and rigour (whether the indicators are a good proxy for what they claim to measure).
The researchers’ findings showed that the best-known and most influential rankings lack rigour, do not always measure what is relevant, and lack transparency. Perhaps the least problematic feature of the agencies that produce university ranking listings is that they are well managed. Even so, none of the ranking publishers achieved a favourable score of 70%.
In any case, as the renowned researcher and emeritus professor Ellen Hazelkorn of the Dublin Institute of Technology lamented in her 2018 article12:
The fact that the rankings are methodologically inadequate, their indicators not meaningful enough and their data unreliable, has not stopped governments and universities around the world from using and adopting them.
Emeritus professor Ellen Hazelkorn
The publication of global rankings still has a long way to go to take the criticisms of university rankings into account and to implement unified criteria for appropriate indicators – for example, to measure the quality of teaching13 or to give more appropriate and proportional weighting to the criteria used in comparisons, among the other issues already mentioned.
Recent Developments in University Ranking Lists
The methodology for calculating the rankings is being continually updated, and new ranking publications have been created that allow universities to be analysed by subject area, and even consider their research activities in isolation, combined with such important factors as Teaching, Impact on Society, or Employability14.
New types of ranking publishers have emerged over time that offer assessment criteria such as the visibility and impact of a university’s online content. This does not directly reflect a university’s quality of teaching, but it does reflect the visibility of the university as a brand and its efforts in digital communication to disseminate its content.
Currently, there are university rankings publishers such as Webometrics that incorporate factors into their algorithms that are related to an institution’s digital presence, the impact of the quality of their online content, and their connections with other networks of external institutions. Along the same lines of online presence, UniRank University Ranking is another ranking publisher that provides information on the visibility of universities on social networks.
More recently, in 2020, a website was developed that allows quick and easy access to more than 110 university rankings from different publishers. By visiting Universityguru.com it is possible to consult data on more than 11,000 universities worldwide, reviews from alumni, and many other kinds of useful information related to Higher Education Institutions.
1. Felipe Martínez Rizo (2011) Los rankings de universidades: una visión crítica, Revista de la educación superior, vol. 40, no. 157. http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S0185-27602011000100004
2. Salmi, J., and Saroyan, A. (2007). League tables as policy instruments: Uses and misuses. Higher Education Management and Policy, Vol. 19, No. 2, pp. 31-68.
3. International Expert Group Created to Improve Higher Education Rankings: https://web.archive.org/web/20070227175156/http://ed.sjtu.edu.cn/rank/file/IREG.pdf see also: http://ireg-observatory.org/en_old/about-us
4. European Commission welcomes launch of new international university ranking U-Multirank: https://ec.europa.eu/commission/presscorner/detail/en/IP_14_548
5. Felipe Martínez Rizo (2011) Los rankings de universidades: una visión crítica, Revista de la educación superior, vol. 40, no. 157. http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S0185-27602011000100004
6. Tia Loukkola, Helene Peterbauer, Anna Gover (May 2020) “Exploring higher education indicators”, European University Association. https://eua.eu/downloads/publications/indicators%20report.pdf
7. “Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings”, Yves Gingras (2016) Bibliometrics and Research Evaluation, History and Foundations of Information Science: https://mitpress.mit.edu/books/bibliometrics-and-research-evaluation
8. The Carnegie Classification of Institutions of Higher Education was originally published in 1973 and has been updated in 1976, 1987, 1994, 2000, 2005, 2010, 2015 and 2018. https://carnegieclassifications.iu.edu/downloads/CCIHE2018-FactsFigures.pdf See also the 2018 update, Facts & Figures of the Carnegie Classification: https://carnegieclassifications.iu.edu/definitions.php
9. Bustos-González, A. (2019). Transition from teaching university to research university: A problem of academic information, taxonomies or university rankings? Profesional De La Información, 28(4). https://doi.org/10.3145/epi.2019.jul.22
10. Global university rankings and their impact, European University Association Report (2011): https://eua.eu/downloads/publications/global%20university%20rankings%20and%20their%20impact.pdf
11. Lizzie Gadd and Richard Holmes (2020), Rethinking the rankings: https://lizziegadd.wordpress.com/2020/10/16/rethinking-the-rankings/
12. Ellen Hazelkorn (2018) Reshaping the world order of higher education: the role and impact of rankings on national and global systems, Policy Reviews in Higher Education, 2:1, 4-31, https://doi.org/10.1080/23322969.2018.1424562
13. Lucy Strang, Julie Bélanger, Catriona Manville and Catherine Meads (2016) Review of the research literature on defining and demonstrating quality teaching and impact in higher education. Higher Education Academy. https://bit.ly/3paLdD2
14. Ranking based solely on the perspective of international employers: https://emerging.fr/#/geurs2020