University Rankings Don’t Measure What Matters

Sioux McKenna

The methodology underpinning most ranking systems would be unlikely to pass as a third-year student’s research project.

International rankings of universities are big business and big news. These systems order universities on the basis of a variety of criteria, such as student-to-staff ratio, income from industry, and reputation as captured through public surveys.

Universities around the world use their rankings as marketing material, and parents and prospective students make life choices on the basis of them.

But the methodology underpinning the Quacquarelli Symonds and Times Higher Education ranking systems, and others like them, would be unlikely to pass as a third-year student’s research project. And yet high-status universities around the world spend time and money competing in this extravaganza rather than pointing out that the Emperor is wearing no clothes.

Why would they when the rankings reinforce their position as institutions of choice for those who can afford their fees?

As a researcher of higher education, I find it worrying that we’re held captive by these glitzy spectacles.

Imagine if a student indicated that their research project would be to develop a ranking of all universities. They would allocate 20% to whether current students and the general public thought the university was prestigious, 5% to the number of Nobel Prize winners on the institution’s staff, 30% to the number of research publications, and so on. Any academic advisor would throw the proposal out.

Some of these criteria are subjective, the weightings are arbitrary, important aspects of many universities are missing, and the averaging of unrelated measures into a single final number is simply poor science that tells us little about the institution.

And yet this is exactly how rankings are determined.

The ranking method doesn’t add up

The methods behind the international university ranking systems vary, but the underpinning methodology is identical: convert proxy measures of a few academic activities into numeric metrics, add these together, and produce a ranking of institutions.

The criteria may be entirely unrelated to each other, or may be poor proxies of the academic activity being measured. Reputation surveys and student throughput, for example, probably tell you more about how wealthy, and therefore selective, a university is than anything about the quality of its teaching.

Furthermore, the weighting of each criterion is almost entirely arbitrary. If publications and citations are worth 20% and web visibility is worth 10%, you will get one order of institutions. Change this to 25% and 5%, and the entire list rearranges itself.
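To see how fragile this makes the results, consider a toy version of the calculation. The sketch below (in Python, with invented institutions, metrics and scores; only the 20%/10% versus 25%/5% weightings echo the example above) ranks three universities by a weighted sum of two measures, then changes nothing but the weights:

# Toy weighted-sum ranking. The institutions and scores below are
# invented for illustration; no real data is used.
scores = {
    "University A": {"publications": 90, "web_visibility": 40},
    "University B": {"publications": 70, "web_visibility": 95},
    "University C": {"publications": 80, "web_visibility": 70},
}

def rank(weights):
    # An institution's total is the weighted sum of its metric scores.
    totals = {name: sum(weights[m] * v for m, v in metrics.items())
              for name, metrics in scores.items()}
    # Highest total first, exactly as league tables are printed.
    return sorted(totals, key=totals.get, reverse=True)

print(rank({"publications": 0.20, "web_visibility": 0.10}))
# ['University B', 'University C', 'University A']
print(rank({"publications": 0.25, "web_visibility": 0.05}))
# ['University A', 'University C', 'University B']

The same three institutions, scored on the same data, end up in completely different orders: nothing about the universities changed, only the arbitrary weights.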

Many of the criteria could be seen as descriptive rather than evaluative. High pass rates and student-to-staff ratios are separate criteria in many systems, yet most academics would argue that they measure related issues.

Privilege begets privilege too, so universities that are the most prestigious and charge the highest fees can be the most selective in student enrolment and staff recruitment. Because higher education is not a meritocracy but largely reinforces social stratifications, these universities will then celebrate their inevitable success in getting students to complete their studies and graduate.

Many ranking systems lean heavily on reputational surveys, which ask employers, graduates, and the general public to indicate which institution is the best. But this becomes circular: a strong reputation leads to a strong reputation. This benefits well-known universities and neither reflects nor benefits teaching and learning, research, community engagement, or any other academic activity.

When less wealthy universities attempt to compete, the cost can be a drain on resources that could be spent on activities more attuned to their context.

Many universities in South Africa are now chasing these rankings, even while admitting that they’re problematic. But participation is neither innocent nor harmless.

It’s not innocent because universities know the average citizen believes that these rankings say something about quality. This then influences choices as to where to study and work and whom to employ. Instead of using their research skills and academic integrity to pull these games apart, they expend a great deal of energy trying to improve their position.

Increasing research output in the global South is essential. If the rankings drive this process, well and good. But it’s harmful if the focus on publications and postgraduates comes at the expense of important factors that aren’t used in rankings.

What counts

Ranking systems don’t concern themselves with whether the university takes its community engagement responsibilities seriously. They don’t consider who students are and where they want to go. Ranking systems care only for the market – and higher education is a very large market.

Academics can and should speak out against these neocolonial processes that position all universities as striving to be identical and competing for market share.

But it’s difficult for universities to opt out of these games. The university where I work refuses to engage with ranking organisations. And yet it’s still included in these systems, as they draw on publicly available data.

Despite the university having among the highest undergraduate success and publication rates in South Africa, its lack of medicine and engineering programmes works against it. So too does its strong focus on community engagement and its small size – though these might be exactly why the university is a good fit for many.

The dodgy methodology is stacked against the institution but, far more problematically, it’s stacked against most of the purposes set out for higher education in South Africa’s 1997 white paper.

Nowhere do these systems concern themselves with transformation, social justice or the public good.

The Conversation

Sioux McKenna, Director of the Centre for Postgraduate Studies, Rhodes University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image: Reuters


