Metrics of Madness

The obsession with academic rankings, which originated as a marketing strategy of US News & World Report, has evolved into a rigid framework that pressures institutions and faculty to conform to predefined metrics. These parameters, often mere surrogates for educational quality, drive administrators and consultants to focus on gaming the system, compromising the academic and ethical integrity of educational institutions.
By Dr Amitav Banerjee

The 2024 National Institutional Ranking Framework (NIRF) results have delivered some bewildering surprises. In stark contrast to 2023, several ‘wild horses’—previously underestimated institutions—have surged past the ‘trusted thoroughbreds’ of academia! Indeed, this year’s NIRF rankings have proven to be as thrilling and unpredictable as the Indian Derby, the premier horse racing event of the country, held annually at Mumbai’s Mahalaxmi Race Course. One wonders if there were any ‘punters’ behind these surprising upsets—or perhaps not?

The most striking feature of the 2024 rankings was the strong performance of private institutions, which outran many of their public counterparts. Some even ranked ahead of renowned institutions like the Indian Institutes of Science Education and Research (IISERs) and the National Institutes of Technology (NITs). Among the 29 private universities whose rankings changed, 22 (76%) improved, while only 7 (24%) fell. In contrast, of the 70 public institutions, 43 (61%) declined and only 27 (39%) improved. Private institutions, it seems, are sprinting ahead. Astonishingly, even the highly regarded Armed Forces Medical College (AFMC), Pune, was outpaced by some private medical colleges.

The perplexing results of the NIRF raise some critical questions about the methodology behind the rankings. How reliable are these rankings in evaluating institutional quality? Can objective metrics truly capture the nuances of student-teacher relationships, doctor-patient interactions (for medical colleges), or the health of a workplace environment? Can they accurately measure research outputs and balance them against teaching? After all, like living entities, each institution has its own strengths and weaknesses, and it is this diversity that gives them their unique identity.

In terms of research, meaningful studies that require time, dedication, and patience—and have the potential to bring about paradigm shifts—are relatively rare. In contrast, some faculty members churn out “letters to the editor” and short communications with little substance, creating a false sense of productivity. The advent of Artificial Intelligence (AI) tools such as ChatGPT has exacerbated the problem by making it even easier for such authors to generate content. Unfortunately, under the NIRF’s current metrics, all publications—whether groundbreaking original research or superficial letters—carry equal weight. This paints an imbalanced picture of research output and skews the rankings.

Academic rankings started as a publicity stunt by US News & World Report in the 1980s. Over time, they have solidified into global standards, forcing institutions into a rigid framework that values metrics over meaningful education and research.

Public institutions of repute, such as AFMC, IISERs, and Indian Institutes of Technology (IITs), pride themselves on conducting rigorous, high-quality research that takes time, patience, and perseverance. Their emphasis on producing solid and pioneering work is commendable. However, this commitment to quality comes at a cost in the rankings—a situation that, while unfortunate, highlights the flaws in the current system of evaluation.

In contrast, many private institutions seem to prioritise quantity over quality. These institutions often ‘game the metrics’ by pressuring their faculty to publish prolifically, regardless of the research’s substantive value. Some even go as far as hiring consultants to “mentor” faculty members on how to inflate their citations through questionable practices, such as forming citation rings. While these practices may not be overtly unethical, they gradually erode the academic integrity and research robustness of both the faculty and the institution as a whole.

The Origins of University Rankings
The concept of university rankings has its roots in a marketing initiative. As Cathy O’Neil explains in her book Weapons of Math Destruction, university rankings began around 40 years ago when the periodical US News & World Report found itself in dire need of a strategy to boost its circulation. In 1983, the magazine launched a project to evaluate 1,800 colleges and universities across the United States, ranking them according to ‘excellence.’ This was intended as a tool to help prospective students and their families make informed choices. However, for the magazine, it was more a means of competing with its more successful rivals, Time and Newsweek.

The project turned out to be a successful marketing strategy, both for the magazine and for the institutions that ranked highly. But how were these rankings formulated? In the early stages, the rankings relied solely on the opinions of university presidents, gathered through surveys. In this initial assessment, Stanford University emerged at the top, while Amherst College was ranked the best liberal arts college. Although the rankings proved popular with readers, they also generated considerable discontent among college administrators, who questioned the fairness of the methodology. In response, the magazine defended its rankings by pointing to the ‘data.’

As academic rankings emphasise research quantity over quality, medical institutions turn to pharmaceutical industry sponsorships to meet publishing quotas. This dependency erodes faculty autonomy and scientific integrity, turning education into a numbers game.

As the rankings evolved, so did the methods of data collection. Editors began to search for measurable factors—this led to the creation of various ranking models. While the outputs of these models appeared objective, the inputs were highly subjective and based on arbitrary decisions about what mattered most in education. The process, though seemingly data-driven, lacked scientific rigour and statistical validity. The editors were forced to rely on surrogate measures and hunches, which further undermined the credibility of the rankings. Ultimately, the system measured things that couldn’t truly be quantified, leaving the rankings open to manipulation and criticism.

The Illusion of Objectivity in Educational Metrics
Educational excellence is elusive, and attempting to capture it through subjective inputs that generate supposedly “hard data” only creates a deceptive sense of objectivity. These metrics often silence critics by presenting a façade of irrefutable data. However, if we step back and apply common sense, we see the limitations of this approach. Can we truly quantify the impact of spending four or five years at a college on a single student? Even that individual may only realise the significance of their educational experience years later. How can we, then, assign an immediate score to something so personal?

It is absurd to believe we can measure the learning experiences of millions of students with any degree of precision. A college becomes an “alma mater”—Latin for “nourishing mother”—and just as one cannot rate one’s mother through metrics, neither can we adequately capture the emotional bonds formed between students, peers, and teachers. Critical elements such as learning, friendships, happiness, confidence, and the overall experience of college life are deeply personal and unquantifiable. These elements vary greatly among students and teachers, creating a rich diversity that makes the college experience memorable and worthwhile.

In their quest for rankings, the proponents of these systems have ignored the depth and diversity of the learning-teaching experience. Instead, they have relied on surrogate measures presumed to be linked to institutional success—measures such as Scholastic Assessment Test (SAT) scores, student-teacher ratios, graduation rates, the number of publications per faculty member, and the proportion of alumni who donate. These inputs, fed into an algorithm, produce the bulk of the rankings. The remainder of the scores is derived from subjective opinions of officials and academics, sometimes even from those based abroad.
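
To see how such an algorithm typically works, here is a minimal sketch of a weighted-sum composite score. The metric names, weights, and figures below are hypothetical illustrations chosen for this example, not the actual parameters used by US News or the NIRF.

```python
# Toy illustration of a composite ranking score.
# All metrics and weights are hypothetical, not real ranking parameters.

WEIGHTS = {
    "entrance_scores": 0.30,   # normalised admission-test scores
    "student_teacher": 0.20,   # normalised (inverse) student-teacher ratio
    "graduation_rate": 0.25,
    "pubs_per_faculty": 0.15,
    "alumni_giving": 0.10,
}

institutions = {
    "College A": {"entrance_scores": 0.9, "student_teacher": 0.6,
                  "graduation_rate": 0.85, "pubs_per_faculty": 0.4,
                  "alumni_giving": 0.3},
    "College B": {"entrance_scores": 0.7, "student_teacher": 0.8,
                  "graduation_rate": 0.9, "pubs_per_faculty": 0.7,
                  "alumni_giving": 0.5},
}

def composite_score(metrics: dict) -> float:
    """Weighted sum of normalised surrogate measures (each on a 0-1 scale)."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

# Rank institutions by descending composite score.
ranking = sorted(institutions, key=lambda n: composite_score(institutions[n]),
                 reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, round(composite_score(institutions[name]), 3))
```

The output looks reassuringly precise, yet tweak a single weight and the order can flip, which is exactly the arbitrariness the critics point to.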

When US News & World Report published its first data-driven rankings in 1988, they seemed plausible and quickly gained acceptance. Over time, the US News methodology became a template for ranking systems worldwide, establishing a rigid framework for judging academic institutions—a modern version of the Bed of Procrustes.

In Greek mythology, Procrustes was a host with an unusual obsession: every guest had to fit perfectly into his iron bed. If a guest was too tall, he would chop off their legs; if too short, he would stretch them until they fit. The analogy is apt. Academic institutions today are forced to conform to rigid, flawed, and often superficial metrics that dictate their rank. Those that don’t fit are stretched beyond capacity, while those naturally inclined towards quality over quantity are metaphorically “chopped” to fit the mould. The result is a uniform but entirely artificial system of ranking that disregards the uniqueness and diversity of academic institutions.

Greek mythology’s Procrustes forced his guests into an inflexible bed, chopping or stretching them to fit. Today, academic institutions face a similar fate, stretched and squeezed by inflexible ranking metrics that fail to capture the true depth and diversity of education.

In a world where rankings determine an institution’s reputation, the obsession with conforming to these standards creates an academic environment where learning, experimentation, and intellectual freedom take a backseat to gaming the system. This is especially true in private institutions, where administrators, advised by consultants, push faculty to focus on increasing publication numbers rather than engaging in meaningful, robust research. This has become the standard for success, with quantity trumping quality, much to the detriment of real academic progress.

Goodhart’s Law and the Manipulation of Metrics
Goodhart’s Law, which states that “when a measure becomes a target, it ceases to be a good measure,” is especially relevant in the academic ranking context. The surrogate measures used to determine rank, such as the number of publications or faculty-student ratios, become the targets institutions strive for, rather than focusing on real educational outcomes. Once these proxies are gamed, the rankings lose their relevance and no longer accurately reflect the true quality of the institutions they claim to rank.

For example, many academic institutions now prioritise publication counts, leading to the production of numerous papers that contribute little to the advancement of knowledge. Instead, they clog journals with repetitive or low-quality studies designed to boost citation numbers. Meanwhile, institutions that emphasise patience and perseverance in research are penalised for not keeping up with the demand for constant output, creating a system where short-term success is valued over long-term academic contribution.

This pressure to publish has particularly devastating effects in the field of medical research. Medical colleges, driven to improve their rankings, now increasingly rely on industry partnerships to fund and conduct research. While collaboration between academia and industry can lead to valuable innovations, it also creates significant conflicts of interest. Pharmaceutical companies, with vested interests in the outcome of research, increasingly dictate the terms of academic studies, often sidelining the autonomy of academic researchers.

Two decades ago, a group of editors from leading medical journals like The New England Journal of Medicine, The Lancet, and The Journal of the American Medical Association highlighted this growing problem. They published a joint editorial warning about the influence of pharmaceutical companies on medical research. The editor of The British Medical Journal (BMJ), Richard Smith, famously commented that medical journals were becoming “the marketing arm of the pharmaceutical industry.”

This undue influence extends to medical journals themselves, which benefit financially from publishing large, industry-sponsored studies. These studies, often skewed to favour pharmaceutical products, not only undermine the quality of research but also diminish the credibility of medical journals as independent sources of scientific information.

The pressure to fit into the rigid metrics of academic rankings often leads to faculty burnout. Serious researchers, forced to prioritise quantity over quality, become disillusioned and leave, eroding the intellectual fabric of institutions.

For faculty members, the constant pressure to publish, secure funding, and climb the academic ladder takes a significant toll. This is particularly true in medical colleges, where the need to produce industry-funded research can conflict with academic values. Faculty members, forced to balance teaching responsibilities with the demands of publishing and research, often experience burnout. Many serious researchers, disillusioned with the system, leave academia altogether, while those who remain are left with little time for meaningful interaction with students.

This environment has created an academic rat race, where metrics matter more than intellectual growth, innovation, or academic freedom. The current ranking system, born out of a marketing ploy, has grown into a rigid, damaging structure that compromises both the quality of education and the well-being of faculty members.

The Way Forward
The madness of metrics, particularly in the context of academic and medical institutions, has led to a cascade of misdirected efforts. As academic institutions and faculty scramble to meet the demands of flawed ranking systems, they sacrifice their autonomy, intellectual freedom, and, ultimately, the integrity of their research. The pressure to conform to rigid metrics has created a Procrustean bed in which no one sleeps easily, least of all the faculty.

As sociologist William Bruce Cameron wisely stated, “Not everything that counts can be counted; and not everything that can be counted counts.” This simple yet profound observation serves as a reminder that the true value of education cannot be reduced to a set of metrics. Instead, it lies in the richness of the teaching and learning experience, the growth of intellectual curiosity, and the development of human potential.

The author is a renowned epidemiologist and currently Professor Emeritus at D Y Patil Medical College, Pune.
