University impact rankings are expanding their global reach, embracing new metrics and institutions, yet they remain volatile and unpredictable. The recent surge in submissions to the Times Higher Education (THE) Impact Rankings—over 2,500 universities from 130 countries took part in 2025, up 18 percent from the previous year—signals both ambition and instability. While wider geographical representation shows progress, the very factors that drive universities to participate also contribute to the fluctuation in their outcomes year by year.
THE's Impact Rankings evaluate universities on alignment with the United Nations Sustainable Development Goals across research, stewardship, outreach, and teaching. This breadth enriches the evaluation beyond traditional research-output metrics used by QS or US News, but also introduces complexity. As more institutions from Asia and Africa—particularly China, India, Malaysia, Indonesia, Taiwan and African nations—enter the rankings, mobility within the leaderboard increases, with some rising rapidly and others experiencing dramatic shifts as they adjust to these multidisciplinary metrics.
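To make the scoring mechanics a little more concrete, here is a minimal sketch of how a composite Impact Rankings score could be assembled. It assumes, following THE's published methodology summaries, that SDG 17 is mandatory and is combined with an institution's three strongest remaining SDGs; the 22/26/26/26 percent weights and the example scores below are illustrative assumptions, not THE's official formula.

```python
# Illustrative sketch only: the weights and example scores are assumptions,
# not THE's official calculation.

def overall_impact_score(sdg_scores: dict[int, float]) -> float:
    """Combine per-SDG scores (0-100) into an overall score (0-100).

    sdg_scores maps SDG number (1-17) to that SDG's score.
    SDG 17 is mandatory; the best three of the remaining SDGs are used.
    """
    sdg17 = sdg_scores[17]
    top_three_others = sorted(
        (score for sdg, score in sdg_scores.items() if sdg != 17),
        reverse=True,
    )[:3]
    # Assumed weighting: 22% for SDG 17, 26% for each of the top three others.
    return 0.22 * sdg17 + sum(0.26 * score for score in top_three_others)


# Hypothetical example: a university strong on SDGs 4, 13, and 11.
scores = {17: 80.0, 4: 92.0, 13: 88.0, 11: 85.0, 7: 60.0}
print(round(overall_impact_score(scores), 1))  # 86.5
```

Under a structure like this, only a university's best three optional SDGs count toward the total, so a small shift in which SDGs come out on top can move the overall score noticeably—one structural reason positions swing so much from year to year.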
One researcher shared the experience of a small African university making its first data submission. Overwhelmed by evidence documentation—timesheets, policy files, community project reports—the team was proud to make the cut. Their presence in the Impact Rankings brought excitement on campus: students gathered in common rooms to see their institution recognized on a global stage. That pride, though, came with caution. The following year, minor shortcomings in data submission caused the university to slip twenty places. The institution’s morale wavered—not because its SDG work declined, but because the metrics caught inconsistencies, exposing fragility in emerging performance systems.
Meanwhile, leading Western universities remain under scrutiny. Western Sydney University, for instance, retained the top spot in the Impact Rankings for the fourth consecutive year. Its consistency stems from robust systems to align curricula with UN goals, well-documented community outreach, carbon footprint transparency, and rigorous sustainability research. But outside that elite tier, volatility is the norm. In Australia, 69 percent of universities fell in overall global rankings across categories like academic and employer reputation, citations, and internationalization. Similarly, 61 percent of UK universities slid downward in the QS rankings. These declines often mirrored budget cuts, shifting visa policies for international students, and regional policy uncertainty.
Consider a British university that saw its Impact Ranking dip even as its sustainability programs grew. Local community gardens flourished, graduate students published groundbreaking papers, and staff volunteered in outreach projects. Yet a change in THE methodology that gave more weight to student learning and mentorship in SDG contexts—part of a validation effort to ensure long-term data integrity—nudged them downward. It wasn’t failure; it was adjustment. Administrators described the experience as a lesson in resilience. Frustration gave way to humility: they reallocated staff to improve evidence collection, updated policy transparency, and simplified SDG reporting.
Volatility also stems from universities optimizing for rankings rather than genuine impact. Studies like Meho’s “Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings” show how institutions in India, Lebanon, Saudi Arabia and the UAE inflated publication volumes through internal citation loops and reliance on delisted journals to boost impact scores. Such behaviors erode trust in the system and complicate interpretation. One dean lamented that chasing metrics led to staff burnout and strategic misalignment: they were publishing more papers, but their community labs remained understaffed, and real-world SDG adoption lagged.
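As a rough illustration of the kind of anomaly such studies look for—and not a description of Meho's actual methodology—one simple signal is an unusually high share of citations coming from an institution's own authors. The threshold and the example figures below are hypothetical.

```python
# Toy illustration of flagging possible citation loops: institutions whose
# "internal" citation share (citations from their own authors) is unusually
# high. Thresholds and data are hypothetical, not drawn from any real study.

def internal_citation_share(total_citations: int, internal_citations: int) -> float:
    """Fraction of an institution's citations that come from its own authors."""
    return internal_citations / total_citations if total_citations else 0.0

def flag_anomalies(stats: dict[str, tuple[int, int]], threshold: float = 0.30) -> list[str]:
    """Return institutions whose internal-citation share exceeds the threshold.

    stats maps institution name -> (total_citations, internal_citations).
    The 30% cutoff is an arbitrary illustrative choice.
    """
    return [
        name
        for name, (total, internal) in stats.items()
        if internal_citation_share(total, internal) > threshold
    ]

# Hypothetical example data.
stats = {
    "University A": (10_000, 1_200),  # 12% internal: typical
    "University B": (8_000, 4_400),   # 55% internal: flagged
}
print(flag_anomalies(stats))  # ['University B']
```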
This is a modern avatar of Goodhart’s Law—once a measure becomes a target, it ceases to be a good measure. Metrics meant to encourage SDG-aligned behavior have, in some cases, become boxes to be checked. THE acknowledges this risk, urging universities to treat fluctuations as feedback rather than failure. It encourages the inclusion of human expertise alongside data, cautioning that rankings should be tools for improvement, not constraining scripts.
Human-centered stories shed light on what’s at stake. At a Canadian university, a sustainability coordinator shared how the Impact Ranking process revealed gaps in their outreach documentation. The university realized that its campus solar panels and volunteer tree-planting campaigns lacked formal evidence documentation, so it implemented better data tracking. The following year, its Impact Ranking rose impressively—motivating staff, engaging students, and attracting grant funding for new environmental technology programs.
On the flip side, volatility can damage morale. In Australia, one vice-chancellor described institutional downturns as “a gut punch.” Their campus had weathered reduced federal funding, visa uncertainty for international students, and shifting global reputations. Despite strong SDG-aligned research and sustainability policies, the university fell in both the QS and Impact Rankings. Governance meetings focused on “stopping the bleed,” and boards began demanding quick wins rather than focusing on long-term impact. Staff morale, which had been lifted by community projects, suddenly faltered.
The messaging around rankings matters too. U.S. News’ reputation-focused rankings—based heavily on peer assessment surveys and selectivity—introduce history and prestige bias. Some institutions have pushed back: Columbia’s law school withdrew over concerns of data manipulation, calling the system “irrelevant” and “worthless”. These actions underscore a tension: reputational rankings influence funding and student choice, but they don’t always reflect educational quality.
Who benefits? Prospective students, governments, and employers use impact rankings to identify mission-driven institutions with strong sustainability credentials. Search terms like “sustainable university rankings,” “SDG-aligned higher education,” and “impact-driven academic reputation” drive visibility that shapes behavior. Universities that show consistency—even if not top-ranked—find it easier to attract students interested in social responsibility and governments keen to fund green research.
Yet volatility can prompt introspection. Some universities use ranking feedback to improve infrastructure. At one European technical university, the biotech department used the feedback to strengthen its sustainability modules. It built a student-led green incubator, partnered with local farmers to test urban agriculture techniques, and integrated SDG themes into every lecture. The next jump in its Impact Ranking felt organic, not engineered, and students described a sense of collective purpose that data alone could never capture.
Metrics are not neutral. UNESCO warns that rankings can encourage homogenization, divert focus from teaching and social responsibility, and favor already advantaged institutions. A small humanities-focused college struggled with this. Its strength lay in refugee inclusion and local community education—deeply SDG-relevant—but with fewer STEM publications or industrial ties, it slipped in the Impact Rankings. Yet when faculty and students reflected, they focused not on disappointment but on values: they would not compromise their mission in pursuit of metrics.
In this context, what role do rankings play? For some, they are levers for government policy and funding. Australia’s drop alerted leaders to systemic funding and visa policy issues. In the UK, Universities UK called for long-term financial commitment to stem reputation losses. Rankings become catalysts for sector-wide reforms.
Behind every ranking number is a story: professors volunteering at local schools, labs testing water quality in low-income neighborhoods, students mapping urban heat islands. These narratives don’t show up in scorecards, but they define impact. As global competition intensifies—with China and India climbing research league tables and more institutions joining SDG rankings from emerging regions—the volatility becomes part of a broader story of transformation.
Perhaps the most heartening part of this evolution is the human spirit that rankings reveal. They show administrators rethinking data systems, faculty redesigning courses, students pressing for ethical research, and communities gaining from university outreach. Even as positions fluctuate, the connections deepen—between universities and society, between data and meaning, between policy and people.
Impact rankings are here to stay, and they are expanding in influence and complexity. Their value lies not in stable positions but in their ability to provoke continuous learning, to spotlight unexpected actors, and to remind universities that, now more than ever, their mission extends beyond textbooks and revenue. And in the flux, meaningful work still flourishes.