Ask a student which university she thinks is best, and most would say, “Well, my own university, of course.” But as the bastion of no-nonsense Canadian journalism, Maclean’s magazine would politely disagree.
I am, of course, referring to the annually published university rankings issue from Maclean’s, along with those from a number of other news magazines and blogs. Usually issued in mid-fall, the rankings editions typically spark conversations and light jabs between student union executives. More importantly, they help thousands of high schoolers through what is often their first major life decision – where to get their education.
Maclean’s and others undertake their rankings with precision and objectivity. Respectable, considering the personal biases that come along with one’s own alma mater. It’s very scientific – several factors, such as the proportion of faculty with terminal degrees (i.e. PhDs) and the number of library holdings, are measured and weighted against one another. The top-performing universities typically have medium-to-high scores in each category.
I’ve been reading the rankings issue for years, but it wasn’t until recently that I closely examined the ranking formula. And based on the experiences of my peers whom I represent, I wonder if these rankings are truly measuring the right things.
Let’s start with the National Survey of Student Engagement (lovingly dubbed NSSE, or ‘nessie’, in the PSE sector), which is used as a factor in many rankings, including Maclean’s. I’m always a little cautious about interpreting NSSE results, because it’s just so difficult to get an accurate read on quality of education, particularly from subjective questions such as, “If given the chance, would you choose to reattend your current university?” It’s difficult to pick that question apart.
NSSE metrics sparked conflict between Maclean’s and a number of Canadian universities back in 2007. Many universities asserted that NSSE results (among other ranking considerations) were an unacceptable basis for comparison, and refused to share their NSSE data with Maclean’s.
It’s difficult to quantitatively measure learning experiences and outcomes. But ask a student – we know good teaching when we see it. So rather than asking about outcomes, metrics that get to the heart of educational quality should measure the processes and inputs that make their way into classrooms and student services. For example, what’s the student-to-faculty ratio? What percentage of the budget is allocated to undergraduate education and support services? There are also harder-to-measure factors, such as how culturally diverse a campus is, or how welcoming and safe it is for underrepresented groups. For many students, I imagine these factors are more important than the number of library holdings.
I also find it interesting that metrics carry different weights depending on which of three categories a university falls into: primarily undergraduate, comprehensive, or medical/doctoral (i.e. the research-intensive U15 schools). The reason? These types of schools are fundamentally different from one another, and the categories provide a clear delineation between an undergraduate focus and a graduate/research focus. But undergraduate education and opportunities should be at the heart of every university. Rankings should perhaps separate universities along other lines, such as program type, so that students get a better picture of what they actually require from their university experience.
The province is now looking into using metrics like the NSSE to measure teaching and learning. While accountability and goal-setting are important, it raises the question: with a new differentiation framework now in place, will our university system begin to resemble the Maclean’s rankings?
Another factor that Maclean’s uses to rank universities is reputation. To my surprise, ‘reputation’ is used as a metric in its own right – but doesn’t this create a feedback loop? Why should reputation be a factor in determining future reputation? I’m from McMaster, and I’m always disappointed that McMaster isn’t higher on the list. I have my doubts that we’ll ever climb the list to take on U of T or McGill – thanks in part to the entrenched reputation factor in university rankings. (Side note: McMaster is one of four Canadian schools that make the global Times Higher Education Top 100 – so when it comes to rankings, this is the list we use.)
For that fresh-faced high school student with aspirations of attending a Canadian university, ranking systems may provide some insight into academic and student life at our institutions, but I’d argue that the pictures they paint can be a bit misleading and showy. What ranking systems do impart to these prospective degree-holders is the importance and diversity of Canada’s post-secondary options. With so many world-class institutions scored across a variety of criteria, high school students need to do their homework to ensure that the institution they choose will provide the particular mix of quality academics, robust student supports, and vibrant university community that they’ll need to succeed and thrive.
OUSA Steering Committee Member