Perishable goods – Universities and the measurement of educational quality

In this blog post I responded to an article in the Globe and Mail regarding the splitting of funding for teaching and research in universities. My piece refers to the lack of adequate measures of teaching “quality” and student learning, and how this makes it impractical to link government funding to those factors, as well as the trouble with assessing the “outcomes” of education in the short term. Here is a link to the original post, from October 11, 2011: Perishable goods.

Today’s post is a response to the Globe and Mail’s October 11 editorial, Canadian universities must reform or perish. The response from my PSE friends on Twitter was provocative (the tweets can be found here on Storify); those involved in the discussion seemed to agree that while the editorial highlighted real and pressing problems, its analysis went awry. The issues addressed in the editorial — quality, sustainability, and accountability — are relevant and important, but it’s the assumed answers to these issues that are troubling. I want to take a look at what I think are some of the implications of the argument.

One issue is that the article treats universities primarily as places of undergraduate education, whereas faculty members (and administrators) view them as research centres as well. Professors don’t engage only in teaching, because teaching is not generally the sole mission of universities.

The article suggests that universities should train professors in pedagogy, and place more value on teaching in the tenure process; I wouldn’t argue with that. The trouble lies in the suggestion that teaching and research should be split apart and funded separately. Along with teaching-only campuses, this segregation of funding and function would further entrench an existing hierarchy — because universities and faculty members operate in a larger “market”, wherein research is given more prestige and more monetary value than teaching.

I’d argue that it’s a problem to suggest that funding for teaching should be tied to training and assessment (i.e. performance-based funding). Funding universities by the “performance” of professors requires a reliable means of measuring that performance, with student learning assumed to be the outcome. But a reliable measurement of student learning is the “Holy Grail” of education research at all levels. Tests developed in the United States have provided some limited answers, but all standardized tests are somewhat fallible because education is an annoyingly individualistic experience; tests mostly fulfill a function within systems, rather than providing real knowledge of knowledge, as it were.

While it’s possible to create systemic criteria — Ontario’s University Undergraduate Degree Level Expectations (UUDLEs), for example — these measure a pre-defined set of skills, and by design they exclude a good deal of what students may experience, as well as the future effects that learning may have on them.

Philosophical and practical difficulties arise from relying on measurable data in education: what numbers could serve as “proof” of outcomes like “critical thinking”, “creativity”, “innovation”, and “knowledge” itself? Will these measurements of “learning outcomes” take student “inputs” into account, and if so, how? After all, education is a two-way process; not all students have the same capacities, nor do they all contribute the same amount of work.

Another related issue is that the article posits multiple, potentially conflicting goals for university education. Should the role of the university be to train workers for the knowledge economy, or to “bring the values and practices of a liberal arts and science education to the masses” — or both at once? If liberal education is the goal, then hiring more research professors, whose salaries the article treats as a problem, is the best way to expand the system — rather than splitting teaching from research as suggested. That segregation has meant that enrollments are often expanded on the backs of part-time and contract teaching faculty, who can be paid less and given fewer benefits, or none at all. The Globe’s editorial highlights this phenomenon without linking it to the expansion of enrollments that has accompanied the separation of research from teaching.

The critique of current professors’ performance and salaries fails to get at the heart of a decades-old problem, mainly because it over-emphasizes the present outcomes of long-term processes. In essence this is an individualizing critique that assumes professors who don’t want to teach, rather than 40 years of postsecondary expansion and economic change, are primarily responsible for the declining quality of undergraduate education. Yet professors don’t create provincial policy, set the limits on tuition fees, or even determine the number of students in a course. Tenure-track and tenured faculty are also juggling increased research and administration loads as competition becomes more intense. All these things have a strong effect on the environment in which teaching and learning take place. If and when the concept of “quality” is focused on professors’ classroom performance, and on teaching and research as easily separable, then a narrow analysis — and flawed solutions — are likely to result.

The proof of the pudding

Link to the original post from September 21, 2010: The proof of the pudding.

Throughout the first few weeks of September, we’ve seen a number of reports released, both in the U.S. and Canada, discussing and describing (quantitatively) the positive outcomes that students derive from obtaining university credentials. These reports have appeared at roughly the same time as the international university “rankings”, which were unleashed around the middle of the month–along with OECD education indicators and Statistics Canada reports on tuition fees and national education.

The strategy here seems straightforward enough; after all, at the beginning of the school year, it’s not primarily students but rather their parents–in many cases–who are concerned about whether the college or university experience is going to be “worth the investment”. (I would argue that parents should also look to their own departing children if they want to know the answer to that question!) It’s a great time to capture an audience for the debate, since students beginning their last year of high school (most of them still living at home) will also be searching for relevant information about possible PSE options.

These articles and reports stir up the debate about public vs. private funding of PSE, about the rising proportion of university revenue generated by tuition from students and families, and about the cost to the state of educational expansion. They also pitch university education primarily in terms of its economic value–not only to individuals, but also to the state (since educated people are “human capital”). Education correlates with increased income over one’s lifetime, with better health (saving taxpayer dollars), and with inter-generational class mobility. These arguments, along with those citing tough times for the government purse, are frequently used to support a pro-tuition-increase position both in the media and in policy debates.

All these points may seem valid enough until we consider the fact that while students may all technically pay the same amount in tuition (say, at a given university or in a particular program), they don’t all receive the same “product”. Yet universities generally advertise as if the same product were on offer to everyone–which it certainly isn’t: the costs alone (which exceed tuition) are borne in entirely different ways by different students, a point briefly raised by Charles Miller as quoted in this article. If my parents pay for my tuition and living expenses, then what costs am I absorbing over the course of a four-year undergraduate degree? How does this compare to a situation without parental support? Low-income students are less likely to have family help and more likely to take on a large debt burden; they are less likely to have savings accounts and personal investments, and less likely to be able to purchase cars and condos when their student days are done.

Aside from the variation in economic circumstances, students also bring differences in academic ability and in social and cultural capital to their degrees, which means that each person develops differently, as does their overall capacity for career-building.

Not only does university have different “costs” for different people; it also has highly variable outcomes. Some students will land solid jobs and find themselves upwardly mobile after completing a bachelor’s degree. Others may continue to a Master’s or even a PhD and discover that gainful employment is impossible to find, for a variety of reasons. There’s also the question of whether students obtain jobs in their chosen fields–or within a particular income range, for that matter. And once they do find employment, earnings differences by gender (for example) persist: women in Canada still earn significantly less than their male counterparts for equivalent work.

Another form of quantitative justification, the rankings game is an attempt to make the intangible–the “quality” of education, or of the institution–into a measurable, manipulable object. Part of the yearly ritual is the predictable squabble over methodology, which generates much commentary and debate, particularly from those institutions that have found themselves dropping in the international league tables. This quibbling seems ironic given that all the rankings are embedded in the same general global system of numeric calculation, one that feeds international competition and now constitutes an entire industry riding on the backs of already overburdened and under-funded university systems. While the public may rail against the supposed over-compensation of tenured professors (salaries represent universities’ biggest cost), institutions continue to engage in the international numbers game, pumping money into the yearly production of “free” data that are then made inaccessible by the ranking organizations (who profit from their use).

Education reports, with their quantitative indicators of the economic “benefits” of higher education, are part of the same overall tendency to assess, to compare, to normalize and standardize. Earnings-related numbers often provide rhetorical support for policy agendas that involve higher tuition fees, since proving the “private” benefits of education means that we can charge the user or “consumer” of education for access to these (eventual) benefits.

Rankings and statistics serve as a means of informing risk assessment–for governments, when funding is increasingly based on “performance”, and for students, when it’s about choosing the “better” university. But no numbers can truly gauge or alter the inherent risk of education and knowledge, the ineffability of the paths we take to discovery, the serendipities of fortune and temperament that can lead one person to the gutter while another hits the heights of achievement. Students have moments of inspiration; they meet undistinguished professors who never publish but who turn lives around. They form unexpected friendships, stumble on opportunities, and skewer themselves on pitfalls both obvious and unseen.

In other words, we cannot ac/count for this most joyful and painful side of our educative experience–the unknown element, which is frequently the most formative one; and the more we attempt to inject certainty into the process, the more we set ourselves up for disappointment. This doesn’t mean there’s no use for numbers, for evaluations and assessments, for attempts to improve our universities. But sensible decision-making, whether by students or by governments, will always involve more than a measurement.