Perishable goods – Universities and the measurement of educational quality

In this blog post I responded to an article in the Globe & Mail regarding the splitting of funding for teaching and research in universities. My piece refers to the lack of adequate measures for teaching “quality” and student learning, and how this makes it impractical to attempt to link government funding to those factors, as well as the trouble with assessing “outcomes” of education in the short term. Here is a link to the original post, from October 11, 2011: Perishable goods.

Today’s post is a response to the Globe and Mail’s October 11 editorial, Canadian universities must reform or perish. The response from my PSE friends on Twitter was provocative (the tweets can be found here on Storify); those involved in the discussion seemed to agree that while the article highlighted real and pressing problems, the analysis was awry. The issues being addressed in this article — quality, sustainability, and accountability — are relevant and important, but it’s the assumed answers to these issues that are troubling. I want to take a look at what I think are some of the implications of the argument.

One issue is that the article seems to focus on universities primarily as places of undergraduate education, whereas they’re viewed by faculty members (and administrators) as research centres as well. Professors don’t engage only in teaching, since teaching is not generally the sole mission of universities.

The article suggests that universities should train professors in pedagogy, and place more value on teaching in the tenure process; I wouldn’t argue with that. The trouble lies in the suggestion that teaching and research should be split apart and funded separately. Along with teaching-only campuses, this segregation of funding and function would further entrench an existing hierarchy — because universities and faculty members operate in a larger “market”, wherein research is given more prestige and more monetary value than teaching.

I’d argue that it’s a problem to suggest that funding for teaching should be tied to training and assessment (i.e. performance-based funding). Proposing that we fund universities by the “performance” of professors requires a reliable means of measuring this performance, and student learning is assumed to be the outcome. But a reliable measurement of student learning is like the “Holy Grail” of education research at all levels. Tests developed in the United States have provided some limited answers, but all standardised tests are somewhat fallible due to the annoyingly individualistic experience of education; tests mostly fulfill a function within systems, rather than providing real knowledge of knowledge, as it were.

While it’s possible to create systemic criteria — such as Ontario’s University Undergraduate Degree Level Expectations (UUDLEs), for example — these measure a set of pre-defined skills determined by design, which excludes a good deal of what students may experience as well as future effects that learning may have on them.

Philosophical and practical difficulties arise from relying on measurable data in education: how do we begin to ask what numbers could show “proof” for outcomes like “critical thinking”, “creativity”, “innovation”, and “knowledge” itself? Will these measurements of “learning outcomes” take into account student “inputs”? How will they do this? After all, education is a two-way process and not all students have the same capacities, nor do they all contribute the same amount of work.

Another related issue is that the article posits multiple, potentially conflicting goals for university education. Should the role of the university be to train workers for the knowledge economy, or to “bring the values and practices of a liberal arts and science education to the masses” — or both at once? If liberal education is the goal, then hiring more research professors, whose salaries the article refers to as a problem, is the best way to expand the system—rather than splitting teaching from research as suggested. That segregation has meant that enrollments are often expanded on the backs of part-time and contract teaching faculty who can be paid less and provided fewer or no benefits. The Globe’s editorial highlights this phenomenon without linking it to the expansion of enrollments alongside the separation of research from teaching.

The critique of current professors’ performances and salaries fails to get at the heart of a decades-old problem, mainly through an over-emphasis on the present outcomes of those long-term processes. In essence this is an individualizing critique that assumes professors who don’t want to teach, rather than 40 years of postsecondary expansion and economic change, are primarily responsible for the declining quality of undergraduate education. Yet professors don’t create provincial policy, nor do they set the limits on tuition fees, or even determine the number of students in a course. Tenure-track and tenured faculty are also juggling increased research and administration loads as competition becomes more intense. All these things have a strong effect on the environment in which teaching and learning take place. If and when the concept of “quality” is focused on professors’ classroom performance, and on teaching and research as easily separable, then a narrow analysis — and flawed solutions — are likely to result.

The absurdity of numbers

Building on the same themes I discussed in “Proof of the pudding”, this post returns to the “completion agenda” in the United States and the role of for-profit colleges, the question of who is getting what out of higher education, and some issues with the concept of “human capital” as a driver of policy. Here is a link to the original post from February 20, 2011: The absurdity of numbers.

A number of recent posts on Inside Higher Ed have highlighted national (U.S.) debates on post-secondary policy and its relation to Barack Obama’s economic/policy plan. Obama has repeatedly emphasised the importance of education and research funding, even as the Tea Party have lobbied the Republicans to try to reduce funding. Meanwhile legislation has been introduced for the purpose of regulating private, for-profit career colleges, and it’s being battled every step of the way by the lobby groups associated with said colleges and by their various political allies.

All these developments relate in some way to the pressure to increase enrollments and “completion” rates—what some have referred to as the “completion agenda”—from post-secondary institutions. And that imperative is about developing a “knowledge economy”, so that the United States can remain competitive in the assumed global zero-sum game in which national prosperity is at stake.

In Canada, federal and provincial governments have taken up precisely the same strategy of pushing for more graduates, both in undergraduate and in graduate education (witness in Ontario the provincial Liberals’ goal to create 14,000 more graduate student spaces from 2002-3 levels, by 2010—see OCUFA, 2007).

Like others, I question the use of these kinds of numbers as a means of gauging a nation’s success at, or progress toward, developing a sustainable “knowledge economy”. Human capital may be available, but this doesn’t mean that the “capital” will be put to use (i.e. that people, with their skills, will be able to find employment) in the immediate or near future. Are there sufficient job opportunities for those who make the “individual investment” in PSE, such that the investment will “pay off”?

The numbers conceal a potential over-production of graduates through the assumption that more college/university degrees automatically mean more access to gainful employment for all those who graduate, as well as a more “innovative” workforce. (I’ve previously written posts about relative value vs. inherent value in education, and the policy implications.)

The focus on these numbers also hides the uneven quality of mass post-secondary education and the unequally shared burden of its increasing cost. For example, in the United States the for-profit career colleges often market to traditionally under-privileged groups who cannot access more prestigious institutions, but who ironically end up paying hefty tuition fees anyway—and finding themselves burdened with debt by the time their studies are over. It’s a debt they have trouble re-paying due to difficulties with obtaining appropriate employment after graduation.

Along with student “completion” comes the imperative to discover its causes, a search that has produced a whole range of new objects for measurement. One example is the project to measure levels of “student engagement” (gauged by the National Survey of Student Engagement, NSSE). Tests of student learning “outcomes”, and the development of standardised curricular goals, are also related to this process of environmental assessment.

Responsibility for failure must also be assigned, such as in this article where the author discusses reports that argue that “many American colleges are failing to graduate their students, at a time when the Obama administration and leading foundations are trying to ramp up the number of Americans earning a postsecondary credential.” So the university/college becomes a new target for critiques and for governmental interventions designed to ensure “quality” and positive “outcomes” for graduates.

In some ways, the obsession with numbers is really just a sign that education and its “products” are considered to be more important than ever—for their economic value—and thus they become, increasingly, sites of scrutiny for a plethora of “publics”, including not only governments but also parents, students, employers, and the media. But focussing on and rewarding outcomes, usually “completion” as either a proportion of the eligible age cohort or of the national adult population overall, means that institutions are more likely to implement “quick” technocratic fixes to what is generally a much deeper structural problem. Do we really need more graduates who are struggling to find work and to alleviate debts? How can we create a situation where these graduates are more likely to be solvent and employed upon, or shortly after, finishing their PSE courses?

A larger number of PSE graduates is only desirable, economically, if it produces the intended effect; but what we see instead could be an increase to the number of young people who are actually unable to participate fully in this economy even though they may technically possess the credentials for doing so. Unless this issue is addressed, the “production” of more PSE graduates is much less likely to benefit either the national economy or the individual graduates themselves.

Reference: OCUFA (2007). Quality at Risk: An Assessment of the Ontario Government’s Plans for Graduate Education.

The proof of the pudding

Link to the original post from September 21, 2010: The proof of the pudding.

Throughout the first few weeks of September, we’ve seen a number of reports released, both in the U.S. and Canada, discussing and describing (quantitatively) the positive outcomes that students generate from obtaining university credentials. These reports have appeared at roughly the same time as the international university “rankings”, which were unleashed around the middle of the month–along with OECD education indicators and Statistics Canada reports on tuition fees and national education.

The strategy here seems straightforward enough; after all, at the beginning of the school year, it’s not primarily students but rather their parents–in many cases–who are concerned about whether the college or university experience is going to be “worth the investment”. (I would argue that the parents should also look to their own departing children if they want to know the answer to that question!) It’s a great time to capture an audience for the debate, since students beginning their last year of high school at this time (most of them still living at home) will also be searching for relevant information about possible PSE options.

These articles and reports stir up the debate about public vs. private funding of PSE, about the rising proportion of university revenue generated by tuition from students and families, and the cost to the state of educational expansion. They also pitch university education primarily in terms of its economic value–not only to individuals, but also to the state (since educated people are “human capital”). Education correlates with increased income over one’s lifetime, with better health (saving taxpayer dollars), and with inter-generational class mobility. These arguments, along with those citing tough times for the government purse, are frequently used to support a pro-tuition-increase position both in the media and in policy debates.

All these points may seem valid enough until we consider the fact that while students may all technically pay the same amount in tuition (say, at a given university or in a particular program), they don’t all receive the same “product”. And universities generally advertise to them as if the same product is really on offer to everyone. Which it certainly isn’t–the costs alone (which exceed tuition) are borne in entirely different ways by different students, a point briefly raised by Charles Miller as quoted in this article. If my parents pay for my tuition and living expenses, then what costs am I absorbing over the period of a 4-year undergraduate degree? How does this compare to a situation without parental support? Low-income students are less likely to have family help and more likely to take on a large debt burden; they are less likely to have savings accounts and personal investments, less likely to be able to purchase cars and condos when their student days are done.

Aside from the variation in economic circumstance, students also bring differences in academic ability and social and cultural capital to their degrees, which means that development differs for each person and so does their overall capacity for career-building.

Not only does university have different “costs” for different people; it also has highly variable outcomes. Some students will land solid jobs and find themselves upwardly mobile after completing a bachelor’s degree. Others may continue to a Master’s or even a PhD and discover that gainful employment is impossible to find, for a variety of reasons. There’s also the question of whether students obtain jobs in their chosen fields–or within a particular income range, for that matter. And once they do find employment, earnings differences by gender (for example) persist: women in Canada still earn significantly less than what male employees take home for equivalent work.

Another form of quantitative justification, the rankings game is an attempt to make the intangible–the “quality” of education, or of the institution–into a measurable, manipulable object. Part of the yearly ritual is the predictable squabble over methodology, which generates much commentary and debate, particularly from those institutions that have found themselves dropping in the international league tables. This quibbling seems ironic given that all the rankings are embedded in the same general global system of numeric calculation, one that feeds international competition and now constitutes an entire industry that rides on the backs of already overburdened and under-funded university systems. While the public may rail against the supposed over-compensation of tenured professors (salaries represent the universities’ biggest cost), institutions continue to engage in the international numbers game, pumping money into the yearly production of “free” data that are then made inaccessible by the ranking organizations (who profit from their use).

Education reports, with their quantitative indicators of the economic “benefits” of higher education, are a part of the same overall tendency to assess, to compare, to normalize and standardize. Earnings-related numbers often provide rhetorical support for policy agendas that involve higher tuition fees, since proving the “private” benefits of education means that we can charge the user or “consumer” of education for access to these (eventual) benefits.

Rankings and statistics serve as a means of informing risk assessment–for governments, when funding is increasingly based on “performance”, and for students, when it’s about choosing the “better” university. But no numbers can truly gauge or alter the inherent risk of education and knowledge, the ineffability of the paths we take to discovery, the serendipities of fortune and temperament that can lead one person to the gutter while another may hit the heights of achievement. Students have moments of inspiration, they meet undistinguished professors who never publish but turn lives around. They form unexpected friendships and stumble on opportunities, skewer themselves on pitfalls both obvious and unseen.

In other words we cannot ac/count for this most joyful and painful side of our educative experience–the unknown element which is frequently the most formative one; and the more we attempt to inject certainty into this process, the more we set ourselves up for disappointment. This doesn’t mean there’s no use for numbers, for evaluations and assessments, for attempts to improve our universities. But sensible decision-making, whether by students or by governments, will always involve more than a measurement.