The uses of care

(Here is a link to the original post published on December 16, 2015 at University Affairs: The uses of care.)

Recently on Twitter and Facebook I’ve seen more articles on taking care of ourselves and the practice of “self-care” in academe, which makes a lot of sense at a time of year when (in the Northern hemisphere) the combination of colder weather, anxiety and exhaustion at the end of the semester—and the potential added stress of the holiday season—means that many academics and students are feeling worn out and in need of a break.

But when I see these articles and blog posts that take up the concept of self-care, I can’t help also thinking of (and comparing them to) the articles from business publications that frame some of the same activities in a completely different way: from the viewpoint of employers, where our wellness is too easily seen as valuable only if it leads to improved productivity and an increase to the bottom line. These latter “advice” pieces are also regularly shared on social media.

In this post I’m going to look at how these very different “framings” overlap and intersect: what’s the connection between self-care and the “care work” that is done every day, often invisibly and without compensation? How does all this care work happen in the context of managerial governance, with its imperative to productivity, and in a competitive academic culture? Is it also possible that “wellness” and related practices can work more in the interests of employers than employees, shifting onto individuals the responsibility for changing problems in the workplace and its culture?

For a start, care is work, as an entire body of academic research can attest; this work is also gendered, disproportionately performed by women. Women are already engaging in this “extra” work both in professional settings and in their personal lives, because it’s what’s expected of them. Tina Barnes-Powell and Gayle Letherby write that “[b]oth in the wider community and in the communities of higher education (whether provided by women or men), ‘care’ is feminized and undervalued.” Care work is often invisible and informal work, present and necessary but largely unacknowledged in everyday life.

For scholars who hail from groups traditionally marginalized in academe, (mutual) care is even more crucial, since working in the institution so often feels more like trying to work against it—both for themselves and for their students. That accumulation of daily experiences is a process of sedimentation, a psychological, emotional and physical burden generated by the structural gaps those scholars are expected to work to fill in themselves. If “diversity work” is also care work—work that can’t be done by committee or accomplished with a policy—it is beyond the logic of institutional rewards.

The concept of self-care, as used in this context, has its roots in Black feminist thought, exemplified in the well-known words of Audre Lorde: “Caring for myself is not self-indulgence, it is self-preservation, and that is an act of political warfare.” As Sara Ahmed explains in her post on “Self-care as warfare”, this is radical because it involves caring for one’s self when the (social) world daily denies one access to that care. The elements of race and gender are key to the analysis, because of the performance of care for others that is expected of Black women, the discrimination they face, and the low value placed on their lives and work: “some of us, Audre Lorde notes, were never meant to survive.”

Lorde’s work shows “how structural inequalities are deflected by being made the responsibility of individuals,” and we can see this pattern also in current discourses about workplace stress and mental health. For example, I wrote “Beyond puppies and yoga” to critique the tendency to individualize the “solutions” for the effects of systemic changes to (and problems with) how we live and work, including in academic institutions; in another previous post I discussed how this individualistic framing is reflected and reinforced in media coverage and advice columns about mental health. No amount of tending to the self can adequately compensate for the broader lack of access to mental health resources for those who need them, or indeed for the effects of discrimination and economic inequality on people’s health.

In the media articles we see now, this care for the self is re-articulated through the overarching, individualized logic of productivity. Examples show that a certain level of stress and anxiety is deemed to be fair and “natural,” but that we can manage it ourselves by taking control with advice and adopting appropriate practices. Naps, we’re told, are a good thing because they increase productivity (and sleeping close to your work has never been easier!)—while the debate about how much sleep we “need” rages on, because sleep, of course, is not (in itself) productive. Happiness itself can be calculated as another part of the equation that leads to more productivity. The Onion’s parody of the advice of “wellness experts” only works because we’re so familiar with the content and tone of said advice and the context in which it’s offered.

A useful example of this is the concept of “mindfulness” that is now regularly discussed in business magazines and mainstream media advice columns, having developed as a trend after gaining popularity in Silicon Valley. Zoë Krupka writes that “pasteurised versions of the ancient practice of mindfulness are now big business,” but they are about fixing “not so much what ails you, but what is ailing those who depend on you.” Using mindfulness as a tool to manage stress and increase innovation and productivity doesn’t get at the underlying problem: where the stress is coming from, why the work is so stressful, and how much stress people have to deal with (a few more critiques, if you’re interested, here, here, here and here). The strategic application of mindfulness is not a way to address overwork and unhealthy conditions in the workplace, and it places the responsibility for sustaining those conditions directly onto individuals.

These examples show how something that has a great deal of potential to be positive can be reinterpreted through the lens of economization and enfolded in its logic. Mindfulness really does have benefits; and sleep, healthy food, exercise, and so on are things we need in order to be healthy, things that improve our lives. It’s more the conflation of our lives and health with the needs of our employers that is potentially a problem, one that’s particularly prevalent in academe, but certainly not only there.

Robin James gets to the heart of this problem by making a distinction between “self-care as surplus-value producing work” or “resilience,” and “guerrilla self-care” (I recommend you read her post on this). The former, argues James, “is not about personal healing: resilient self-care is just another, upgraded way of instrumentalizing the same people” in the service of the same structures that caused them stress in the first place. It’s not “about cultivating what you need, it’s about adapting to dominant notions of success.” Guerrilla self-care, like the “subversive self-care” discussed here by Shanesha Brooks-Tatum, is a means of pushing back against destructive systemic problems as well as alleviating their effects on us.

This is why it’s important that we acknowledge care as work: a fundamental element of relationships and organizations that nonetheless exists outside the “value” that matters in a market. The issue with care work, for the self and others, is not that it needs to be done but rather that some people are expected, or compelled, to do it, while others can take it for granted that the work will be done for them (by the institution or by other people in their lives). This is also why each of us needs to think through our unique position in relation to these institutions, their histories and their current priorities. Each of us will be able to do different things to contribute in this context. For example, learning to “say no” to extra uncompensated work is a lauded practice, but not everyone is in a position to do this without negative consequences. Those who can refuse need to make sure that the work they say “no” to is not simply downloaded onto others who can’t.

In this context it is radical to resist working on ourselves for the sole purpose of producing value for a “greedy institution,” in a competitive market for stable paid employment. Can care work become radical, resistant, in a system that attributes no value to it yet cannot function without it? Instead of (ironically) individualizing our problems and expecting people to deal with them through technologies of self-management, we need to acknowledge — and keep re-acknowledging — structural problems including those that create stressful, unhealthy workplaces. We also need to de-individualize our means of response so that the burden does not fall, as it has done and still does, on those already most affected by systemic injustices; otherwise we’re merely re-inscribing the things we claim to critique, both to our own detriment and to other people’s.

UBC, WTF?

If there’s a lesson to be learned from the recent events at UBC, it’s that silence can say more than words, whether you’re withholding information or telling someone else to keep quiet. That probably sounds obvious, but the university’s announcement of Arvind Gupta’s resignation—and its handling of the events that followed—reflect some problematic assumptions about who should be able to speak, when, and what should be said.

What was it that triggered UBC’s current public crisis? Gupta’s July 31 departure was announced publicly on August 7 in classic “Friday Afternoon News Dump” fashion: UBC published a news release, which was tweeted shortly after 4pm EDT. In a news release where roughly 50% of the text was devoted to celebratory prose about the incoming interim president (Dr. Martha Piper), UBC gave no explanation for Gupta’s resignation except that he had “decided he can best contribute to the university and lead Canada’s innovation agenda by resuming his academic career and leadership roles in the business and research community”.

Additionally, a Globe and Mail article was published around 5pm, containing interview quotes from UBC Board of Governors (BoG) Chair John Montalbano. Rather than clarifying the situation, this article only exacerbated the impression that the university hoped to bury the issue as quickly as possible. Gupta’s quoted comments—restricted by the NDAs that had been signed—were equally unhelpful, referring back to the university’s statement. Montalbano, for his part, appeared completely unfazed, stating “I don’t believe we will miss a beat”.

That article highlights what’s been so provocative about the UBC case, namely the “cone of silence” approach taken by the university’s administration, even as the BoG Chair seemed to have been saying quite a lot (more on this below). While there was a press release, it was immediately treated as an incomplete account because, in the context of Gupta’s five-year term ending abruptly after only one year, the information UBC provided wasn’t “enough”. An (apparently) partial message suggests that there’s something to hide. This much should have been obvious at the outset, but UBC’s communication has remained unbendingly evasive; even their Twitter feed contained nothing helpful when I checked, beyond a single tweet with their press release on August 7.

Because of this suddenness and silence, public speculation began immediately. Why was Gupta resigning after only a year? Was it a health problem or some other personal issue? Was it a disagreement with the Board or opposition from senior administrators (the remaining ones, anyway)? The pressure of financial challenges? Gupta had no real experience in administration; was the position simply too much for him—or was he perhaps not living up to his promise? If it was the latter, one year seems like a pretty short trial period. If this was a “smouldering crisis”, it didn’t take long for the flames to be fanned—and it certainly wasn’t visible or predictable to everyone in the institution. UBC Faculty Association President Mark MacLean wrote in a public letter on August 10: “this news came as a complete surprise to me, and I have spent the weekend trying to make sense of it”.

UBC faculty members were among those who produced blog posts and columns offering their own interpretations of events (examples from the past few weeks include E. Wayne Ross, Nassif Ghoussoub, Stephen Petrina, Christopher Rea, James Tansey, and Charles Menzies). Which brings us to the second thread in this story. On August 9, Dr. Jennifer Berdahl—a full professor who holds the Montalbano Professorship in Leadership Studies: Women and Diversity in the Sauder School of Business—published a blog post about Gupta’s resignation, in which she described her “personal observations and experiences” with him. She placed these observations squarely in the context of her research on diversity and workplace dynamics.

Of Gupta, Berdahl wrote that “he exhibited all the traits of a humble leader: one who listens to arguments and weighs their logic and information, instead of displaying and rewarding bravado as a proxy for competence”; and that “UBC either failed in selecting, or in supporting, him as president” (a position she wasn’t alone in holding). More controversially, she described the culture of leadership at UBC as a “masculinity contest” in which Gupta did not fit, and where his strengths were not sufficiently valued.

Berdahl’s post soon came to the attention of BoG Chair John Montalbano, who went so far as to express disapproval to her in a phone call the day after it was published. Montalbano, who is CEO of RBC Global Asset Management, also happens to be the donor whose funds support Berdahl’s professorship. According to Berdahl, he chastised her for bringing negative attention to the Sauder School and UBC, describing her words as “hurtful” and “unfair to the Board” and repeatedly mentioning both the RBC funding and related conversations that he was having with other administrators. This was followed by further communications from Berdahl’s Division Chair; the Associate Dean of Faculty; and perhaps most ironically, the Associate Dean of Equity and Diversity. Their message was clear: the blog post had done “reputational damage” and was upsetting to a powerful donor who was also Chair of the Board.

Berdahl’s account of these experiences, which she posted on August 16, brought a whole new dimension to the UBC situation. What she described was an unequivocal breach of academic protocol, and it generated outrage far beyond UBC and beyond the group that had initially been concerned about Gupta’s resignation. It also changed the focus of the story and helped to further position Montalbano as the chief villain in it. Even those who were more sanguine about Gupta’s departure and/or had viewed Berdahl’s earlier post with skepticism were happy to leap to her defence over an issue of academic freedom.

Berdahl’s experience has raised again a key issue with regard to the definition of academic freedom: should professors’ commentary be limited to their “area of expertise” or to what is required for teaching and research, or should it extend to more general matters of university governance? Even for those who think that comment should be limited to a faculty member’s research area, Berdahl’s position is unique in this regard; her research is in fact about organizational dynamics. Surely then she is qualified to speak critically about the dynamics in her own institution, based on what she’s observed first-hand? The post states fairly clearly that Berdahl is speaking from her own experience and framing it through the theoretical lens that she uses in her work. This approach was of course criticised for a variety of reasons, but being critical of what someone said is not the same as telling them to stop saying it.

I wasn’t hugely surprised at the points Berdahl was making, because the gender issue here isn’t a new one. It’s a point I’ve seen raised, usually off the record (and not by women), during the course of my dissertation research. Julie Cafley of Canada’s Public Policy Forum, who wrote a dissertation on presidential departures at Canadian universities, has also pointed to its significance. Another factor to keep in mind is the gender dynamics of public expertise, which favour a particular performance of masculinity (one that intersects with perceptions relating to race). So is there not a connection between these issues and the points raised in Berdahl’s blog post on this topic? Why were so many people—university faculty included—so quick to dismiss the legitimacy and relevance of what she said, along with her right to say it?

For some people, the problem was the quality of the writing and the analysis in the post; it wasn’t written either with the rigour of an academic article, or the clarity of a post intended for a broad audience. Others disagreed with the conclusions indicated therein, which were interpreted as accusations of racism and sexism. But if the question here is whether the post was covered by UBC’s existing policies on academic freedom, to me it looks like the answer is “yes”.

That’s why, whatever Berdahl’s analysis pointed to, in his reaction to it Montalbano stepped over a line that would have been clear to anyone familiar with academic work and the policies that govern it. The outcome was that after denying the allegations, Montalbano still faced public pressure to step down as BoG Chair—which he did, on August 25. Former B.C. Supreme Court Justice Lynn Smith will “undertake [a] fact-finding process” on the incident, to culminate in a report by October 7. Meanwhile, UBC has provided no further information about Gupta’s resignation, which clearly hasn’t stopped major media outlets from publishing further commentary.

I can see at least two stories being told here: one of them is about accountability, and the other is about academic freedom. They’re both stories about the ethics of (crisis) communication—on the one hand, a major, sudden change occurred and not enough information was provided. On the other hand, when a faculty member wrote a public interpretation of that change, she was shushed by the BoG Chair and others.

Accountability is significantly about communicating with those who have an interest in the outcome of a situation. Even when there’s information that for legal reasons can’t be disclosed, there are ways of handling it appropriately. The rampant speculation (and subsequent calls for transparency) should have been entirely predictable, given that UBC is one of the country’s top universities, that there was widespread publicity about Gupta’s appointment (and presidential searches cost money), and that the resignation happened after just 13 months. The gaps between expectations and actual communication were filled with assumptions generated by context: that something very bad must have happened, since no-one could talk about it. Would things have turned out differently had the university taken a different path at the outset, or are the rules governing such situations inherently troubling for public academic institutions?

Academic freedom, too, is a communication issue as well as one of intellectual integrity; there’s a reason it’s so often conflated with “freedom of speech”. It’s what professors are saying—what they’re communicating and to whom—that’s often framed as a (political) problem, as was the case with Jennifer Berdahl’s blog posts. This reaction to her words only confirmed the initial impression that something worth hiding must have happened, since a faculty member was being pressured to tone it down.

What will happen next at UBC? In an August 9 post at Inside Higher Ed, Kris Olds wrote that “a crisis is a wonderful teaching and learning moment. Use it, and be prepared to see it used, for this is what a university is all about”. Only time will tell whether the lessons from this crisis will be put to good use. UBC will need to tend to reputational damage, but even more so, the damage done to internal trust within the university. One sign of how the university plans to proceed is provided in Martha Piper’s op-ed in the Vancouver Sun. Piper’s piece, whether you agree with her perspective or not, is probably better written than anything else produced by UBC representatives during the past month; but it’s clear that the university is trying to maintain the same upbeat tone that failed so badly at the outset. If (as some have argued) there’s a deeper, ongoing problem with the culture of governance at UBC, it’s going to take not only time but also some honesty to address it appropriately.

Beyond puppies and yoga

While the past decade has brought a great deal of discussion about mental health in Canadian higher education, these issues are usually framed and discussed as individual rather than systemic, and as problems that PSE institutions should be able to resolve with more supports. But universities and colleges can only do so much, and when services and supports are radically under-funded in the broader health system, we are failing not just students but everyone else who relies on the public system for care. Here is a link to the original post published on January 14, 2014 at University Affairs: Beyond puppies and yoga.

Last October was Mental Health Awareness Month in Ontario (October 10 is World Mental Health Day), and as part of the province’s mental health and addictions strategy, there was much fanfare over the launch of new initiatives for postsecondary students bolstered by $27 million in funding. This is an important and positive step, because there’s been an increasing demand for the limited support services available on campuses, and the problem has been worsening for at least a decade.

Unfortunately, what students experience is part of a much bigger problem. Universities and colleges, as much as they may try, cannot plug the yawning gap in our system that is an issue far beyond the campus. There are many people in Ontario who need help with mental health issues and may be seeking it actively – but can’t get it. Why? Because the system is reactive. It’s designed to deal with short-term problems and with extremes and crises, rather than to help us prevent them, or help us to live with long-term conditions. This matters because ultimately, the services on campuses have to mesh with off-campus services in or connected to the healthcare system.

This is a system in which, without a formal diagnosis, you cannot gain access to accommodations in postsecondary institutions (or elsewhere). Yet to obtain this diagnosis, you have to find the right way into the system and obtain the right help once you get there. The process can take anywhere from a few months to a year (or longer), depending on how much you know and whether you have an advocate.

For example, an assessment for learning disabilities costs $1,500 to $3,500. Some universities have assessment services, but these refer students to external testing (some of which may be covered, depending on circumstances). You still have to be a registered student to access these, or to have costs partially covered through student loan eligibility; otherwise, you or your parents will be paying. If you’ve had to de-register because of your problems, then you’re out of luck. The same goes for therapy: talk clearly isn’t cheap; in fact, it costs $80 or more per hour unless you can use university counselling services – where there is a limit on the number of sessions each student can access. All this is based on the assumption that problems will be short-term and can be “fixed”; wait times for long-term services are often very lengthy.

Of course if you have the resources available, you can simply buy what you need. You can see a therapist of your choice, without waiting months to be told whether you are eligible. You, or your family, can pay for expensive assessments so that problems can be uncovered and named, and help can be obtained. The more fortunate students don’t need most of the university’s services and also don’t have to rely on the government, because they have other forms of support.

Clearly it’s still the disadvantaged students – and less-privileged people in general – who are falling through the cracks in this system. We need to ask, who receives the necessary supports and who does not? Who can step forward and say “I think I have a problem”, without fear of repercussions? Who has the resilience and stamina to pursue a solution that can take so long, and can be so draining, both to discover and to put into practice?

The current system continues to privilege not just people with existing resources but also those who are secure enough to speak about the unmentionable, despite the lack of awareness that even those who experience such problems may have about them. For example, the Council of Ontario Universities held a competition to encourage students to come up with the best mental health “social media strategy”. But the best strategy would be a collective one, informed by (and actively soliciting) the input of those who cannot or will not speak in the public eye. The best process would actively seek out criticism from those most affected, not just the more easily marketable solutions.

University initiatives that gain the most positive media attention often conflate short-term, seasonal stress relating to events like exams with long-term problems like clinical depression and anxiety disorders (and they tend to focus on undergraduate students). Yet it’s the exam-period “puppy rooms” that make the news, not the underlying issues that are so much harder to address and resolve: wait times for “assessments” at university counselling clinics, the lack of privacy many students feel when they go there, the difficulty of having to describe one’s situation repeatedly in the process of trying to find help, and the exhaustion produced by having to negotiate (with) a bureaucracy while simultaneously dealing with the effects of one’s condition.

Giving attention to answers that work well in a PR pitch means depoliticizing our context, and this is a serious mistake. It makes it too easy to forget about all those gaps in the system, and also about factors like poverty, abuse, and discrimination based on race, gender, disability, sexuality, and nationality; it makes it easier to individualize both the problems and the solutions, reducing the answers to “lifestyle choices”. It means we downplay the context in which students are living their lives, and how they bring this to the university when they step onto the campus. That context is part of what enriches teaching and learning, but it also has to be addressed in terms of the problems students experience both on- and off-campus, and how we can help them. Universities alone can’t fix these systemic problems, but perhaps they can bring attention to them, and that would be a great start.

Degrees of certainty

I wrote this post about the way the “skills gap” discussion is informed by the politics of funding and the increased amount of risk that universities are expected to manage. Here is the original post from March 27, 2013: Degrees of Certainty.

A recent post by David Naylor, the President of the University of Toronto, has been quite popular with academics and has generated a lot of commentary. Naylor makes the argument that Canadian higher education is dogged by “zombie ideas”, and he describes two of them: the first is that universities “ought to produce more job-ready, skills-focused graduates [and] focus on preparing people for careers”. The second is the idea that research driven by short-term application or commercialization should be prioritized by universities because it provides a better return on governments’ funding investments.

I focus here on the first point, since in the past few weeks, in the run-up to the federal budget on March 21st, there has been a great deal of coverage of the alleged “skills gap” in the Canadian workforce. Others have already done the work of summarising this issue, but as a quick recap, the argument goes something like this: business leaders and employers in Canada complain (to the government) that they cannot fill positions because candidates lack the skills. Yet Canada produces more post-secondary graduates than ever, and those grads are having trouble finding employment that matches their qualifications. So why is there an apparent “mismatch” between the education students receive and the skills employers are demanding?

I don’t have anything to add to the debate about what is needed more–“narrow” skills such as those available from colleges or apprenticeships, or the “broader” education that universities argue they provide–because I don’t have the expertise to make an assessment within those parameters. However, I find the discussion interesting in terms of its context, including who is doing the arguing, and why.

For example, while the “skills gap” is treated as dramatic fact by Federal Human Resources Minister Diane Finley, who recently called the labour and skills shortage “the most significant socio-economic challenge ahead of us in Canada” (CBC), other experts, including Naylor, disagree that a skills gap exists at all. University graduates, they argue, are still making better money than those without degrees; and most of them (eventually) find jobs that draw on their skills–so why reduce the number of enrolments? Alex Usher of HESA has been generating a lot of commentary for this side of the argument as well; in the comments on one of his posts, his points are disputed by James Knight of the Association of Canadian Community Colleges.

Clearly the debate is more complex than “BAs vs. welders”, but this is the rhetoric being reproduced in numerous mainstream media articles. The average reader could be forgiven for finding this issue hard to untangle, based on the radically different accounts provided by media and policy pundits. Yet all this is discussed with much urgency, because post-secondary education is now being understood as a stopgap for everything the economy seems to lack–and economic competitiveness is imperative.

The politics of urgent “responsive” decision-making lie behind many of the arguments being brought forth. The skills gap, should it exist, has its political uses; agreeing that a thing exists means having to find ways of dealing with it somehow. In this case, a restructuring of university education is one solution on offer, including steering students away from the corruption of the arts and humanities and towards more suitable areas where demonstrable “skills” are in demand. Those doing the arguing have the means and “voice” to define the problem in a particular way; they can intervene in that debate and someone will listen. Each player has stakes in this game, too–the colleges plump for skills and job training over research investments, while the universities, and their advocates, claim a “broad” education is more appropriate; employers want graduates they don’t have to train, so the concern is with graduates being job-ready (for jobs that may not even exist yet).

Is this a kind of moral panic for Canadian higher education? That’s an important question, because such tactics are used to create a climate in which particular policy changes are favoured over others, both by politicians and policy-makers and by voters.

I think that at the heart of the debate are the problems of risk, certainty, and value (for money). Canadians have more of a “stake” in what universities do–often through directly paying ever-increasing amounts of money for it–and so they care more about what universities are for. Governments have more of a claim now too, because of the idea that universities are magic factories where students enter undeveloped and emerge brimming with human capital (but it must be capital of the right kind).

The more we experience instability, the more we desire certainty–or at least some form of guarantee that if things go off the rails, we have other options. Yet there is no certainty about economic (or other) outcomes either from education or from non-commercial, “basic” research. Education and research give us no way to “go back”, either. For those trying to get a good start in life, there’s no tuition refund if we fail our classes or find the job market unfriendly at the end of the degree. We can’t wind back time and have another try. So the question becomes: what will guarantee our ability to cope with the future? A long-term focus on broad learning, which can (it is argued) help us to adapt to the changing structure of careers? Or a short-term focus, on skills designed to prepare students for specific, immediate positions?

This is why Naylor makes the argument that “the best antidote to unemployment–and the best insurance against recession-triggered unemployment–is still a university degree” (emphasis added). The word “insurance” speaks to the risk each person internalises in the current economy. Such risk has many effects, and one of them is heightened fear of the unknown: with so few resources to go around, will we get a “return” on what we invested, will our sacrifices “pay off”? What will happen if they don’t? As Paul Wells has pointed out, university advocacy organizations such as the AUCC have pushed for universities to be recognised as providing economic benefits–since this is a logic that validates requests for further government funding. Yet it means universities are held captive by their own argument, since funding comes with the expectation of economic returns for the government. What if they cannot deliver on this promise?

The skills/employment “gap” is being blamed for a lack of national economic competitiveness; and it is a parallel to the ongoing “innovation problem” that Canada has in the research sector. But it’s the outcome, not the process, that’s really driving this debate. Never before have we been compelled to pay so much attention to the purpose and results of university education, and now that it seems to matter so much, we’re finding that “what universities should be doing”–or even what they already do–can’t be pinned down so easily; it can’t be mapped so cleanly onto a specific, measurable result. This is partly because what we now demand of universities is certainty, where serendipity used to be enough.

War of attrition – Asking why PhD students leave

After finishing up a bibliography of sources on graduate education, I wanted to write a post about some of the things I’d read on the topic. Because there had been recent articles about attrition and supervision, in this post I point out the link between them, citing some of the literature on PhD non-completion and its relationship to factors like academic and social integration, professionalization opportunities, and support/mentorship from faculty members. The original post is from July 17, 2013: War of Attrition.

The Times Higher Ed in the UK had a hit this past week with an article on doctoral supervision by Tara Brabazon, titled “10 truths a PhD supervisor will never tell you”. Worth noting alongside that one is a recent article by Leonard Cassuto in the USA’s Chronicle of Higher Education on doctoral attrition, which has long been notoriously high (at least in the United States – an average of around 40-50 percent). Attrition rates in Canada are, as far as I know, not generally available, though some numbers from eight of the “U-15” were published in this article from the Margin Notes blog (and a longer discussion of completion rates and times to completion is here).

I mention these two issues together because for my dissertation I’ve been going over the research on PhD supervision and attrition, including the work of Barbara Lovitts (who’s cited by Cassuto as well), Chris Golde, and Susan Gardner among others. This research shows clear connections between supervision styles, departmental “climates”, professionalization opportunities, “student satisfaction”, and the outcomes of PhD study – including attrition.

What necessitates this research is that there are long-held misconceptions about the causes of non-completion. A key finding is that faculty attributions of student non-completion have often looked very different from the students’ own understanding of their experiences (or of what other students experience), and from their actual reasons for leaving. Since those who leave don’t generally get to tell their stories, assumptions can be made that they simply “didn’t have what it takes” or that the admissions committee didn’t “select” the right candidates for the program. Not only does this download the blame onto the individuals who leave, but it also masks other entrenched problems that can then continue without serious examination. It also doesn’t mesh with research showing that non-completers tend to look just as “prepared” for academic work as the students who finish.

While there is no single reason why students tend to leave (in fact it’s usually a combination of reasons), a major take-away from the scholarship on this topic is that the supervisory relationship is of crucial importance – not only in whether students graduate, but also in their subsequent (academic) careers. For example, Lovitts’ book Leaving the Ivory Tower confirms that supervisors who have already helped PhD students to complete are the ones most likely to continue doing so. However, the reasons are complex. These supervisors tended to have a give-and-take relationship with students rather than expecting the students to do everything on their own. They “scaffolded” and supported their supervisees, and cared about students’ intellectual development and overall well-being; they facilitated the students’ professionalization and their academic and social “integration” into the department and the discipline, through a variety of practices.

If there are no exit interviews with those who leave their programs, then it’s much easier to continue making erroneous assumptions about why they left in the first place. This is important because the reasons we assign for attrition carry significant policy implications. For example, even Cassuto’s article places emphasis on selecting the “right” types of students, and on certain types of student responsibility, such as seeking out the department’s attrition rate before applying – though this is not information that programs tend to provide to potential students. His taxonomy of students doesn’t include those who simply don’t know what support they will need, and don’t end up receiving it; it doesn’t include those who had the capacity to complete but were abandoned by their supervisors, sabotaged by departmental politics, or derailed by personal life circumstances. All these factors are discussed in the literature on PhD attrition.

Like most other issues in education, problems with completion have many causes. Any relationship is a two-way street, as pointed out in this post by Raul Pacheco-Vega. There are plenty of faculty who are already engaging in the helpful practices described by Lovitts and other researchers, just as there are PhD students who don’t put in enough work, or who probably shouldn’t have chosen to start the degree in the first place. But when it comes to implementing solutions, the nature of students’ supervisory relationships should be one of the primary targets of inquiry and intervention.

An example of an important issue that could be addressed is that of responsibility. Reasonable student expectations of faculty should be made clearer, and tacit institutional and professional knowledge – which is so crucial to students’ success in graduate programs – must be made explicit rather than being left to students to discover for themselves. If students understand what they should expect from a good supervisor – and for what they are responsible themselves – they may be able to make a more informed decision about this important working relationship (and whether in fact it’s working at all).

In some cases, this kind of change will take time and a great deal of consideration, because if we take the research seriously, the problems extend beyond merely asking professors and students to engage more often in certain practices. The problems may lie with the culture of a department or program, or indeed (considering some of the comments from Lovitts’ interviewees) with the nature of academe itself – which is where we have to ask ourselves: what kind of university do we want, and what kind of faculty will be working there? For example, if students also listed “personal problems” (as many of them did), including stress on existing relationships and the demands of raising children, does this mean those who desire a more balanced life will be inherently unsuited to academic work?

PhD students’ “dissatisfaction” should not be dismissed as merely the whining complaints of the academically inadequate. When students don’t know what to expect, they don’t have the opportunity to align their decisions and behaviour with the appropriate expectations; when they don’t receive adequate support, they may not know how to get what they’re missing, or indeed that they’re missing something in the first place (until it’s too late). Not only that, but if we ignore these issues, do we not face a reproduction of what may be the worst aspects of academic life, in the name of “trial by fire”? Those who “make it through” are often assumed to have some inherent set of qualities that makes them a better “fit” for academic life. But closer attention shows that this clearly isn’t the case, which means – even if attrition rates are lower in Canada – we need to seek out appropriate explanations.

MOOCs, access, & privileged assumptions

In this blog post I compare the rhetoric of accessibility that occurs in arguments for MOOCs, to the kinds of examples chosen to represent this – in the context of an existing literature on higher education accessibility. Here is the original post, from June 19, 2013: MOOCs, access, & privileged assumptions.

Later this week I’m going to be on a panel about the inescapable subject of MOOCs, so for this post I’m thinking through an issue I’ve been noticing since I last wrote a big post on this topic, which was during the peak of the media mayhem in July 2012. For many of those researching higher education, even those who’ve been doing it for just a few years as I have, the ongoing hyperbolic MOOC debate that has hijacked the higher ed news has been quite frustrating. Of course, there is plenty of bluster on both sides of this debate. But it’s really troubling to see many perfectly legitimate criticisms reduced to straw-person arguments about “faculty fear” (“those teachers just don’t want to lose their jobs!”), or about how those who are skeptical must be “against accessibility”.

So I would like to address this issue of “accessibility” that has come up repeatedly in MOOC debates. In articles that evangelise about the benefits of MOOCs, it’s often pointed out that there is a huge (global) demand for higher education and that many eligible students are losing out due to lack of resources or to their location in “third world” countries. Even in richer nations, student loan debt has become a more significant concern over time, alongside rising tuition; and postsecondary education is becoming more of a financial burden for those who can least afford it. All this has happened in a context where the economy has changed significantly over a period of about 30 years. Socioeconomic mobility has been stymied (including for those with education), middle-class jobs are being fragmented and technologised, and young people are finding it more and more difficult to get a foot in the door. This is the “perfect storm” often referenced in arguments for the “urgency” of turning to MOOCs as a solution.

Lest you should think I am blowing proponents’ claims out of proportion, I’ll provide a few examples. Take a look at this recent article in the Guardian UK, by Anant Agarwal of MIT, President of edX. Agarwal claims that MOOCs “make education borderless, gender-blind, race-blind, class-blind and bank account-blind” (note the ableist language – and the fact that he left disability off the list). Moving on, in this article from the Chronicle of Higher Ed, Mary Manjikian argues that MOOCs (and other forms of online learning) “threaten to set [the existing] social hierarchy on its head” and that we should “embrace the blurring of boundaries taking place, to make room for a more-equitable society”, which can be achieved through the dis-placement of elitist place-based education. And lastly, I point you to an article written by a MOOC user who epitomises the claims to worldwide accessibility that Agarwal so keenly puts forward: Muhammad Shaheer Niazi of Pakistan, who, with his sister, has taken numerous MOOCs and writes enthusiastically about the benefits of online learning.

I think these arguments raise the question – if MOOCs provide “access”, who, then, has access to MOOCs? What is required of the user to get the most out of these online resources? To start, you’ll need a regular, reliable Internet connection and decent computer equipment, which are of course not free. Assuming you have the right tech, you’ll also have to be comfortable with being tracked and monitored, given that surveillance is required to “prove” that a particular student did the work (there is much potential for cheating and plagiarism). There are also “analytics” being applied to your online activities, so you need to be on board with participating in a grand experiment where the assumption is that online behaviour shows how learning happens. In these “enclosed” MOOCs, there will be no private, “safe” spaces for learning.

And learning itself must fit the parameters of what is on offer – so the kind of “personalization” often touted is a rather limited one. You’ll be fine if you learn well or best at a computer, and if you don’t have any learning (or other) disabilities that require supports. The limited demographic data available also suggest that, thus far, MOOC users are more likely to be male and white, to have previous postsecondary education, and (judging by course offerings) to be speakers of English, even while the actual pass rates for the courses are still proportionally very low. Given the actual needs of the majority of students, we should consider whether all this is really about privileged autodidacts projecting their ideal of education onto everyone else.

Questioning the quantification of assessment, the level of access, the cost of tuition, the endless search for “economies of scale”, and the funding troubles faced by public higher education must happen if we are to find solutions to those problems. Yet plenty of people have been questioning these trends for a long time, and somehow the research they’ve produced doesn’t have the same appeal. Pro-MOOC critiques of the current system never seem to reference the existing literature about (for example) neoliberalism and the economization of education policy, increased privatization (from tuition fees to corporate influence on research), marketization and commercialization, and the unbundling and outsourcing of faculty work. Perhaps that’s because MOOCs would mostly serve to exacerbate those trends.

What then is the function of MOOCs in terms of “access”? It isn’t about extending real opportunities, because we live in a society and economy where opportunities are unequally distributed and even (online) education cannot “fix” this structural problem, which is deepening by the day; finding a solution will be a complex and difficult task. It isn’t about ensuring the students get higher “quality” of teaching, unless you truly do believe that only professors at elite universities have something to offer, and that all other faculty are somehow a sub-par version modelled on that template. Some have argued that MOOCs can reduce tuition costs for students, but surely there’s only so long a business can exist without making a profit, and the “product” clearly isn’t the same. The ongoing efforts to link MOOCs to the prestige of existing universities through accreditation deals are unlikely to leave these courses “cost-free”, and the hundreds of hours of work it takes to create one MOOC can’t go uncompensated.

Perhaps MOOCs in their revisionist, start-up incarnation are partly about projecting the possibility that even the most downtrodden can still do something to get ahead, at a time when the old path to mobility through hard work and (expensive) education seems less effective than ever. What could be better than more education, for “free”? In this sense, MOOCs really do help to “train” workers for the new economy, since they’re teaching us to govern ourselves, to be autonomous and flexible learners in an economy where businesses can simply refuse to provide on-the-job-training, instead holding out for the perfect custom candidate (while keeping wages low). This is framed not as a problem with business – or even with the long-term changes to the economy in general – but as a failure of education. Meanwhile, we’re encouraged to believe that we can mitigate personal risk by investing in ourselves, and if we don’t “get ahead” that way then it’s about personal responsibility (not systemic problems). If MOOCs “level the playing field” then no-one can complain when they’re left out of the game.

Who is most desperate for these possibilities? Maybe those folks will be the ones using MOOCs. But will the possibilities materialise into something real, for those who need it most, and not just for the few example “learners” who are invoked in MOOC-boosting articles and speeches? Are most current users there because they need to be, or because they have no other option? Would massification through MOOCs be more effective than any of the other forms of educational massification that we have seen over the past 200 years – and if so, why? In what way will the new tokens of achievement be any better than a university degree at present, and will they translate concretely into opportunities for the least privileged? After all, isn’t that what “access” is about?

A deeper understanding of context is relevant to every argument being deployed. To return to Muhammad Shaheer Niazi, it’s clear that he actually exemplifies why we cannot make sweeping generalizations about students based on their location. Niazi describes how he had “access” to a supportive and education-oriented family; to “a very good school in Pakistan”; and to computers and books in his home. As Kate Bowles and Tressie McMillan Cottom have both pointed out, there are many families in the United States who wouldn’t be able to provide this kind of environment, and yet “Pakistan” is used frequently as a signifier of poverty, inaccessibility, and general disadvantage. Niazi’s piece shows us he is far from desperate – he is in fact part of the small international group of gifted and well-resourced students that universities most desire to recruit.

Because of the claims being made about disrupting hierarchies and helping the underprivileged, the MOOC trend calls on us to ask ethical questions. Questions about control, resources, and agendas; questions about who is excluded and who is included in this “new” landscape. Questions about how the story of this “phenomenon” is being re-written and re-shaped to reflect particular priorities. We’re seeing perverse exploitation of arguments about access, when the “solution” proposed involves breaking down the university into commodifiable, out-sourced units and reinforcing (or even exacerbating) existing institutional and social hierarchies. In the current political/economic landscape, where there are so many problems that seem intractable, the apparent concreteness of the MOOC “solution” is part of its appeal and also part of why uptake at traditional universities has been so rapid and widespread. But MOOCs are an answer that can only be posited if we construct the question in the right way.

Risk, responsibility, and public academics

This piece addresses the way that early-career academics feel encouraged to engage in public or interactive communication, yet find that the professional assessment of these activities is still fairly low – and that the professional “risk” isn’t the same for everyone. It was re-posted on the LSE Impact Blog, titled “More attention should be paid to the risks facing early career researchers in encouraging wider engagement”. Here is a link to the original post, from July 3, 2013: Risk, responsibility, and public academics.

As my last academic event of the season, I attended Worldviews 2013: Global Trends in Media and Higher Education in Toronto on June 20th and 21st. I’m not going to write about the panel in which I participated (“Who are the MOOC users?”, with Joe Wilson, Aron Solomon, and Andrew Ng), since I’ve already spent enough time thinking and writing about that issue of late. But there was another very interesting theme that I noticed coming up throughout the conference. In a number of the sessions I attended, I heard emphasis being placed on the need for researchers and academics to communicate more with publics beyond the specialist audiences that have, until recently, been the norm.

This language of “engagement” has been taken up ever more enthusiastically by funding agencies and universities, often alongside the concept of “impact”, the latter term having already become influential (and embedded in the logic of research governance) in the UK. However, in all this talk about “engagement” and public communication it seems that less attention is being given to the question of which academics participate in this process – who can make use of the opportunity to “engage”, and why.

For a start, it’s somewhat disingenuous to discuss the “responsibility” for academic public engagement without considering the risks that this involves, and for whom that risk is most significant – i.e. most likely those already in marginalized positions in the institution and in society. The point about risk was not addressed explicitly in discussions I heard at the conference. In spite of the rhetoric about “impact”, the fear that many graduate students and early career researchers (ECRs) feel – and the anecdotal evidence of folks being told not to get involved in certain kinds of activities – suggests that “engagement” must happen on terms explicitly approved by the institution, if those involved are seeking academic careers. Grad students are not generally encouraged to become “public intellectuals”, a concept that regularly provokes critiques from those both within and outside the academy.

Not only was risk left out of the picture, but the discussion wasn’t adequately placed in the context of the increasing amount of non-tenure-track labour in academe. Those not fortunate enough to be on the tenure track still want to be (and are) scholars and researchers too; but it’s harder for them to contribute to public debates in the same way because they don’t tend to have a salary to fund their work, or a university “home base” to provide them with the stamp of academic credibility. At one panel there was also a discussion about tenure and academic freedom, and the argument was made that profs with tenure don’t speak up enough, given the protections they enjoy. Again, I think the more interesting question is about who gets to speak freely, with or without tenure, and why. Do all tenured faculty get to assume the same kind of “freedom” that someone like Geoffrey Miller does (or did)? What will happen to such freedom when the work of academics is further “unbundled”, as with the growing proportion of low-status contract faculty?

Blogging, of course, falls into the category of “risky practice” as well. Writing a good blog post takes time, effort, practice, and a lot of thought. But what’s interesting, and perhaps predictable, is that blogs were dismissed as not credible by at least one participant during a Worldviews panel about the future of the relationship between higher education and the media. In fact, one specific comment referred to ECRs “trying to make a name for themselves” through social media, as if this were merely a form of shallow egotism as opposed to a legitimate means of building much-needed academic networks.

This seems particularly short-sighted in light of the intense competition faced by graduate students and other ECRs who want to develop an academic career. To suggest that ECRs are simply using tweets and blogs as vacuous promotional activities is an insidious argument in two ways: firstly because it implies that such tools have no value for disseminating research (and developing dialogue), and secondly because it invokes the idea that “real” academics do not have to descend to such crass forms of self-aggrandizement. Both of these points are, in my opinion, simply untrue – but then again I’m just “a blogger”!

If universities are going to help educate a generation of researchers who will cross the traditional boundaries of academe, they will need to support these people in a much more public way – and in a way that is reflected in the priorities of departments and in the tenure and promotion process. Yes, we have the “3-Minute Thesis” and “Dance Your PhD”, but not everyone enjoys participating in this competitive way – and myriad other forms of public, critical engagement may be less well-accepted. Universities may claim that they value such forms, but who other than well-established researchers would be willing to speak up (especially about the academic system itself) without the fear of making a “career-limiting move”?

Those starting out in academic life need to receive the message, loud and clear, that this kind of “public” work is valued. They need to know that what they’re doing is a part of a larger project or movement, a more significant shift in the culture of academic institutions, and that it will be recognized as such. This will encourage them to do the work of engagement alongside other forms of work that currently take precedence in the prestige economy of academe. Tenured faculty are not the only ones with a stake in participating in the creation and sharing of knowledge. If we’re looking for “new ideas”, then we need to welcome newcomers into the conversation that is developing and show that their contributions are valued, rather than discouraging them from – or chastising them for – trying to participate.

The economics of learning

I wrote up a few comments about a conference held at OISE in February 2012, where the discussion of potential new universities in Ontario often returned to a recommendation that the province create teaching-focussed institutions. This separation of teaching and research doesn’t take into account – among other things – the lack of prestige attached to teaching work in universities. Here is a link to the original post from February 17, 2012: The economics of learning.

Last week, on February 7, a conference was held at the Ontario Institute for Studies in Education at the University of Toronto (OISE) on the subject of the new universities (or campuses) that have been proposed by the Ontario provincial government. The conference included speakers who discussed various issues relating to the creation of the new campuses, with a particular focus on ideas put forth in the book written by Ian D. Clark, David Trick and Richard Van Loon, entitled Academic Reform: Policy Options for Improving the Quality and Cost-Effectiveness of Undergraduate Education in Ontario. Though unable to attend the conference in person, I was able to watch it live from home via OISE’s webcasting system; since I’ve been following this issue for a number of months, I thought I might share some comments.

As conference participants discussed, there are many possibilities for what new “teaching oriented universities” might look like. The question is, what’s the context of their creation and what actual forms and practices will emerge? What kinds of “campuses” will these be, and what logic will drive their governance? For example, will they be like the liberal arts colleges of the United States where prestigious faculty engage in teaching while also producing research? My guess is that the answer is “no”, because this would conflict with the need to save money by significantly increasing teaching assignments per professor, which — as it turns out — is the goal.

One thing I felt was a bit lacking at the conference was discussion of the fact that in the broader “academic economy” teaching is simply considered less prestigious than research, which means a hierarchy of institutions is likely to emerge. In a differentiated system, universities will tend to be different, but not “equal.” What will be the implications of this for the new universities – for the hiring of teaching staff, for example? Will faculty hires see these institutions as less desirable stops on the road to a “real” job at a research-oriented university? I believe one speaker, Tricia Seifert of OISE, did address this problem by suggesting (among other things) that we should do more during PhD education to privilege teaching and to build the prestige of pedagogical work in the academic profession.

A related point is that in Drs. Clark, Trick and Van Loon’s model, there seemed to be an assumption that teaching quality operates in a simplistically quantitative way (behaviourism never really goes away, does it?). A 4-4 teaching “load” (80% teaching, 10% research, 10% service) is not just about having the same number of students split up into smaller classes; juggling and planning for multiple classes is more work. As Rohan Maitzen pointed out on Twitter, teaching involves more than “just standing there” (many hours of preparation, for example).

To continue with the theme of prestige and the devaluing of teaching, what I noticed when I read the book excerpt is that the word “university” is going to be applied to the new institutions partly as a means of marketing them to squeamish students. The authors state explicitly that “every effort in Ontario to create a label that resides in between colleges and universities – such as ‘institute of technology,’ ‘polytechnic university,’ ‘university college’ and the like – has failed to find acceptance and has led to requests for further changes.” Yet somehow “mission drift” — the tendency of universities to want to climb the ladder to a more research-oriented status — must be prevented through government regulation and a strict mandate.

This is one reason why existing institutions may be disappointed if they think they will be sharing in the new expansion. What “new” means is not an extension of other campuses, nor a conversion of an institution of one kind into a different type (i.e. college into university); what’s desired is a “clean slate.” A likely goal is to save money by preventing the duplication of governance structures like unions and tenure, because these reduce “flexibility” and increase costs. This could lead to the 31% cost savings predicted by the authors, who nevertheless expect that the new universities’ faculty salaries and benefits will remain competitive (my prediction is that salaries will be lower).

The purposes for building new teaching universities are not just pedagogical but also economic and political. Providing more access to postsecondary education is politically expedient and also matches the economic logic of the day, which is that building human capital for the knowledge economy can only occur through increased PSE acquisition. But as Harvey Weingarten pointed out at the OISE conference, campuses can’t be built unless there is government funding available for the purpose — and now we’re hearing that there isn’t any funding. I suspect that the release of the Drummond Report this week only confirms this, adding pressure to the process of imagining new teaching-intensive universities. It may now be even more difficult to ensure that pedagogical rather than just economic logic is what wins out.

Teaching the ineffable

Working with prospective teachers led to this post about what teaching “looks” like, how we imagine it should look, and how our expectations inform our assumptions about learning. Here is a link to the original post from February 9, 2012: Teaching the ineffable.

Recently I’ve been re-thinking (again!) what it is that we (citizens, persons, societies) expect from education and how this relates to the nature of knowledge and learning. At the moment I’m working with a small group of teacher candidates (TCs) in a concurrent B.Ed program. The course, which is a requirement for the students to progress in the program, is structured around attending a weekly placement at a community organization.

The idea behind the practicum is that TCs can encounter new kinds of experience, knowledge and relationships through their placements. Often the placements don’t involve what would normally be seen as “teaching,” and they usually don’t happen in schools; this is an attempt to unbind the idea of teaching-relevant experience and knowledge from what is most evidently associated with institutional environments, i.e. the school and the classroom. Learning happens everywhere; the school walls don’t define the space of our educational experiences. The community and the city, in turn, become sources of knowledge from which schools/teachers can learn to better understand, engage and include students and families.

What seems to be the case with many students is that they’ve inherited a blinkered view of what kind of experience “counts” towards one’s teaching practice, and of where and how this experience is to be acquired. This idea is entirely reasonable considering the systemic contexts in which most people encounter education and teaching at the primary and secondary levels. “Learning,” to many of us, is something long associated with formal education rather than with the everyday interactions and encounters that happen around the community and elsewhere. But children (and adults) learn everywhere, and that informal knowledge is something that shapes people’s experiences in formal settings.

Nothing about this approach feels easy. Many undergraduate students are still very “close” to their own experiences of education in the public primary and secondary system, which shape their expectations of the B.Ed. They’re also experiencing the pressure of professionalization within an explicitly defined field, so naturally they’re seeking out what will help them to fit into the system (not what will problematize the profession itself). Without professional skills, they won’t find employment. And I have my own challenges to face: how do we work within — and work to expand — the parameters I just described? How does one assess engagement, social justice, ethics, reflexivity, or the ever-elusive “critical thinking”? What happens when the assignments have no fixed value as part of a grade?

I use the example of TCs because it’s readily available to me, but I would say the dilemma about motivation and the pursuit of “relevant knowledge” is a version of what’s experienced by many undergraduate students (and by the faculty who teach them). It informs how students pick their classes and academic programs, and it’s directly connected to what each of us imagines we will need to go forth and succeed in the wide world. What I’ve noticed is that the quest for only what is immediately or obviously relevant is usually accompanied by an instrumentalist view of learning as well: “how many pages should I write, how many sources must I use, for what percentage of my grade?”

Our program strives to foster reflexivity between practical experience and theoretical questioning. Philosophically, this approach is not new: Alfred North Whitehead argued in 1929 that “ideas that are merely received into the mind without being utilized, or tested, or thrown into fresh combinations” are “inert”; John Dewey advocated a kind of problem-based/experiential approach over 100 years ago. Knowledge is in itself an experience, one that must be inhabited; it is a house we build for ourselves. But knowing about something takes work. Often a lot of effort is required not only to make contact with knowledge but also to “make it one’s own”, a part of oneself that goes beyond memorization and regurgitation. For example, we all know that having “done the readings” for class is not the same as having an understanding of what an author or theorist is really saying.

As educators we can only help students to a certain point — the point of interpretation beyond which their own motivation must carry them. Until they’ve had that experience of knowledge, it’s very difficult to explain the “goal” of learning beyond what seems technically useful, and to show that moments of deep understanding can be life-altering. Perhaps it’s true, as Whitehead stated, that “in education, as elsewhere, the broad primrose path leads to a nasty place”? And if the thorny route is the one that takes us where we want to go, how do we convince students that what lies at the end is worth the walking?

Access denied? – Considering SOPA and higher ed

This post addresses the potential implications of the US SOPA and PIPA bills for the larger higher ed landscape. Here is a link to the original post, from January 23, 2012: Access denied? Considering SOPA & higher ed.

Unless you’ve been offline and away from your computer for the past week, you have probably seen or read something about the many Internet site “blackouts” in protest of the U.S. bills SOPA (Stop Online Piracy Act) and PIPA (Protect IP Act), with high-profile demonstrations and shutdowns from Wikipedia, Google, Reddit, BoingBoing, and others.

In the course of my various degrees I’ve never had a class on intellectual property (IP) issues, and though I find it difficult at times to keep up with the details of the policies, I think it’s important that we all learn something about these issues given their increasing relevance to education.

As academic librarians stepped up via Twitter to help out those panicked undergrads who couldn’t function without a Wikipedia page to steer them in the right direction, I wondered in what ways my own research process is (or is not) entangled with the political, legal and technical issues raised by SOPA/PIPA. Revising, adding to, and sharing research materials is an ongoing process, one that I couldn’t have developed even 10 years ago because the tools — many of them online — simply weren’t available. At the same time, the information “field” is now so huge that it’s hard to know where and how to begin our searches, and the search is in no way restricted to library databases or to academically sanctioned channels of information seeking (Google Scholar is generally my first stop these days). What exactly is “content” now and how do we find it?

For example, one problem is that SOPA/PIPA could affect content on social media sites like YouTube, Facebook, and Twitter, as discussed in this TED talk by Clay Shirky. Shirky discusses how we not only discover, but also share and create content using the Internet. This is an important point — as students, teachers and researchers, we’re now using the Internet for much more than just straightforward searches for academic content. As well as the more popular sites, specialty tools such as Mendeley, Diigo, and Academia.edu are examples of how social networking and online information sharing have started to change what educators do and how we connect with others.

Though the example isn’t an exact parallel, Canada’s PSE institutions have already had copyright problems related to the increasing digitization of research and teaching materials. Many of us experienced first-hand the effects of changes to Access Copyright when a number of universities decided not to use the service anymore, after the tariff per student was to be more than doubled. This past September was, as I recall, more hectic than usual as we waited for course readings to be approved, assembled and copied so students could purchase and read them for class.

As others have pointed out, it was also during the past week that Apple unveiled its new online textbook project. Sadly, but unsurprisingly, it sounds like Apple wants to link the use of its textbook apps directly to expansion of the market for iPads by creating a new technological territory and governing it solo. At worst, this buys into the notion of technology as academic panacea while also cynically making the play to generate the technology on which education will come to rely. In other words, it’s a tidy business move; but will it work — and what will be the implications for knowledge and for already-stratified education systems, if it does? It may be nice to see education “front and centre” but not, in my opinion, when the goal is to create a closed economy.

While SOPA and PIPA have been postponed indefinitely, the issues they raise will not disappear. Even as we find ourselves with a new freedom to find research materials and share them with others, our new relationships and sources of information are dependent on systems that are beyond many people’s reach and understanding. Even if we learn how to code, to make our own apps, are we not still using infrastructure that is controlled elsewhere and could be policed or shut down without our consent? We need to pay attention to the changing information infrastructure (its physical, legal, and political-economic aspects), since the changes made today can and will affect our capacities as researchers and teachers in the future.