Myths and mismatches, part 7

This is part of a series of posts that was written as a response to – and a means of thinking through issues raised by – an e-course by Jo VanEvery and Julie Clarenbach called “Myths and Mismatches”. Here is the link to the original post, from January 15, 2011: “Myths & Mismatches” Part 7: How to Apply Yourself.

Closely tied to the idea that “Academia is the only game in town” and that “You’re not qualified to do anything else”:

“Myth #4: School is the only place for smart people.”

Jo and Julie pose the question, “why are we telling ourselves that if we’re smart, we must necessarily go for the highest degree possible?” One answer would be that this is how the system works; certainly Ken Robinson makes this argument, that the entire educational apparatus is designed to perpetuate itself by allowing those most successful to ascend to the level of Professor. Whether or not one agrees with the rest of Robinson’s theses, this point is useful because it highlights the process of replication that becomes especially important in graduate education. This can be stultifying; not only is there a government agenda to push PhDs out of the university, but “if the last twenty years have taught us anything […] it’s the power of smart people outside of school”.

Not only is “school” assumed to be the only place for intelligence; there’s also a hierarchy of knowledge. I know when I was considering doing my PhD in Education, I was advised not to (by more than one person) essentially because the discipline wasn’t respected; this seems to relate to a long tradition of Education as a research area being perceived as less valuable and prestigious than other disciplines (for some history on this, see “An Elusive Science” by Ellen Condliffe Lagemann). I’ve also heard of top students being advised not to apply for their B.Ed, for the same reason–teaching as a profession isn’t respected the way law, medicine and engineering are. The irony is that we need teachers to be the smartest people we can find, since they’re the ones preparing the future generations who’ll be running this place when we’re all too old to participate. Seems straightforward enough to me.

To be considered very smart and to do something other than remaining in academe is to violate expectations; after all, academe is supposed to be the one place where intellectual merit is rewarded most highly. But “what if we could bring our smartness to bear on whatever it is that makes us passionately, excitedly happy? For some people, yes, that will be academia. But not everyone.” I think this summarises my attitude – I want to be as effective as possible at something, given my own abilities and limitations; I need to feel like I’m doing something towards whatever my goal is (though the goal itself is evolving, and has always been so over time).

For myself, I do think it’s reasonable to view a university career as a good fit if I can engage in the things that are meaningful/productive to me (such as teaching, writing a book, being around other intellectually engaged people, communicating/engaging with different “publics”, and so on). I like the structure of the academic environment because in spite of its flaws, it helps motivate me and at its best it gives a kind of institutional form to practices and values I find important. And I think the university should be a place where new ideas can be tried out – where faculty also have a responsibility to voice critical viewpoints, to “engage” with larger audiences. Knowledge is political, that’s one of the things that draws me to this career; and the university is an ongoing project in which all members have some role. I find the perverse balance between tradition and innovation to be at the heart of the university, and rather than destructive I think this struggle is its very reason for continued existence over thousands of years.

But all this is about more than being “smart” or a good writer – it’s about negotiating the whole package, warts and all, and that’s part of what this whole series of posts has been about. You can be smart and do a hundred other worthwhile things, it’s just that this isn’t necessarily the message you’ll get while you’re at university, particularly in graduate school. If the whole package doesn’t end up working out, there are other, equally meaningful forms of employment to which you can apply your considerable skills and training.

Myths and mismatches, part 6

This is part of a series of posts that was written as a response to – and a means of thinking through issues raised by – an e-course by Jo VanEvery and Julie Clarenbach called “Myths and Mismatches”. Here is the link to the original post, from January 14, 2011: “Myths & Mismatches” Part 6: Getting Priorities Straight.

“Mismatch #3: Mismatch of Priorities.”

As discussed in a previous post on the “Life of the Mind” and pointed out by Jo and Julie, “structurally, if not ideologically, academia still takes as its ideal employee the gentleman scholar”. This means that if you have commitments to anything other than work, you’re implicitly being considered an exception to the ideal. And while this is more or less obvious depending on context (and has been mitigated over the past half-century), in general academic institutions continue to be built on the assumption of this invisible ideal (which is similar to the assumption of a certain kind of student).

I think I also previously linked to a blog post about this, but the gentleman-scholar “model presumes that someone else — let’s call her a wife — is doing all of the other work necessary for a life”. This would be the person who tends to all the details of everyday living that are presumed not to trouble the solo academic, including of course domestic duties such as housekeeping and child-rearing. This stereotype is still quite real; consider the phenomenon of the “faculty wife” (written about here and here), while the “two-body problem” is encountered by partnered academics moving to new institutions. [Update: here is a new post from Jo Van Every on the same topic.]

We all have other things in our lives beyond our jobs, and these can be accommodated with varying degrees of success depending on context. The problem arises when we cannot reconcile academic life with other parts of life, because of the nature and demands of successful full-time university work.

Jo and Julie note that “a mismatch of priorities is often read as a lack of commitment — if you really cared about this profession…” I want to point out that this problem is likely to be gendered; for example, women are the ones who take leave during a pregnancy and after children are born. Male academics are far less likely to have their tenure time-line affected by this, while women may be viewed as “less committed” to work if they choose to start a family.

I’ve definitely questioned my own priorities in light of the above assumptions. If I “really cared”, I’d be willing to go anywhere to find the right academic job. If I cared enough, I’d take contract teaching work while applying for every tenure-track job of relevance that came up. Or I’d have had five publications by now. But I know I’d rather find a job in another “sector” when my PhD is finished, if it seems that there’s no chance of getting something worthwhile at a university–which of course means I’m not “committed enough” to academic life.

I don’t know–maybe that’s true, but the most important thing is that I’m committed to my life.

Future tense

This post addresses how students are often preoccupied with the future because they’re insecure in the present (particularly financially, but in other ways too). No-one can really blame them for wanting to know where university will take them, since after all, they were told they had to go to university in order to get work later. If you don’t know much else about it, it’s hard to comprehend what else education might be for. Ironically, this means it can be harder to tap into the desire that’s needed in order to excel at university learning. Here is the link to the original post, from March 24, 2011: Future tense.

Perhaps because it’s grading season—mid-term exams and assignments have been rolling in and TAs and course directors are dealing with the results—over the past few weeks I’ve been seeing a lot of frustrated talk from academics on Twitter and Facebook. Some of it’s angry, some of it’s more anguished than anything else; but the common thread is that we’re all feeling as if we can’t “reach” students, and that students in turn aren’t doing their share of the work involved in the educational process.

Part of the problem is the way I just defined “education” in that last sentence. I invoked the notion of education as a “process” involving effort from both the person assigned as “teacher” and the people being “taught”; I don’t assume the students are the only ones doing the learning. But as I’ve argued in the past, a consumerist model of education—which encourages students to view education as either a service or a product or some mutation that blends both (“service product”)—undermines the notion of active participation because it assumes a strong element of “delivery” rather than “co-production”. We had a discussion about this in a recent tutorial where I pushed the knowledge-as-object metaphor to its ridiculous limit by drawing on the image of a “basket of knowledge” that we could pass around the room and from which students could simply take what they needed.

Apart from this definitional misunderstanding that causes so many conflicting assumptions about responsibilities and self-conduct, I suspect there are even bigger issues at work. I like asking of students, “how did you know you should go to university?” The reason I ask is because I’m interested in where that decision came from, not just the “why” of it. When we ask “why did you come to university?”, the answer is usually predictable—“because without a degree I cannot get a job.” If we ask how the decision was made, responses are usually quite interesting, and they reflect the influence that parents, teachers and guidance counselors have on students’ decision-making processes.

But what happens to the “work preparation” narrative when students realize that a university education is no longer any guarantee of employment, let alone the “dream jobs” that so many young people are encouraged to envision for themselves? I think this is where the whole arrangement starts to fall apart. You can tell students there are rewards (e.g. in the form of post-graduate employment options), and indeed the statistics continue to point to the financial benefits of PSE for graduates. But if you offer students no (clear) path to those rewards then the result is sometimes a disaffected nihilism towards learning. And one problem with university education is that it was never really designed to offer a clear path to employment.

We need to get at the contradiction in the fact that students come to university because it’s “necessary” to get ahead in life, yet in some cases they show little or no enthusiasm for university learning and are confused that there is no obvious connection between what happens in class and what they expect to happen at a job, later on. I think this is why we sometimes hear disparaging comments about how “undergrad is the new high school”–necessary, but not necessarily enjoyable or productive.

I’ve been thinking a lot this year about why students “tune out” during class and tutorial, particularly when technology shows up as a distraction from class. Larger social, economic and educational trends are one reason for effects such as these; for example, the consumerist concept of education as “product” often correlates with students’ focus on grades (outcomes) rather than learning (which often irritates professors and TAs).

We can’t take on those big issues alone, in one course, in one university; they’re ongoing and need to be addressed and re-addressed by everyone. The question is how to navigate these currents when we’re faced with the everyday “realities” and frustrations of teaching in universities–grammatically unsound assignments written in haste because students are working 20 or 30 hours a week alongside full-time study (so who’s to blame?); flimsy excuses for skipped tutorials (who can we believe?); papers submitted weeks late without notifying the professor or TA that an extension was required (how could we know?); students “burning out” and disappearing without even dropping the course (what happened?); and on, and on.

Now more than ever we’re reminded that education is a collaborative effort, and behind that effort must be desire–the desire of the person “teaching” to assist, collaborate and convey; and that of the students, a hunger for knowledge based in questions about the world. Last night in class I talked about how I became interested in education and involved in politics, and how in my experience the key ingredient to success in university is to find some thing about which you have critical questions, a boundless curiosity, a constant hankering, an “itch” that can only be scratched with learning. I think then the learning starts to drive itself.

The difficulty lies in getting to those questions and issues, since their instrumentality for the future is obscure in the present. It’s why I told my own story–because students lack narratives they can use to order their present experience, and the tools to construct their own potential narrative; so they find it hard to project into the future even though they are so focussed on it. This is an anxiety-producing state of affairs.

New possibilities open up when we make the connections required to understand a story about how something happened, rather than a description of what is. Maybe it’s this causality that students crave, since they live in a world lacking the certainty with which their parents were so fortuitously blessed. The old stories about careers, adulthood and family no longer ring true in this era of instability, workforce “flexibility”, debt and recession.

Perhaps the universities should be places/spaces where we start telling new stories.

Myths and mismatches, part 5

This is part of a series of posts that was written as a response to – and a means of thinking through issues raised by – an e-course by Jo VanEvery and Julie Clarenbach called “Myths and Mismatches”. Here is the link to the original post, from January 11, 2011: “Myths & Mismatches” Part 5: The Myth of Academic Meritocracy.

Today’s “myth” from Jo and Julie is possibly the biggest one of all, and thus the most destructive should you buy into it whole-heartedly. It ties in with every other point that’s been made thus far in this series…

“Myth #3: Merit is Everything.”

I just want to point out that my response to this issue is always a very personal one, for reasons I will partially explain below.

For the record, the ideal of meritocracy–that you succeed at academic work primarily because of how smart you are–is a myth (as Jo and Julie state: “Excuse our language, but this is all a fucking load of steaming crap”). And there are plenty of examples that illustrate it. One of them is the issue of socioeconomic class (SEC), something that has an effect literally from birth. In the research on post-secondary education (PSE), SEC is a clear factor and yet one that various researchers attempt to mitigate by making the claim that cultural capital matters more than economic capital. Any study you’ve seen that makes claims about the improving influence of the “number of books in the house” is a study making claims about class and culture in this way. The problem is that if you used the available statistics to draw a nice Venn diagram, you’d discover that the overlap between “class (economic) privilege” and “cultural capital” makes the diagram look more like one circle than two. Translation: you may have more books in the house, but you might not have the money to pay for an academically elite private school, or even for the extra tutoring that improves your grades and helps you win that merit scholarship. Money matters at least as much as “merit”.

Money also matters when you decide it’s not worth going into $35,000 worth of debt to finance your degree, even if a degree is “an investment that really pays off” as the research tells us (again and again). I know I didn’t want to go to graduate school if it meant I’d have to increase my student loan burden. Does that mean I would have been somehow “less smart” if I hadn’t gone? As it turns out, my grad degrees have been financed primarily by merit-based scholarships. Does that mean I’m now, somehow, inherently smarter than you? (Hint: the answer is “no”.) In the PSE research literature, this attitude of mine is called “debt aversion”. To me, coming from what would financially be called a working-class background, it’s called “common sense”.

Socioeconomic class is only one of the reasons why “merit” is a concept that draws a veil over the causes of “success” and “failure” in academe. But it’s the one with which I have the most intimate familiarity, and it’s why this response of mine is mostly about money/class/privilege vs. merit.

Jo and Julie write that the myth of merit-based success “doesn’t build us up — it makes us live in fear that, any day now, someone is going to figure out that we aren’t as smart as they think we are, and then they’ll kick us out.” This is why so many (particularly female) graduate students suffer from what’s known as “imposter syndrome”.

But what I’ve noticed is that some people seem completely impervious to the debilitating threats to self-confidence – the demons of self-doubt – that I know I have wrestled with in the past and continue to battle on a regular basis. Who are those people, and why do they seem so certain of their own place, of the value of their work, and of their intelligence? Career development in academe is dependent not only on how “smart” you are, but on your own assessment of your capacities and how you put that to work; and because we want to believe in “merit”, we often denigrate our own efforts and doubt ourselves even when we succeed (it was “luck”, or something else). The required confidence is harder to develop when you’ve spent your life not being outstandingly successful, and you’ve been assuming it was entirely due to your own deficiencies as opposed to other factors.

That self-interrogation of course informs the comparisons we’re (tacitly) encouraged to make between ourselves and others in grad school. We look at what others are doing, wondering why they seem to be “succeeding” when we’re not. Why do some people seem to be able to effortlessly afford that trip to the conference in San Francisco or Sydney or that three-month stint touring the Far East? Significantly, success in academe also depends on the capital you can invest in further professional experience, where additional available resources mean not having to take on two extra jobs to finance your conference travel (or pay the rent!), thereby losing time you could have spent on researching. Success, in the form of useful capital, builds on itself.

As someone who’s currently riding out my second large merit-based scholarship, obviously I have extremely mixed feelings about the concept of “merit”… on the one hand I represent, statistically, an aberration that should prove the effectiveness of meritocracy: a student without economic means who’s been able to get to the doctoral level, and to do it by winning awards for academic excellence. But sometimes all I see are the thousand other ways in which this story could have ended, the many times I felt like dropping out because I was so sick of being broke and angry and tired and stressed, and the others I knew who were smart and talented and dedicated and still didn’t win the scholarships I won, and who did leave, blaming themselves all the way. I tell myself I made the right friends, got the right advice, stepped into the right subject area at the right time. Surely these were the things that stood between me and a return to a past where I washed dishes for a living instead of marking undergraduate essays.

The line feels that slim–a paperwidth of possibility–one that can be “re-crossed” at any time, given the assumed tenuousness of my success. Because I will probably never feel as if I truly deserve what I have.

Communication, not edutainment

I wrote one of my University of Venus posts in response to the idea that undergraduate students seem to be easily bored by many different topics. Rather than banning them from engaging with “distracting” technologies in class, perhaps we could try to connect with them more and figure out where the roots of that boredom are buried. Here is the link to the original post from March 3, 2011: Communication, not edutainment.

How do we, as tutorial leaders or professors, deal with the revelation that students find classes or entire subject areas “boring?” And to what extent is it our responsibility to get them “interested?” These were questions that came to mind as I read Itir Toksöz’s recent UVenus post about “academic boredom”. While she was discussing the boredom she experiences in conversation with colleagues, my first thought was that boredom is not just (potentially) a problem for and with academics, but also for students.

I see boredom as something other than a mere lack of interest. I think of it as a stand-in for frustration, which can, in turn, stem from a sense of exclusion from the material, from the discussion, from the class, from understanding the point of it all; ultimately an exclusion from the enjoyment of learning. This can happen when the material is too challenging, or when the student doesn’t really want to be in the class for some reason.

Boredom is sometimes about fear, the fear of failing and looking “stupid” in front of the instructor and one’s peers. In other cases it can also be a symptom that someone is far beyond the discussion and in need of a deeper or a more challenging conversation. All these things can be called “boredom” but often they are more like communicative gaps in need of bridging.

In other words, boredom is often a mask for something else. We need to remove this mask, because of the negative effects of boredom on the learning environment and process. It causes people to “tune out” from what’s happening, and in almost every case it creates or is accompanied by resentment for the teacher/professor and/or for the other students. As a psychological problem, this makes boredom one of the greatest puzzles of teaching, and one of those problems that most demands attention.

It’s even more important to uncover the causes of boredom now that many students have access to wireless Internet and to Blackberries and iPhones in the classroom. Professors and TAs complain that students are less attentive than ever while in class, because of this attachment to their devices—something I’ve encountered first-hand with my current tutorial group.

I think the attachment to gadgetry comes not from the technology itself, but from the students. In my blog I’ve written about the issue with students using technology to “tune out” during lectures, and they do it in tutorial as well; they’re “present, yet absent”. To understand this behaviour we need to keep in mind that the lure of the online (social) world is reasonable from the students’ perspective. Popular media and established social networks are accessible and entertaining, and provide positive feedback as well as a sense of comfortable familiarity. Learning is hard work, and the academic world is often alienating, difficult, and demanding. It’s all too easy to crumple under the feeling of failure or exclusion. Facebook is welcoming and easy to use, while critical theory is not.

The other side of this equation is that in the process of negotiating and overcoming “boredom” there’s a certain point at which I can meet students halfway, as it were—but I can’t go beyond that point. Like everything else in teaching and learning, boredom is a two-way street, and the instructor is the one who needs to maintain the boundary of responsibility. I’m not there merely to provide an appealing performance, which leads to superficial “engagement.” I’m not “edutainment”.

However, I think it’s part of my job when teaching to “open a door” to a topic or theory or set of ideas. I can’t make you walk through that door (horse to water, etc.) but I can surely do my best to make sure you have the right address and a key that fits the lock. And that means using different strategies if the ones I choose don’t seem to be working.

Holding this view about boredom certainly doesn’t mean I’ve solved the problems with student attention in class; I’m reminded of that frequently. It just means I have an approach to dealing with the problem that treats their boredom as something for which there’s mutual responsibility. In an ideal learning environment there must also be mutual respect—but unfortunately mutual “boredom” is easier and often wins the day. My hope is to help cultivate the former by finding ways of unraveling the latter.

Myths and mismatches, part 4

This is part of a series of posts that was written as a response to – and a means of thinking through issues raised by – an e-course by Jo VanEvery and Julie Clarenbach called “Myths and Mismatches”. Here is the link to the original post, from January 11, 2011: “Myths & Mismatches” Part 4: Structural Faults?

Continuing my weeklong blogging escapade of commentary, today’s “Mismatch” from Jo and Julie is one that relates quite directly to my own research project on governance of universities…

“Mismatch #2: Mismatch of Structure”

Structure relates to the functioning and ultimately to the purpose of the university. Jo and Julie write that the purpose of the university is to “transmit the best that has been thought and spoken (i.e., maintain tradition) and advance the state of human knowledge through novel research (i.e., innovation)”. And they rightly point out that there’s something of an inherent contradiction between those two things, one that is dealt with in different ways depending on things like disciplinary context.

With the changing context of the university as institution come changes to the way academics are expected to do their jobs, including how they work with colleagues, where their funding comes from and how it’s allocated, how teaching appointments may work, what’s expected in terms of research and “engagement” with scholarly work and life, and so on. Jo and Julie cite the example of interdisciplinary work and the (lack of) institutional structures designed to facilitate it, as one of the ways in which even the best candidates in graduate school can “fall through the structural cracks”.

In spite of what looks like an obvious topic of study (post-secondary education), I’ve found that my own work seems to be pretty interdisciplinary–probably because of my background in multiple areas of study, which in turn is feeding (I think) an existing intellectual tendency. I follow paths that interest me and I’m usually focussed on some specific kind of “problem” or issue. If there’s an answer to my questions in another discipline, then I tend to start extending myself and sniffing around that territory in search of something useful for my purposes. And in the process of this, I’ve realised that interdisciplinary/”innovative” work is or can be fairly unsafe, depending (again) on the environment in which you’re working and on what your goals are. It’s hard to build an academic career in an environment rooted in disciplinary distinctions when you’re not sure which conferences to apply to, which scholarly associations to join, and (my own current problem) which journals would be appropriate venues for your research.

My tactic thus far has been to take “slices” of things and relate them to specific disciplinary areas, e.g. if a particular paper or presentation topic relates more heavily to Communication Studies, then I take that into account and try to tailor it to that perspective. It doesn’t always work, but it gives me something to start with. My hope is that knowing the norms and expectations of this environment will help me to find ways to work within the existing/evolving structure, even as I’d like to be a part of changing it–though as Jo and Julie note, “the university has a lot more inertia than you do” so to expect to make your own “place” within it is to take on a complicated (though obviously not impossible) task.

You may not feel like you really “fit” anywhere, but this feeling can have different causes and implications. It could signify that you’re on the “cutting edge” and doing work that will in time have an important place, but it’ll be a place you’ll have to carve out for yourself. Or it could just as easily mean that you should be looking for a career in some other arena that better accommodates your interests and needs–and as I’ve discussed previously in this series, there’s no reason why academe needs to be the only environment in which you can write, think, and produce scholarly work.

Myths and mismatches, part 3

This is part of a series of posts that was written as a response to – and a means of thinking through issues raised by – an e-course by Jo VanEvery and Julie Clarenbach called “Myths and Mismatches”. Here is the link to the original post, from January 9, 2011: “Myths & Mismatches” Part 3: Assessing Your Qualifications.

Today’s “myth” from Jo and Julie is a real classic, something that can be unconsciously inculcated from the moment you enter graduate school! And it’s this…

“Myth #2: You’re Unqualified to Do Anything Else”

This is the illusion that even after successfully completing a PhD, there’s still no-one other than a university who’d hire you–because what “real-world” relevance is there for your academic training? (And look – there’s that “Real-World/Academia” divide again.) Part of the reason for this assumption is that in graduate school, the focus is placed heavily on “content knowledge” and not on the skills and “process knowledge” that come along with grad school experiences. And (discipline-specific) content is generally less transferable to work outside the university.

This is an idea that works alongside “Myth #1”, that “success” after the PhD means becoming a tenured research professor (and that any work outside the university is somehow “lesser” than an academic job). Not only are you unqualified for a job in another field; it would also be an admission of inadequacy to abandon the quest for tenure-track employment. In some cases this line of thinking can be quite potently inhibiting.

As the authors point out, “the reality is that, outside of academia, most jobs are far more about your skills than about your content knowledge – and just by virtue of having been through graduate school, you’ve amassed a lot of relevant skills” relating to research, writing, editing, presenting, organizing, collaborating, assessing, teaching…the list goes on.

I still feel as if I’m simply not aware of most of the job options I have in front of me (but with a much better sense of possibility than I had several years ago). Though I’m in a position where my topic of research is one that can apply in more than one context, I still have so little idea of my own usefulness outside the university classroom–and how to put that to work. I’m fairly sure I still have talents I haven’t yet discovered, and I think that’s been the major lesson I’d take away from the past 8 years or so. After all, when I abandoned my BFA after two years, I never imagined I’d end up studying Communication Studies, Linguistics, and Education (and doing well at it). I know I have a lot of fears and insecurities to overcome, but I think I’d rather feel significantly uncertain than feel as if I’m staking my career on only one prospect.

Jo and Julie also write that “academic disciplines act as though they’re in competition with one another, viciously defending methodological and content boundaries between fields that one might think would have lots of things to say to one another.” I don’t know if it’s my own interdisciplinary background or perhaps a kind of inherent pragmatism, but I’ve never held much to the maintenance of boundaries between different kinds of knowledge. My reasoning is that I’m more likely to be able to address a problem critically if I can do it from multiple angles; and that is a skill highly applicable to the “real world”.

Lastly, there’s “a general denigration of intellectual work” in our culture (speaking broadly about Anglo-America), such that what is “academic” is considered to be irrelevant, disconnected from reality somehow–like academics themselves. This is reinforced by the beliefs we may hold about the “narrowness” of our education, beliefs that can prevent us from seeing our own value in contexts other than academe. They can also prevent us from learning how to communicate the relevance of intellectual work to larger publics, which is increasingly an expected function of faculty work as well.

Myths and mismatches, part 2

This is part of a series of posts that was written as a response to – and a means of thinking through issues raised by – an e-course by Jo VanEvery and Julie Clarenbach called “Myths and Mismatches”. Here is the link to the original post, from January 8, 2011: “Myths & Mismatches” Part 2: Time, Place, and Opportunity.

“Mismatch #1: Context”.

It’s a great idea to address conflicts of context, the “external circumstances” that have an effect on our career successes, because a lot of self-destructive psychological baggage can come from the idea that one’s “failure” is entirely one’s own fault. And while it’s important to take responsibility for your own decisions, just as crucial is the ability to recognise when your (lack of) “success” is being influenced by factors beyond your control. These factors can include anything from personal issues with health and family, to a simple lack of appropriate positions or an over-supply of candidates in your particular academic field; they are “more about timing and luck than […] a comment on your worth as a person or quality as an academic”.

In spite of the sense of it, I feel quite ambivalent about this point, because if I looked at the list of contextual factors in my own case, I’m pretty sure I’d pick another path to follow. That’s not meant as a comment about my own capacity–more as a point about the nature of the academic job market, which has declined considerably in the past 25 to 30 years. One reason for this pinch is that the “production” of PhDs has increased; another is that, simultaneously, the proportion of tenure-track academic positions has actually decreased as universities have come to rely on short-term contract faculty (or “adjuncts” as they are referred to in the U.S.).

So I do feel uneasy about the context in which I’m finishing my own PhD, one that I think is becoming more evident to more people, though I don’t recall that there was ever a frank discussion of prospects and odds during any of my graduate courses. While the PhD is not just about “getting a job”, I think career development should be emphasised from the beginning in a more well-rounded fashion so that by the time students reach year 3 or 4, they have a better sense of their options and a balanced idea of what factors they can “control” in terms of later employment. This could be seen not as simple “job training” but as a reasonable, thoughtful process in which to engage, considering the significant commitments of time, effort and resources required to complete a PhD, and the shrinking chance of achieving a tenure-track faculty position. It could also help graduate students to develop awareness of their strengths and capacities, and to build the resilience and adaptability that help with creating and navigating a professional career (in whatever field).

Myths and mismatches, part 1

This series of posts was written as a response to – and a means of thinking through issues raised by – an e-course by Jo VanEvery and Julie Clarenbach called “Myths and Mismatches”. According to Jo and Julie, the “goal with this series is to help you understand your experience [in academe] as both personal and structural.” This was a helpful series for me, since I was in the process of thinking through the implications of seeking a tenure-track job (hence the in-depth blog responses).

Here is the link to the original post, from January 8, 2011: “Myths and Mismatches”, Oh My!

Over the next week or so I’ll be blogging my responses to “Myths and Mismatches”, an e-course by Jo VanEvery and Julie Clarenbach. The goal of this series is to bring attention to a number of “myths” that can get in the way of making “conscious career choices” in the academic environment, particularly for those who are feeling “dissatisfied” with academic work.

I’ve been thinking about this a lot lately (and blogging about it too), since I need to make decisions about “where to go” next, and I find the options overwhelming. I thought it would be interesting to think through my responses to Jo and Julie’s course by writing about each of them as they arrive in my inbox.

“Myth #1: The Life of the Mind, or, Academia Is the Only Game in Town”

The first post refers to a misconception about the nature of academe, the idea of the “Ivory Tower”–one that is perpetuated by media images of university life. Jo and Julie advise us not to fall into the trap of imagining “academe” as a cloister into which one can retreat from the Real World whilst pursuing one’s ideas in peace among like-minded colleagues (and as far as possible from demanding undergraduate students, for example).

I would say it’s no coincidence that this concept of the Lone Scholar is reinforced by the ideal of the tenured research professor, which we’re generally encouraged to think of as the norm or the goal. If this utopian environment/position ever came close to existing, it was a characteristic of the traditional “elite” model of university education, something I’ve written about in previous posts.

The point here is that given the current context, you’re certain to be disappointed if you see this as the ideal, since the job description for professors includes juggling not only research but also teaching, committee and other “service” and administration work, student advising and mentoring, attending and planning events and conferences, and an array of extra-curricular work and activities. In fact the trend is for professors to be more “engaged” with audiences beyond the university because ultimately, public communication is what strengthens and smooths the relationship between universities and the communities and contexts in which they operate.

In terms of my own experience, I don’t think this idea of the “life of the mind” has ever been one to which I’ve had much access; and as wonderful as it sounds, I’ve also never really expected to be able to participate. Jo and Julie make the point that the mythical Great Solitary Thinkers were all men, which is only one part of that equation; there aren’t too many role models to emulate. I also don’t come from a particularly privileged background (economically or culturally), so my expectations have been different all along. I certainly never imagined I would end up doing a PhD at all. Since my undergrad years I’ve talked a lot with full-time faculty and had a good look at what happens in the day-to-day life of tenure-track professors (and part-time/contract profs as well). Probably the combination of these factors is why I’ve always felt ambivalent about the idea of trying to become a professor, as a specific career track. The increased competition in recent years has only made me feel less certain.

Jo and Julie point out that the flip side of “academe as intellectual cloister” is that the “world” outside the university is a barren and banal place, devoid of intellectual engagement. I think the myth of “real world” vs. “academe” is quite destructive, including that of a corporate/business world that’s somehow inherently unethical and opposed to academe. It simplifies the problems faced by universities, often reducing them to an “us vs. them” argument, and it precludes the possibility of meaningful engagement across boundaries. This kind of belief also seems to entail that academe is somehow more ethical than other environments. But to cling to that idea is to set oneself up for a despairing fall–academics are no more (or less) inherently moral or “good” than other groups.