The value of a degree, part 2

This is the second of two blog posts addressing the topic of how we understand the “value” of degrees. Here is the link to the original post from December 21, 2010: “What Value for a Degree?” Part 2: Inherent value.

To continue from yesterday’s post about the “relative value” created when education is a scarce commodity, today I’ll write about inherent value–that which we are assumed to obtain simply by completing an educational credential.

Governments are concerned with developing “human capital”, which is the value of the workforce as measured by people’s skills and capacities for economic production. The argument is that the “knowledge economy” requires more and different skills of the workforce. This assumes that everyone should have more education because education will develop these skills (as economic value that resides in people). So by extension, there is an assumption that education has an inherent value—as something that contributes to the economy through the gross increase of human capital—no matter whether there are better jobs waiting for the graduates.

An assumption of inherent value also means that a financial payoff is assumed for the individual—so there is (economic) value in education for the individual student (or graduate, at least). This dovetails with the current (neo-liberal) policy trend of privatising the sources of PSE funding, including through raising tuition fees. Individual value means individual benefit, and therefore individuals should pay for this benefit.

But as discussed in my previous post, education does not benefit every student equally, so taking an “average” increase to earnings over a lifetime—which is the most frequent means used to “prove” the monetary worth of an investment in PSE—is not the best means of assessing the positive effects of higher education for the most vulnerable/least privileged students, who could benefit most significantly from them.

———————————————————————

In government policy there seems to be a confusion between an inherent value created by a university education (i.e. skills, training, knowledge) and the relative value of a scarce commodity. But what does this difference in concepts of “value” mean when it comes to public debates about education, and the kinds of policies that generate and are in turn influenced by those debates?

It tends to mean that we fight for university accessibility primarily in the form of increased enrollments, then wonder why attrition rates are so high and why so many students seem to “fail” at maximizing the resources provided by universities (such as student services). It means that governments create targets for the number of university graduates to be “produced” and for the percentage of the workforce that should possess a degree, assuming the additional human capital will generate returns to national economic success–but that many graduates nonetheless find themselves struggling to get work due to a lack of jobs appropriate to their level of education. Never mind ballooning debt loads, since personal financial “returns” to education should take care of this (unequally distributed) burden.

But if there is no job waiting at the end of an expensive degree, then the personal “investment” made by the student is seen as a failed venture for which s/he takes primary responsibility (particularly if student debt is involved).

In the UK right now we can see a clear example of this logic at work. As the system has expanded while continuing to use the elite model of governance, costs have increased while the economy has become increasingly volatile. Government response is to radically reduce funding for teaching and to allow universities to raise tuition. Students are told they must now pay for something that in the past was more or less free (i.e. for their parents), a situation that creates inter-generational resentment, producing as it does a lopsided distribution of payment for the lingering costs of expansion.

Yet students will continue to enroll (if places are provided), since university degrees are considered more necessary now, for more people, than ever in the past. It seems that the cost of education rises, and indeed the value diminishes, with increased demand–the opposite of how markets are supposed to work.

The value of a degree, part 1

This is the first of two blog posts addressing the topic of how we understand the “value” of degrees. I started thinking about this not just because of the ongoing commentary in the media on this issue, but also because a friend asked me whether I think “too many people” have degrees, and I think that question gets to the heart of a debate that has significant policy implications. In these posts I reflect on what we mean by “value” and how the different underlying assumptions about this idea have consequences for the imagined purpose of all education (not just PSE). Here is the original link to part 1, from December 20, 2010: “What value for a degree?” Part 1: Relative value.

A friend of mine, who teaches at an English-speaking middle school in Hong Kong, recently asked me if I think too many people are going to college (university).

I think about this a lot, since completion and participation targets are often in the PSE news and in policy. I always find it a hard question to answer—partly because answering it means asking ourselves about the purpose of a university education, and what precisely it is about university degrees that is assumed to equip young people with what it takes to succeed (economically) in the world. What is it that makes a university degree valuable and why is this important?

The focus for students, parents and governments is significantly economic, in policy and in practice—something that has become more the case over time as universities have moved towards “massification” (expansion) and more emphasis on private sources of funding (including tuition).

The benefits of post-secondary (and particularly university) education are expected to increase both the prosperity of individuals and the competitiveness of the national economy. So why is it important to question both the “graduation imperative” as economic policy, and the “accessibility” ideal as progressive social policy?

While in the past it was true that people who earned university degrees then went on to have more economic success, this was partly because university education was an elite education. No more than 5 to 10% of the population had a degree, so it was a valuable thing to have. Higher education usually meant training to be part of an elite; for example, the traditional “liberal education” was training for a small, privileged group who would become the “leaders of society” in law, politics and business.

In a sense, we’re now saying that as many people as possible should have an education of this kind, which means that by definition a university degree ceases to be “elite” in the way described, or to provide any value based on scarcity. This doesn’t mean there is no other kind of value—only that a degree will no longer provide the benefits of a scarce commodity (to the extent that it did in the past). It also means that universities are and will be using more tactics to explicitly demonstrate the value of what they offer (marketing, advertising).

In a system in which we rank and label people, a lack of obvious comparative value creates a problem, since we need to differentiate in order to allocate. If in the past the university degree acted as a filtration mechanism or a stamp of elite approval, it was the case that you had to have money, family/social connections, and/or a lot of smarts and savvy to get one. But how does this “filtering” happen when everyone gets a degree?

The cynical (or perhaps realistic) answer is that a relatively “elite” group will still form, and it does; filtration still happens because our system is driven by a capitalist economic model that works as a hierarchy driven by competition. People are ranked (using grades, for example), and it’s understood that this is more or less a zero-sum game. And some people still start out with far, far more than others when it comes to securing the highest spots in that ranking.

Yet most education systems are premised at least to some extent on the concept of meritocracy, the idea that people succeed based on “merit” or “excellence” alone, rather than through forms of extrinsic (often material) advantage. Though we have plenty of examples to support the idea that meritocracy functions fairly—e.g. working-class kids who “make good”—the wealthier and well-connected students still tend to get the best jobs in the current climate, no matter how many others may have university degrees. And from the inside, it tends to look like this is because of differences in cultural, social and economic capital, rather than “merit” alone.


Go on, have a laugh

When it comes to giving a good lecture, or teaching in general, I take inspiration where I can get it, and this post is about how I often think of favourite stand-up comedians when I’m trying to summon the confidence to speak in public (or to a class). I think humour can play a helpful role in teaching and learning. Here is the original link, from December 10, 2010: Go on, have a laugh.

NB, the Bill Bailey link in the original post no longer works, and I haven’t found a stable replacement. But it’s from the show “Part Troll”, which is worth watching in full.

This week’s long and rambling post, after a hiatus of about a month, comes out of my thoughts about the tutorial group I’ve been working with this term.

After each class, on the bus ride home, I think through the things that seemed to work and the things that didn’t. Which students were really engaged in class, and who was tuned out, playing on a laptop or sending text messages? Did we use media in the class and did that work well for the group? Did we look in a deeper way at the key points from the week’s readings, or did we spend a lot of time on irrelevant tangents? Perhaps most important, what was the overall dynamic in the room and did it help or hinder the discussion of issues important to the course?

Last week, I was “chuffed” when a student said she had remembered the meaning of a term based on a joke (a humorous anecdote) I had told about it. Her comment made me think about how humour is something I use in class, in a number of ways according to context—and I realise now that I’ve been ‘using’ it right from the moment I stepped into a classroom to teach for the first time. It turns out that my teaching role models are my favourite stand-up comedians as well as the best professors.

This led me to ask: What’s the function of humour in the classroom?

The more I thought about it, the more I realized that humour, being humour, simply isn’t taken seriously as a pedagogical tool.

And yet there’s a use for it. When I was first learning how to lead tutorials, humour had the function of dissipating my own sense of awkwardness at the situation. Since I wasn’t used to taking on authority, and didn’t feel comfortable with that role (i.e. the kinds of expectations there were from the students), the laughter made it easier for me to deflect and dissolve my own anxiety and that of the students, as well as creating a “cushion” for those times when I felt incompetent and unhelpful (usually, as I later learned, this was just my own perception). Another effect was that students seemed to feel more comfortable in a classroom where a few laughs were encouraged.

To me, humour has also been a means of highlighting the ridiculousness of ‘normality’, which is an entry point to critique (for example I showed this sketch in tutorial, as a way of addressing essentialism). I can’t count the number of times I’ve found myself inadvertently ‘opening up’ (making accessible) a perfectly ‘serious’ issue by making a joke.

Humour is an important strategy when lecturing with a large class, as well. In some ways, the skills demonstrated by stand-up comedians could be seen as a pretty fair fit with those required of lecturers in the university setting–keeping the attention of a large audience for a couple of hours without them being distracted, in such a way that afterwards they somehow remember what you talked about. Those skills are applicable across boundaries. And just as many professors make jokes about their academic material, many of the best comedians have a serious point driving their work.

Two of my favourite performers of stand-up comedy are Bill Bailey and Dylan Moran. Like all successful stand-ups, Bailey (who is English) and Moran (Irish) have ‘trademark’ on-stage styles. From Moran’s shows, what strikes me in terms of applicability to teaching are his uses of narrative, creative language, and vocal modulation. In this clip, he discusses the idea of having untapped personal “potential”: “leave [it] absolutely alone”, he advises, before launching into a lengthy, fantastically detailed description of what you imagine your potential to be (“flamingos serving drinks”)–as opposed to what it actually is. Like the best lectures, this performance is impossible to re-create through quotes alone because Moran’s style is the greater part of what makes the material funny and engaging.

Bill Bailey, on the other hand, has a way of soliciting responses from the audience and incorporating them into his act; he also takes slight in-the-moment thoughts and accidental slips and turns them into commentary and productive tangents. In one section of his show “Part Troll”, he involves the audience in making the sound of “a giant breaking a twig”, then invites them to shout out the names of famous vegetarians (which he re-imagines as a horse-race). Bailey has a knack for creatively incorporating the unexpected into his ‘act’, in ways that generate relevant connections without losing the overall ‘thread’. I think this translates as an important classroom skill because it can help to involve students in a discussion, if we can relate their contributions, their experiences and examples, to a theme that’s part of the course–without ‘losing’ the point at hand.

I don’t consider teaching to be all ‘performance’–and not all humour is helpful or appropriate in the classroom. But after watching so many tedious, monotonous lectures in which students (in some ways justifiably) tuned out of the course and in to their iPhones and laptops, I’ve developed an appreciation for presentation–and I’ll take my role models where I can find them!

Creative thinking

I’m fascinated with the idea of “creativity” and I have been for a long time, probably because I started out in the fine arts (and spent 2 years working on a BFA). However, I find I don’t identify much with the way creativity is so frequently discussed in economic terms. This post was the beginning of some thoughts on the issue. Here is the original link, from November 2, 2010: Creative thinking.

Lately, I’ve been thinking more about the nature of “creativity” or what it means to “be creative”–probably because there’s been an increasing amount of conversation about education and creativity, relating these things to the development of solutions to pressing social, economic and ethical problems.

One of the reasons I find it hard to imagine “teaching creativity” is that I’ve never not been “creative” myself. I’ve always been one of those people who was labelled as such fairly early in life, and in some ways that’s made it harder for me to form an impression of creativity beyond the ways in which people tend to apply the term to me. I think the labelling also highlights the way that some talents (such as my ability to draw and paint) are associated with creativity, while others (a gift for numbers) might not be.

Another reason I find it hard to think about teaching creativity is that I still haven’t seen a convincing working definition of the term. My own definition, as far as I can think of one, would involve primarily three things:

Critical questions: It’s hard to be creative if you just accept what is already “there”, without thinking. Being critical is not just about identifying problems (for example), it’s also a process of questioning the assumptions underlying the problems and assessing the worth of various potential solutions.

Imagination: Criticism turns to nihilism or stagnation when one cannot “imagine” a solution. We need to be able to see the possibility of another way of doing things, beyond what’s immediately evident.

Knowledge and understanding: You cannot do something new and inventive and helpful, or imagine a possibility and bring it to fruition, or make reasonable judgments, when you don’t have a good knowledge base and an understanding of the tools available. This is the case whether you’re a ceramicist trying to determine the appropriate kiln temperature for a glaze firing or a policy-maker analysing the various options available for financing social services.

It matters how these terms are used, how words like “creativity” are defined, because of the salience of the concept in current political and economic discourse–in particular its perceived relevance to the much-theorised “knowledge economy”. What kind of policy proposals will be put forth in an effort to increase “creativity”? On what assumptions will these suggestions be based?

Much of the time, “creativity” is slotted into a kind of ideal trajectory of (economic) development, one that involves innovation, entrepreneurialism, economic efficiency and productivity, and national competitiveness (a good example of this is the analysis from Richard Florida, who has popularised the term “creative class” and whose work focusses on the economic benefits of creative work).

This means that there’s likely to be a preferred definition of creativity, one that fits with the trajectory–an ideal “creativity” that produces economic competitiveness as its ultimate outcome. In this case, what comes first: policy, or the definition of “creativity”?

All this is important for education policy because creativity is often linked to the public discussion about the “failure” of schools. Education, which has so often been treated as social engineering, is imagined as the best way to retool the workforce (human capital) for an “innovative” economy.

A useful example of this approach is that of Sir Ken Robinson, a prominent lecturer and consultant whose well-known talk for TED is a celebration of the inherent creativity of small children and an analysis of how the school system destroys said innate creativity.

In another video, Robinson argues that creativity can be assessed. How? By assuming a particular definition. Creativity is “not an abstraction–to be creative you have to be doing something.” So Robinson defines creativity as “a practical process of making something”, the “process of having original ideas that have value.” Originality points to the emphasis on newness and innovation, while value assumes the possibility of assessment; creativity can be assessed through determining the field and employing clear criteria that are relevant to that field. Robinson also stresses that assessment is both a description and a comparison of creative work.

I wrote out my own definition before listening to Robinson’s talk. I think it’s interesting that while he describes creativity as a “process”, he seems to be concerned primarily with the outcome of the process (“ideas that have value”). He also doesn’t delve into the ways in which different kinds of knowledge are valued differently, and how even within fields, ideas do not exist within a kind of meritocratic marketplace. Comparison and assessment are fundamental to the market as a mechanism of governance, so one could argue that Robinson’s emphasis reflects an economic basis for the concern with what children “produce” at school. It also feeds into a decades-old discourse of criticism of public school systems, one that has been notoriously unhelpful in producing better schools.

In coming up with a definition for “creativity”, I think we need to ask within what system of valuation “creativity” exists–and the ways that system affects how creativity is thought about and defined. What kinds of “creativity” are seen as appropriate, productive? And what does it mean for education when a constant public discourse of critique takes up such nebulous, catchy/catchall terms, which are in turn mobilised and reified in specific forms through policy debates (such as those occurring currently in the United States)?

The down-side of technology? – On class time

Considering the distractions in which students indulge while in class, many of which involve smartphones and laptops: what’s the answer to dealing with a situation where students are more engaged with their friends online than with others in the same room? How different is this from the distractions of the past, before iPhone and Blackberry? Is this about technology, teaching, both or neither? And how should we deal with it “in the moment”, in the classroom? Here is the original link, from October 13, 2010: The down-side of technology? – On class time.

I want to raise a topic that of course has no easy answers, but which has been coming up quite a bit recently in my job as a teaching assistant for a lecture class of about 100 students. I know many others have discussed this too, so I’m just adding another thread to the long conversation.

Last week in class–in the lecture right before the tutorial I teach–I sat in the back row, as is now my habit, and a fellow TA sat next to me. In the second half of this particular class a film was shown. During the film, some students chatted, others used their computers to look at Facebook or other popular sites, and/or to chat online with friends (this they do every class), and hardly any of them took notes even though the film’s content would be on the exam. From where we were seated, we could also see many students thoroughly tuned in to their mobile devices (BlackBerrys, iPhones etc.).

The main reason that we were paying attention to this is that the instructor had asked the students not to use Facebook during lecture. Her reasoning, simplified, is that while it’s more or less each student’s personal choice whether or not to engage with the class (student responsibility), other students might be distracted by your Facebooking activity–so it is about respect for one’s classmates, as well.

However, this logic has failed; in our class, it’s not unusual to see students wearing their ear buds during lecture and watching videos on their laptops.

After last week’s class we (the course director and TAs) had a discussion over email about how to handle the students’ use of these technologies in the classroom. The question is both a pedagogical and a pragmatic one: what model of learning underlies our reaction to the students’ “off-task behaviour”, and what will that reaction be? What is the next step forward from the argument about “respect” (such a painful position to abandon)?

To me this is not really an issue about the technology per se. After all, when students had only a pen and paper they could still indulge in the habits of doodling or daydreaming or writing and passing notes (as pointed out by this author). In our class, private conversations happen during lecture and there is laughter at inappropriate moments, showing that students either weren’t listening or didn’t care about what was being said. It’s not that new technologies create rudeness or boredom; they just hugely expand the range of distractions in which students can engage, and they do it in a way that’s difficult to censure explicitly (you can’t take away a student’s mobile phone).

Not only is technology not the only “culprit”–it’s also not the case that all students who use Facebook or surf the web are “tuned out” of class; they may be looking up something related to the course, for example, or otherwise using technology to add to their learning experience. Pedagogically, there are many ways for instructors to make use of technology in the classroom–but I think it can only happen when students are already interested and motivated, and keen to interact in class.

A well-known example is that of a professor in the United States who collaborated with a class to create this video, one in which certain relevant points about technology and education are conveniently highlighted–even as students are engaging actively in the solution to their own problems (more info and discussion here). The video “went viral” on YouTube–providing a great demonstration of students and faculty engaging with the world “beyond” the university and doing it through making their own media content.

How can we create this kind of engagement, which has to come from students, not just from professors? How do we convey the “rules of the game”, which require student participation, without being forceful, pedantic or dictatorial, without fostering resentment? It seems strange to ask students to participate in their own education.

I’m still a student myself–and I know I need to bring something to the educational equation (interest, energy, effort, attention, a desire to learn, a degree of self-discipline) or the result will be negative. There must be a balance of responsibility, between what the professor or teacher does–what the university provides–and what students need to do for themselves. Consumerist attitudes towards education (encouraged by high tuition fees) and the imperative to “edutainment” are skewing this balance as a marketised, customer-service model becomes more the norm at universities; yet so often in the past the balance was skewed too far towards the weighty dictates of the institution alone.

As someone teaching–even as a lowly tutorial leader–my observation is that “dealing with” changing student attitudes often happens through a kind of informed yet haphazard, everyday decision-making, through experiential negotiation of the common ground shared by ethics and praxis, driven by a need to act in the immediate present, to be proficient at teaching in a classroom. The loss of students’ attention feels like failure of a kind, but what does one have to do in order to “succeed”?

And so to return to the immediate problem, what should my colleagues and I do about our “classroom management” troubles? Should technology such as laptops or wireless Internet access be banned outright from the classroom? Such tactics feel paternalistic. Are there other ways of working with students to create a better environment for interaction and learning, such as making rules and setting parameters? What about when students don’t want to work–how do we walk the peculiar line between exercising “authority” and asking people to exercise authority over themselves?

Technology and research, part 2

This is the second of two posts; it’s about using Twitter as a research tool, which at the time I incorporated with del.icio.us but which I now use with Diigo, and Chrome instead of Firefox. Here is the original link from October 24th, 2010: Technology and research, part 2: tweeting and blogging.

Continuing my little discussion of the ways in which I’ve most recently been using online technologies in my daily research and writing habits, today I’m moving on to the complementary combo of Twitter and Blogger.

Since one of my goals over the past six to eight months has been to interact more with people who share my research/academic interests (outside of my graduate program), I’ve been doing more social media exploration than usual. A relatively recent major change to my online habits has been my increasing use of Twitter as a way of connecting with strangers and keeping up with news.

I operate with a kind of minimalism when it comes to technological tools–as I mentioned in a previous post, I tend to want only the tools I need, and only the tools that work. It’s for that reason that I (and others) didn’t start using Twitter until quite a while after I first looked at the site and logged on to create an account. I simply couldn’t see any point; like so many people, at first I thought of Twitter as a useless stream of trivial chatter that would only further clutter my already-limited field of attention.

In spite of my own skepticism, at some point earlier this year I decided to try “tweeting” a bit more in earnest. Since that time I’ve decided that there are “two Twitters”: the banal barrage of idiotic celebrity gossip and predictably dreary/melodramatic personal updates, yes, that Twitter does exist (of course!). But the flip side of it is a fascinating and wide-reaching series of exchanges, often with people I’d never have encountered otherwise; it’s a stream of useful news and links that I couldn’t possibly have rounded up on my own; and it’s a means of responding to those things, and sharing my own, in such a way that the conversation continues and expands.

But it does take time to learn how to use Twitter effectively as a tool–assuming you know what you want to accomplish with it. At first, without a list of “followers” and with no sense of who else was using this tool and what they might be doing, I felt as if I was sending messages into the aether with little idea of “audience”, tone, or purpose. Fortunately I had a few friends already tweeting busily, who helped set an example for me in terms of Twitterquette.

Among the more important things I learned was that while it’s more or less true that the more accounts you add to your own list, the more “followers” you’re likely to gain, the best way to get the most out of Twitter is by participating actively. For example, one means of navigating Twitter is through using “hashtags”, or words/terms attached to a tweet with a # sign: e.g. #CdnPSE for “Canadian post-secondary education”. You can “meet” other followers by using tags, and interact with them by “replying” to their tweets or by “re-tweeting” them (passing their content around). A system of crediting others is integral to all this; another aspect is that of suggesting users to other users (often with the tag #FollowFriday or #FF). I found that one of the biggest challenges here was feeling confident enough to interact with strangers, but once I was over that hurdle things became much more rewarding.

To sum up: I like using Twitter because it affords a form of participation in an ongoing conversation, but it’s one that isn’t limited to–for example–my Facebook contacts, who are an entirely different group. While on Facebook I keep things generally quite private, on Twitter I’m happy to see strangers adding me to their lists–unless they’re bots or marketers. (Now the only thing I can’t find, or haven’t found yet, is the perfect Twitter client. But that’s a whole different blog post…)

Tweeting got just a little bit easier a couple of months ago when del.icio.us (as mentioned in the previous post) also linked to the site, so now you can bookmark, tag, and send a link to Twitter–with a comment–all in the same pop-up window within your web browser (for Firefox, anyway). The other way I access the daily news is through Google Reader, so now I have a Reader–>del.icio.us–>Twitter process that works pretty well for finding and reading relevant news, saving articles for later, and sharing them with people who are likely to want to read them.

And lastly, there’s the blog. Even as an ex-zinester I’ve never felt comfortable writing blogs; the required regularity felt somehow journal-like, and I’m terrible at keeping journals. So I began, in fact, with a photo-blog that was at first a daily affair but eventually became weekly as the posts grew longer and often incorporated multiple pictures. A year later, after I’d managed to maintain Panoptikal and even pick up a few “viewers”, I decided to incorporate my academic interests and my new Twitter habit by starting an education-oriented blog (the one you’re currently reading), with the goal of practicing writing outside a formal academic context.

I’ve found that the blog is a great place to say something shorter and less formal than I would in an academic paper or presentation. It’s a place to brainstorm without pressure, a venue for painting a small picture of my own views and for developing them further, and conversing with others about the issues raised. It’s also something expressly public, so it’s accessible for those who can’t view journal articles or even private web sites where such conversations might happen in a more regulated environment (for example, Facebook). For anyone considering becoming an academic, the public nature of blogs can be a means of reaching a broader audience, of “engaging” multiple publics in the conversation about your research–and seeing immediate commentary. To keep building on that conversation, I embedded my Twitter feed and a list of links from del.icio.us into the blog’s format.

At this stage you may be thinking–this sounds like a lot of effort; what’s the point of all this reading and commenting and tweeting? The interesting thing is that I wasn’t sure myself, for quite some time, why I was “doing all this”. But I got more of an idea this past Friday when I got to sit in on a workshop run by Alex Sévigny, a friend who also happens to be a successful professor, a professional communicator, and a prolific blogger and social media buff.

The overall event, organised by Hamilton’s Cossart Exchange, was ostensibly for graduate students who are interested in developing non-academic careers. But I think Alex’s message was valid beyond its immediate context. His point was that for those people operating outside of existing/rigid employment structures, the process of “self-branding” (as unpleasant as it may sound) has become an integral part of professional success. Before social media, this was more difficult; but now that so many of us have access to social media tools, the opportunities have expanded dramatically. Development of an online “identity” or “face” helps you to make yourself known to potential employers and collaborators, and helps you connect better with those you’ve already met.

So it turns out that maybe there has been a use for all my blogging and tweeting, one beyond the immediate gratification of chatting with strangers about the things that interest me most. And here’s the lesson for grad students: since so many of us are spending too much time online anyway, we should really learn how to channel those efforts and make them count towards career-building (!).

Technology and research, part 1

This is the first of two posts, and focusses on Delicious (which at the time was called del.icio.us), a social bookmarking tool that I used to use but which I’ve now replaced with Diigo. I switched sites when the original version was sold off and it was unclear if or when the site would be usable again. Here is the original link, from October 7, 2010: Technology and research, part 1.

Perhaps it’s my background in visual art that makes me more prone to this, but for much of my life I’ve been suffering from pack-rat-itis. For example, I still maintain (though adding less to it now) my large collection of clipped images and texts from magazines and other paper publications. I keep a stash of various art supplies and a stocked “toolbox” with everything from string to copper wire to paintbrushes and tape measures. I’ve acquired a collection of notebooks and sketchbooks over the years and I keep these as well, as records and notes about ideas and projects both finished and unfinished.

And yet there’s a sort of competing tendency that keeps things in check: I’m also one of those people who loves the storage and organization section of IKEA, because I like the thought of keeping practical items handy in such a way that I can easily reach them and use them. I hate having mounds of stuff and no way to do anything with it; I dislike even receiving gifts if they have no useful purpose and simply require “storage” (sitting on a shelf). I don’t even see the point of having two of the same kind of screwdriver. Periodically I “purge” my supplies (usually when I move house) to make sure I’m not holding on to anything completely useless. My need for workable space may occasionally collide with the squirrelly tendency, but usually the one cancels out the other.

These habits have been transferred, now, to the work I do researching for my dissertation and other projects. Not only do I stash books and papers; my computer “desktop” itself has become a version of the way I’d probably organise my apartment if it were possible–everything is kept filed away, labelled clearly and in embedded folders, but everything is kept. And I’m finally at the stage where this habit is starting to pay off: I have a searchable library of notes and PDF files to which I can refer while working on the next phase of my dissertation. It looks slightly over-done to the casual observer, but then what is academic work if not retentive?

The latest manifestation of all this, and one that has become like a third arm to me when it comes to online research, is the social bookmarking tool del.icio.us. This little slice of magic won me over when I realised that all my current, browser-based bookmarks–which couldn’t be accessed from multiple computers–could be a) uploaded with minimal effort and b) tagged (categorised and labelled with key words), by me, in such a way that they would become useful.

Not only is del.icio.us a powerful tool for sharing things with others and seeing what others are reading; it is–more important to me–a means of creating a personal database of web-based content, accessible from any computer I happen to be using. Why is this desirable? Because I view the web as a major part of my research process, not only in terms of finding the materials I need (books, journal articles, etc.) and connecting with new people (including academics, writers, politicians and policy-makers) but also as a one-stop supersource for media content and information/commentary on current events–crucial to my interest in universities, post-secondary education, politics and policy, and the ways in which ideas about these things circulate discursively.

del.icio.us also has some pretty desirable features that make it easy to incorporate into my daily news-reading habits. As I mentioned above, existing browser-based bookmarks can be imported, saving a lot of duplicated effort (I was able to use about 4 years’ worth of saved links). There is also an extension integrating del.icio.us into your (Firefox) browser, so that clicking on a single button allows you to tag and comment on something before saving it to your account; the same extension allows you to search existing tags in a side-bar. The list of PSE links at the left-hand side of this blog page is channelled to Blogger from del.icio.us as well, showing only those recent links tagged as relating to PSE. As you can tell, the tagging system is key to the usefulness of del.icio.us, and I soon developed my own strategy for maximising the usefulness of tagging.

And while all this seems like a lot of work, it really isn’t–compared to the ways in which it’s paying off. During the York University strike over 2008-2009, I tagged/bookmarked over 300 news items–press releases, articles and blog posts–which I was able to use later for a media analysis that became a conference presentation. I’ve saved clusters of articles on a series of specific themes that will work as media case studies in the future (possibly for publications); one of these I’ve already used in a class lecture on Critical Discourse Analysis. And then there’s the usefulness of simply being able to access “that article” that you read two months ago, the one about gender and accessibility and women’s pay (for example), and bring it in to class or into a paper or blog post or–you name it. I see this not only as a way of keeping up to date with current developments in the “field”, but also as a means of enriching what I’m writing by referencing a more diverse array of sources.

del.icio.us is one of those Web 2.0 tools that makes me feel blessed to be researching in the Internet Era. And, I admit, it’s also just a teeny bit enjoyable to be able to justify my storage and organization “habit” (hobby? Obsession?) as a means of actually advancing/enhancing my own research work.


Places of learning

I’ve always been very picky about physical spaces, so it’s no surprise my first post for the University of Venus blog at Inside Higher Ed was about the architecture and spatial arrangements of universities, and what they tell us about how we believe education should happen. Here’s a link to the original post from October 5th, 2010: Places of learning.

I’ve always felt that the physical environment of educational institutions — their colours, their spaces, their architecture — is one of the least-considered elements in the constellation of educational “success factors,” though possibly the most pervasive one.

Take, for example, the graduate program in which I’m currently completing my PhD. Just before I began my degree, the Faculty of Education—in which my program is housed—was moved from a concrete tower in the centre of campus to a newly-renovated college building. This seemed like a fine plan; however, it wasn’t long after joining the program that I realized the re-design had been a failure. While the Pre-Service Department was housed on the airy, welcoming ground floor, the graduate students’ space, consisting primarily of a computer lab, was relegated to the basement. This separated the grad students from the Graduate Program office and faculty—who were now sequestered on the second floor.

You might be wondering: other than the inconvenience of stair-climbing, what’s wrong with this arrangement? Everyone is housed in the same building, at least, and it looks clean and efficient thanks to the renovation job.

The first problem is that while grad students can probably work in almost any room with a computer, housing them in the basement—which is referred to as “The Dungeon” by some program members—is a poor choice because they will spend more time in this room than most other students will spend on the ground floor. Providing a pleasant working environment means more people will use the lab facilities, and it gives grad students an additional reason to come to the department from off-campus. At a large and isolated commuter campus like ours, this is important, because it helps to create a communal environment and to foster the social and peer support that is so vital to graduate student success.

The second problem relates to the same issue: physically separating faculty members from graduate students makes it more difficult for students to have informal, serendipitous and social contact with professors. So assigning graduate student space to the basement, in a room which is well-equipped but sterile and detached, means adding distance to the existing (non-physical) chasm that often separates students from faculty. Not that the faculty space is well-designed either—it’s standard academic architecture, a loop of corridor lined on each side with offices, following the shape of the building. Most of the office doors are closed.

Part of keeping students in a program, keeping them “engaged” with classes and faculty and other students, involves creating a space where they can feel welcome and included. I feel strongly that educational architecture—the “place” of education—contributes to the kind of educational experience we have, from grade school all the way to the doctoral degree. Institutional architecture sends a message, and affects messages sent; it expresses an idea about the function of the environment it helps create. In the documentary How Buildings Learn, Stewart Brand suggests that while buildings may indeed “learn,” people also learn from buildings; our practices and habits, even our feelings, are shaped by our environments—and thus so is the work we do within them.

Amid the current cuts and crises in higher education, it may sound trite to offer this kind of critique. But with graduate school attrition generally hovering around 50%, universities should be taking more seriously the research about what helps students adapt to university life and to academic culture. The effects of physical space are very real. I think it’s no coincidence that in our program, students often find it difficult to “meet” a supervisor. After all, there are few real in-person opportunities to do so, outside of planned events and the classroom—relatively formal occasions.

While we can’t necessarily change the buildings we’re in, we can be sensitive to their use, to our adaptation to the context provided. And we can ask ourselves questions. What would the building look like if we began by asking how people learn? How do people meet each other and form learning relationships? If you could design your own workspace, your own learning space, what would it look like and why? This need not involve a major reconstruction project. If the university had taken these things into account before renovating our program space, the same amount could have been spent and things might have looked, and felt, very different.

Writing it out

Link to the original post from October 4th, 2010: Writing it out.

At the risk of drifting into the Dull Squalid Waters of Graduate Student Angst, today I’m going to talk about writer’s block–possibly as a means of getting around it. Now that’s creative! 😉

In my case, getting stuck on process is something that often comes from insecurity, a fear of “acting” and “just getting things done”; so I’ve tried to work at my own writing strategies over the years. But this kind of detailed thinking-through and development of self-knowledge isn’t necessarily something we see being explored in graduate school (for various reasons–see my previous posts about related issues), possibly because writing help and development are often assumed to happen during the student’s coursework (unless there are no courses) or at the university writing centre. It may even be assumed that students should have learned how to write during their undergraduate studies, or that they “had to know how to write” to get in to grad school. Yet I’ve had numerous professors tell me that writing skills are a major problem even at the graduate level (where a whole new level of writing is required).

I was recently helping a friend, who is an M.Ed student and a good writer, to prepare a grant application–and I noticed that his draft had been re-written by one of his profs (rather than merely edited). I could tell from the language she’d used, compared to previous drafts he’d written; and because the language had changed, so had the project–into something he hadn’t really “framed” himself.

As we went over this new, re-written draft, I helped him to replace language that seemed inappropriate by asking about the ideas behind, and impressions conveyed by, the words; we also “broke up” the seemingly polished structure of the writing by cutting, pasting, rearranging, and adding in points with no concern for cosmetic editing. We pulled out the issues that seemed to be central and made a list, starting over with a new structure and concentrating on telling a coherent “story” about the project.

It felt as if the real focus kept getting lost in all the ideas that were floating around–that was half the problem. But the real trouble for my friend was even more basic–he had been told to write something in a completely new genre, and offered almost no guidance. With many thousands of dollars’ worth of grant money at stake (the Ontario Graduate Scholarship is worth $15,000 for a year, and Tri-Council grants offer more), writing had suddenly taken on a new and immediate importance, and there was little appropriate help to be found from professors swamped by similarly panicked grad students (a good number of whom have never heard of a “research grant” before their first year of PhD).

In the end it wasn’t due to my teaching skills that we ended up making progress (if we did)–far from it, I’d never done this kind of work in my life and I had to think: how does one write? How do I write? After all, I was pretty much the only model I had to go on. I had never really thought about that uncomfortable process outside of trying to enact it somehow, as contradictory as it sounds. My friends don’t usually discuss how they write, though they frequently bemoan the difficulty of it. I’d helped students with writing before, but there had never been time or space for such in-depth consideration. So the struggle for me was one of translation and negotiation, and fortunately what I did have was some experience with producing grant proposals.

This only made me think more about my own, current editing tasks–my dissertation writing and the papers I’d like to see published, in particular. I was recently forced to consider how much my process must have changed over time, when I was revising a paper written during one of my MA courses. The paper lacked the structure I would have given it had I written it more recently–indeed, I’m currently re-ordering the entire thing so that the reader isn’t expected to plough through the textual equivalent of an army obstacle course. My more recent writing is evidently better planned, as the other papers showed, but work from just 18 months ago still seems littered with tentative statements and unnecessary words, begging for a linguistic pruning.

And yet I can’t remember ever having been told anything about these things–ever really learning them–other than perhaps by osmosis. This gives me some faith in the concept of a kind of gradual improvement with time and practice; but I still think it’s the self-reflexive process of working with other people that brings real perspective and the motivation to actually consider one’s habits and tendencies in more depth, with an eye to doing better (writing) work, and to working better overall.

The proof of the pudding

Link to the original post from September 21, 2010: The proof of the pudding.

Throughout the first few weeks of September, we’ve seen a number of reports released, both in the U.S. and Canada, discussing and describing (quantitatively) the positive outcomes that students generate from obtaining university credentials. These reports have appeared at roughly the same time as the international university “rankings”, which were unleashed around the middle of the month–along with OECD education indicators and Statistics Canada reports on tuition fees and national education.

The strategy here seems straightforward enough; after all, at the beginning of the school year, it’s not primarily students but rather their parents–in many cases–who are concerned about whether the college or university experience is going to be “worth the investment”. (I would argue that the parents should also look to their own departing children if they want to know the answer to that question!) It’s a great time to capture an audience for the debate, since students beginning their last year of high school at this time (most of them still living at home) will also be searching for relevant information about possible PSE options.

These articles and reports stir up the debate about public vs. private funding of PSE, about the rising proportion of university revenue generated by tuition from students and families, and the cost to the state of educational expansion. They also pitch university education primarily in terms of its economic value–not only to individuals, but also to the state (since educated people are “human capital”). Education correlates with increased income over one’s lifetime, with better health (saving taxpayer dollars), and with inter-generational class mobility. These arguments, along with those citing tough times for the government purse, are frequently used to support a pro-tuition-increase position both in the media and in policy debates.

All these points may seem valid enough until we consider the fact that while students may all technically pay the same amount in tuition (say, at a given university or in a particular program), they don’t all receive the same “product”. And universities generally advertise to them as if the same product is really on offer to everyone. Which it certainly isn’t–the costs alone (which exceed tuition) are borne in entirely different ways by different students, a point briefly raised by Charles Miller as quoted in this article. If my parents pay for my tuition and living expenses, then what costs am I absorbing over the period of a 4-year undergraduate degree? How does this compare to a situation without parental support? Low-income students are less likely to have family help and more likely to take on a large debt burden; they are less likely to have savings accounts and personal investments, less likely to be able to purchase cars and condos when their student days are done.

Aside from the variation in economic circumstance, students also bring differences in academic ability and social and cultural capital to their degrees, which means that development differs for each person and so does their overall capacity for career-building.

Not only does university have different “costs” for different people; it also has highly variable outcomes. Some students will land solid jobs and find themselves upwardly mobile after completing a bachelor’s degree. Others may continue to a Master’s or even a PhD and discover that gainful employment is impossible to find, for a variety of reasons. There’s also the question of whether students obtain jobs in their chosen fields–or within a particular income range, for that matter. And once they do find employment, earnings differences by gender (for example) persist to the extent that women in Canada still earn significantly less than what male employees take home for equivalent work.

Another form of quantitative justification, the rankings game is an attempt to make the intangible–the “quality” of education, or of the institution–into a measurable, manipulable object. Part of the yearly ritual is the predictable squabble over methodology, which generates much commentary and debate, particularly from those institutions that have found themselves dropping in the international league tables. This quibbling seems ironic given that all the rankings are embedded in the same general global system of numeric calculation, one that feeds international competition and now constitutes an entire industry that rides on the backs of already overburdened and under-funded university systems. While the public may rail against the supposed over-compensation of tenured professors (salaries represent universities’ biggest cost), institutions continue to engage in the international numbers game, pumping money into the yearly production of “free” data that are then made inaccessible by the ranking organizations (who profit from their use).

Education reports, with their quantitative indicators of the economic “benefits” of higher education, are a part of the same overall tendency to assess, to compare, to normalize and standardize. Earnings-related numbers often provide rhetorical support for policy agendas that involve higher tuition fees, since proving the “private” benefits of education means that we can charge the user or “consumer” of education for access to these (eventual) benefits.

Rankings and statistics serve as a means of informing risk assessment–for governments, when funding is increasingly based on “performance”, and for students, when it’s about choosing the “better” university. But no numbers can truly gauge or alter the inherent risk of education and knowledge, the ineffability of the paths we take to discovery, the serendipities of fortune and temperament that can lead one person to the gutter while another may hit the heights of achievement. Students have moments of inspiration, they meet undistinguished professors who never publish but turn lives around. They form unexpected friendships and stumble on opportunities, skewer themselves on pitfalls both obvious and unseen.

In other words we cannot ac/count for this most joyful and painful side of our educative experience–the unknown element which is frequently the most formative one; and the more we attempt to inject certainty into this process, the more we set ourselves up for disappointment. This doesn’t mean there’s no use for numbers, for evaluations and assessments, for attempts to improve our universities. But sensible decision-making, whether by students or by governments, will always involve more than a measurement.