Creative thinking

I’m fascinated with the idea of “creativity” and I have been for a long time, probably because I started out in the fine arts (and spent 2 years working on a BFA). However, I find I don’t identify much with the way creativity is so frequently discussed in economic terms. This post was the beginning of some thoughts on the issue. Here is the original link, from November 2, 2010: Creative thinking.

Lately, I’ve been thinking more about the nature of “creativity” or what it means to “be creative”–probably because there’s been an increasing amount of conversation about education and creativity, relating these things to the development of solutions to pressing social, economic and ethical problems.

One of the reasons I find it hard to imagine “teaching creativity” is that I’ve never not been “creative” myself. I’ve always been one of those people who was labelled as such fairly early in life, and in some ways that’s made it harder for me to form an impression of creativity beyond the ways in which people tend to apply the term to me. I think the labelling also highlights the way that some talents (such as my ability to draw and paint) are associated with creativity, while others (a gift for numbers) might not be.

Another reason I find it hard to think about teaching creativity is that I still haven’t seen a convincing working definition of the term. My own definition, as far as I can think of one, would involve primarily three things:

Critical questions: It’s hard to be creative if you just accept what is already “there”, without thinking. Being critical is not just about identifying problems (for example), it’s also a process of questioning the assumptions underlying the problems and assessing the worth of various potential solutions.

Imagination: Criticism turns to nihilism or stagnation when one cannot “imagine” a solution. We need to be able to see the possibility of another way of doing things, beyond what’s immediately evident.

Knowledge and understanding: You cannot do something new and inventive and helpful, or imagine a possibility and bring it to fruition, or make reasonable judgments, when you don’t have a good knowledge base and an understanding of the tools available. This is the case whether you’re a ceramicist trying to determine the appropriate kiln temperature for a glaze firing or a policy-maker analysing the various options available for financing social services.

It matters how these terms are used, how words like “creativity” are defined, because of the salience of the concept in current political and economic discourse–in particular its perceived relevance to the much-theorised “knowledge economy”. What kind of policy proposals will be put forth in an effort to increase “creativity”? On what assumptions will these suggestions be based?

Much of the time, “creativity” is being slotted into a kind of ideal trajectory of (economic) development, one that involves innovation, entrepreneurialism, economic efficiency and productivity, and national competitiveness (a good example of this is the analysis from Richard Florida, who has popularised the term “creative class” and whose work focusses on the economic benefits of creative work).

This means that there’s likely to be a preferred definition of creativity, one that fits with the trajectory–an ideal “creativity” that produces economic competitiveness as its ultimate outcome. In this case, which comes first: policy, or the definition of “creativity”?

All this is important for education policy because creativity is often linked to the public discussion about the “failure” of schools. Education, which has so often been treated as social engineering, is imagined as the best way to retool the workforce (human capital) for an “innovative” economy.

A useful example of this approach is that of Sir Ken Robinson, a prominent lecturer and consultant whose well-known talk for TED is a celebration of the inherent creativity of small children and an analysis of how the school system destroys said innate creativity.

In another video, Robinson argues that creativity can be assessed. How? By assuming a particular definition. Creativity is “not an abstraction–to be creative you have to be doing something.” So Robinson defines creativity as “a practical process of making something”, the “process of having original ideas that have value.” Originality points to the emphasis on newness and innovation, while value assumes the possibility of assessment; creativity can be assessed through determining the field and employing clear criteria that are relevant to that field. Robinson also stresses that assessment is both a description and a comparison of creative work.

I wrote out my own definition before listening to Robinson’s talk. I think it’s interesting that while he describes creativity as a “process”, he seems to be concerned primarily with the outcome of the process (“ideas that have value”). He also doesn’t delve into the ways in which different kinds of knowledge are valued differently, and how even within fields, ideas do not exist within a kind of meritocratic marketplace. Comparison and assessment are fundamental to the market as a mechanism of governance, so one could argue that Robinson’s emphasis reflects an economic basis for the concern with what children “produce” at school. It also feeds into a decades-old discourse of criticism of public school systems, one that has been notoriously unhelpful in producing better schools.

In coming up with a definition for “creativity”, I think we need to ask within what system of valuation “creativity” exists–and the ways that system affects how creativity is thought about and defined. What kinds of “creativity” are seen as appropriate, productive? And what does it mean for education when a constant public discourse of critique takes up such nebulous, catchy/catchall terms, which are in turn mobilised and reified in specific forms through policy debates (such as those occurring currently in the United States)?

The down-side of technology? – On class time

Considering the distractions in which students indulge while in class, many of which involve smartphones and laptops: what’s the answer to dealing with a situation where students are more engaged with their friends online than with others in the same room? How different is this from the distractions of the past, before the iPhone and the BlackBerry? Is this about technology, teaching, both or neither? And how should we deal with it “in the moment”, in the classroom? Here is the original link, from October 13, 2010: The down-side of technology? – On class time.

I want to raise a topic that of course has no easy answers, but which has been coming up quite a bit recently in my job as a teaching assistant for a lecture class of about 100 students. I know many others have discussed this too, so I’m just adding another thread to the long conversation.

Last week in class–in the lecture right before the tutorial I teach–I sat in the back row, as is now my habit, and a fellow TA sat next to me. In the second half of this particular class, a film was shown. During the film, some students chatted, others used their computers to look at Facebook or other popular sites, and/or to chat online with friends (this they do every class), and hardly any of them took notes even though the film’s content would be on the exam. From where we were seated, we could also see many students thoroughly tuned in to their mobile devices (BlackBerrys, iPhones, etc.).

The main reason that we were paying attention to this is that the instructor had asked the students not to use Facebook during lecture. Her reasoning, simplified, is that while it’s more or less each student’s personal choice whether or not to engage with the class (student responsibility), other students might be distracted by your Facebooking activity–so it is about respect for one’s classmates, as well.

However, this logic has failed; in our class, it’s not unusual to see students wearing their ear buds during lecture and watching videos on their laptops.

After last week’s class we (the course director and TAs) had a discussion over email about how to handle the students’ use of these technologies in the classroom. The question is both a pedagogical and a pragmatic one: what model of learning underlies our reaction to the students’ “off-task behaviour”, and what should that reaction be? What is the next step forward from the argument about “respect” (such a painful position to abandon)?

To me this is not really an issue about the technology per se. After all, when students had only a pen and paper they could still indulge in the habits of doodling or daydreaming or writing and passing notes (as pointed out by this author). In our class, private conversations happen during lecture and there is laughter at inappropriate moments, showing that students either weren’t listening or didn’t care about what was being said. It’s not that new technologies create rudeness or boredom; they just hugely expand the range of distractions in which students can engage, and they do it in a way that’s difficult to censure explicitly (you can’t take away a student’s mobile phone).

Not only is technology not the only “culprit”–it’s also not the case that all students who use Facebook or surf the web are “tuned out” of class; they may be looking up something related to the course, for example, or otherwise using technology to add to their learning experience. Pedagogically, there are many ways for instructors to make use of technology in the classroom–but I think it can only happen when students are already interested and motivated, and keen to interact in class.

A well-known example is that of a professor in the United States who collaborated with a class to create this video, one in which certain relevant points about technology and education are conveniently highlighted–even as students are engaging actively in the solution to their own problems (more info and discussion here). The video “went viral” on YouTube–providing a great demonstration of students and faculty engaging with the world “beyond” the university and doing it through making their own media content.

How can we create this kind of engagement, which has to come from students, not just from professors? How do we convey the “rules of the game”, which require student participation, without being forceful, pedantic or dictatorial, without fostering resentment? It seems strange to have to ask students to participate in their own education.

I’m still a student myself–and I know I need to bring something to the educational equation (interest, energy, effort, attention, a desire to learn, a degree of self-discipline) or the result will be negative. There must be a balance of responsibility, between what the professor or teacher does–what the university provides–and what students need to do for themselves. Consumerist attitudes towards education (encouraged by high tuition fees) and the imperative to “edutainment” are skewing this balance as a marketised, customer-service model becomes more the norm at universities; yet so often in the past it has slumped too far towards the weighty dictates of the institution alone.

As someone who teaches–even as a lowly tutorial leader–I’ve observed that practices of “dealing with” changing student attitudes often happen through a kind of informed yet haphazard, everyday decision-making, through experiential negotiation of the common ground shared by ethics and praxis, driven by a need to act in the immediate present, to be proficient at teaching in a classroom. The loss of students’ attention feels like failure of a kind, but what does one have to do in order to “succeed”?

And so to return to the immediate problem, what should my colleagues and I do about our “classroom management” troubles? Should technology such as laptops or wireless Internet access be banned outright from the classroom? Such tactics feel paternalistic. Are there other ways of working with students to create a better environment for interaction and learning, such as making rules and setting parameters? What about when students don’t want to work–how do we walk the peculiar line between exercising “authority” and asking people to exercise authority over themselves?

Places of learning

I’ve always been very picky about physical spaces, so it’s no surprise my first post for University of Venus blog at Inside Higher Ed was about the architecture and spatial arrangements in universities, and what they tell us about how we believe education should happen. Here’s a link to the original post from October 5th, 2010: Places of learning.

I’ve always felt that the physical environment of educational institutions — their colours, their spaces, their architecture — is one of the least-considered elements in the constellation of educational “success factors,” though possibly the most pervasive one.

Take, for example, the graduate program in which I’m currently completing my PhD. Just before I began my degree, the Faculty of Education—in which my program is housed—was moved from a concrete tower in the centre of campus to a newly-renovated college building. This seemed like a fine plan; however, it wasn’t long after joining the program that I realized the re-design had been a failure. While the Pre-Service Department was housed on the airy, welcoming ground floor, the graduate students’ space, consisting primarily of a computer lab, was relegated to the basement. This separated the grad students from the Graduate Program office and faculty—who were now sequestered on the second floor.

You might be wondering: other than the inconvenience of stair-climbing, what’s wrong with this arrangement? Everyone is housed in the same building, at least, and it looks clean and efficient thanks to the renovation job.

The first problem is that while grad students can probably work in almost any room with a computer, housing them in the basement—which is referred to as “The Dungeon” by some program members—is a poor choice because they will spend more time in this room than most other students will spend on the ground floor. Providing a pleasant working environment means more people will use the lab facilities, and it gives grad students an additional reason to come to the department from off-campus. At a large and isolated commuter campus like ours, this is important, because it helps to create a communal environment and to foster the social and peer support that is so vital to graduate student success.

The second problem relates to the same issue: physically separating faculty members from graduate students makes it more difficult for students to have informal, serendipitous and social contact with professors. So assigning graduate student space to the basement, in a room which is well-equipped but sterile and detached, means adding distance to the existing (non-physical) chasm that often separates students from faculty. Not that the faculty space is well-designed either—it’s standard academic architecture, a loop of corridor lined on each side with offices, following the shape of the building. Most of the office doors are closed.

Part of keeping students in a program, keeping them “engaged” with classes and faculty and other students, involves creating a space where they can feel welcome and included. I feel strongly that educational architecture—the “place” of education—contributes to the kind of educational experience we have, from grade school all the way to the doctoral degree. Institutional architecture sends a message, and affects messages sent; it expresses an idea about the function of the environment it helps create. In the documentary How Buildings Learn, Stewart Brand suggests that while buildings may indeed “learn,” people also learn from buildings; our practices and habits, even our feelings, are shaped by our environments—and thus so is the work we do within them.

Amid the current cuts and crises in higher education, it may sound trite to offer this kind of critique. But with graduate school attrition generally hovering around 50%, universities should be taking more seriously the research about what helps students adapt to university life and to academic culture. The effects of physical space are very real. I think it’s no coincidence that in our program, students often find it difficult to “meet” a supervisor. After all, there are few real in-person opportunities to do so, outside of planned events and the classroom—relatively formal occasions.

While we can’t necessarily change the buildings we’re in, we can be sensitive to their use, to our adaptation to the context provided. And we can ask ourselves questions. What would the building look like if we began by asking how people learn? How do people meet each other and form learning relationships? If you could design your own workspace, your own learning space, what would it look like and why? This need not involve a major reconstruction project. If the university had taken these things into account before renovating our program space, the same amount could have been spent and things might have looked, and felt, very different.

Writing it out

Link to the original post from October 4th, 2010: Writing it out.

At the risk of drifting into the Dull Squalid Waters of Graduate Student Angst, today I’m going to talk about writer’s block–possibly as a means of getting around it. Now that’s creative! 😉

In my case, getting stuck on process is something that often comes from insecurity, a fear of “acting” and “just getting things done”; so I’ve tried to work at my own writing strategies over the years. But this kind of detailed thinking-through and development of self-knowledge isn’t necessarily something we see being explored in graduate school (for various reasons–see my previous posts about related issues), possibly because writing help and development are often assumed to happen during the student’s coursework (unless there are no courses) or at the university writing centre. It may even be assumed that students should have learned how to write during their undergraduate studies, or that they “had to know how to write” to get into grad school. Yet I’ve had numerous professors tell me that writing skills are a major problem even at the graduate level (where a whole new level of writing is required).

I was recently helping a friend, who is an M.Ed student and a good writer, to prepare a grant application–and I noticed that his draft had been re-written by one of his profs (rather than merely edited). I could tell from the language she’d used, compared to previous drafts he’d written; and because the language had changed, so had the project–into something he hadn’t really “framed” himself.

As we went over this new, re-written draft, I helped him to replace language that seemed inappropriate by asking about the ideas behind, and impressions conveyed by, the words; we also “broke up” the seemingly polished structure of the writing by cutting, pasting, rearranging, and adding in points with no concern for cosmetic editing. We pulled out the issues that seemed to be central and made a list, starting over with a new structure and concentrating on telling a coherent “story” about the project.

It felt as if the real focus kept getting lost in all the ideas that were floating around–that was half the problem. But the real trouble for my friend was even more basic–he had been told to write something in a completely new genre, and offered almost no guidance. With many thousands of dollars’ worth of grant money at stake (the Ontario Graduate Scholarship is worth $15,000 for a year, and Tri-Council grants offer more), writing had suddenly taken on a new and immediate importance, and there was little appropriate help to be found from professors swamped by similarly panicked grad students (a good number of whom have never heard of a “research grant” before the first year of their PhD).

In the end it wasn’t due to my teaching skills that we ended up making progress (if we did)–far from it, I’d never done this kind of work in my life and I had to think: how does one write? How do I write? After all, I was pretty much the only model I had to go on. I had never really thought about that uncomfortable process outside of trying to enact it somehow, as contradictory as it sounds. My friends don’t usually discuss how they write, though they frequently bemoan the difficulty of it. I’d helped students with writing before, but there had never been time or space for such in-depth consideration. So the struggle for me was one of translation and negotiation, and fortunately what I did have was some experience with producing grant proposals.

This only made me think more about my own current editing tasks–my dissertation writing and the papers I’d like to see published, in particular. I was recently forced to consider how much my process must have changed over time, when I was revising a paper written during one of my MA courses. The paper lacked the structure I would have given it if I had written it more recently–indeed, I’m currently re-ordering the entire thing so that the reader isn’t expected to plough through the textual equivalent of an army obstacle course. My more recent writing is evidently better planned, as the other papers showed, but work from just 18 months ago still seems littered with tentative statements and unnecessary words, begging for a linguistic pruning.

And yet I can’t remember ever having been told anything about these things–ever really learning them–other than perhaps by osmosis. This gives me some faith in the concept of a kind of gradual improvement with time and practice; but I still think it’s the self-reflexive process of working with other people that brings real perspective and the motivation to actually consider one’s habits and tendencies in more depth, with an eye to doing better (writing) work, and to working better overall.