Friday, February 28, 2014

Oxford’s Free Course "Critical Reasoning for Beginners" Will Teach You to Think Like a Philosopher

by Ilia Blinderman, Open Culture:

When I was younger, I often found myself disagreeing with something I’d read or heard, but couldn’t explain exactly why.

Despite being unable to pinpoint the precise reasons, I had a strong sense that the rules of logic were being violated.

After I was exposed to critical thinking in high school and university, I learned to recognize problematic arguments, whether they be a straw man, an appeal to authority, or an ad hominem attack.

Faulty arguments are all-pervasive, and the mental biases that underlie them pop up in media coverage, college classes, and armchair theorizing.

Want to learn how to avoid them? Look no further than Critical Reasoning For Beginners, the top-rated iTunes U collection of lectures led by Oxford University’s Marianne Talbot.

Talbot builds the course from the ground up, and begins by explaining that arguments consist of a set of premises that, logically linked together, lead to a conclusion.

She proceeds to outline the way to lay out an argument logically and clearly, and eventually, the basic steps involved in assessing its strengths and weaknesses.

The six-part series, which was recorded in 2009, shows no sign of wear, and Talbot, unlike some philosophy professors, does a terrific job of making the content digestible.

If you’ve got some time on your hands, the lectures, which average just over an hour in length, can be finished in less than a week. That’s peanuts, if you consider that all our knowledge is built on the foundations that this course establishes.

If you haven’t had the chance to be exposed to a class on critical thought, I can’t recommend Critical Reasoning For Beginners with enough enthusiasm: there are few mental skills that are as underappreciated, and as central to our daily lives, as critical thinking.

Critical Reasoning For Beginners is currently available on the University of Oxford website in both audio and video formats, and also on iTunes U.

You can find it listed in our collection of Free Online Philosophy Courses, alongside classes like General Philosophy, Death, and The Art of Living, all part of our collection of 825 Free Online Courses.

Ilia Blinderman is a Montreal-based culture and science writer. Follow him at @iliablinderman, or read more of his writing at the Huffington Post.


Thursday, February 27, 2014

“… we cannot study everyone everywhere doing everything” (Punch, 2005:187)

How Can You Research It All?
by Beth Singler, socphd:

Beth Singler is a PhD student at the University of Cambridge specializing in the social anthropological study of New Religious Movements online. 

 Combining traditional fieldwork with digital ethnography, Beth explores the new definitions of self that multiply on the Internet. 

Her PhD is on the Indigo Children, an idea in the New Age Movement, but she has also written about Wiccans, Jedi, Scientologists, pop-culture religions and various online subcultures. 

She has her own blog, and you can follow her on Twitter via @BVLSingler.

“… we cannot study everyone everywhere doing everything” (Punch, 2005:187). I think I remember breathing out an actual sigh of relief when I first read these words in Punch’s Introduction to Social Research (2nd Ed.).

Finally, there in black and white on the page, the permission not to do EVERYTHING, be EVERYWHERE, or to capture it ALL! With my PhD this has been a lesson I have had to learn, and learn quickly.

My thesis looks at an idea from what academics still broadly call the New Age Movement (insiders use the term less and less these days, but there you go).

According to New Age narratives, the Indigo Children are a generation of special, spiritually evolved individuals here to change the world.

Even though I study New Religious Movements, the Indigo Children do not form a church; they don’t have recognisable and repeated rituals; they don’t wear particular clerical outfits; and they aren’t organised into associations with established hierarchies or logos.

What they do do is call themselves Indigo Children (or Crystal, or Rainbow, or Blue Ray - there are many versions but I’ll stick to Indigo for now for clarity) and talk about being Indigo online … a lot.

A Google search done just five minutes ago reveals 809,000 results for the words “Indigo Children”. The first year of my PhD was about just getting to grips with the multitude of sources of information on this subject.

There are web pages by groups and individuals, there are forum boards, there are blogs, there are Facebook groups and pages, there are Twitter hashtags, there are Instagram pictures, YouTube videos, online archives from magazines and newspapers, online tests to see if YOU are Indigo, Meetup groups, tumblrs, memes, petitions, questions, answers, seekers and experts …

My first year was also spent writing a very, very speculative document called a ‘registration exercise’: a sample chapter, an outline, a bibliography, but most importantly, a methodology.

This is to show to internal examiners that I know what I am doing and that I have a plan for the next two years of my research and a methodology that really stands up to scrutiny. Almost a year of fieldwork later and I think I could throw most of that methodology out of the window.

For a start, I would now say that I was back then trying to work from within a positivistic, scientific framework that I adopted out of an unconscious desire for legitimacy.

‘Let’s get some numbers, some facts, some real HARD data’ says the internal wannabe scientist, while the social anthropologist mumbles about acculturation and socialization through participant observation.

So I ended up with a methodology where I said I would look at X forum every day and take Y number of screengrabs and repeat until I had REAL data. Well, the multiplication of X by Y gave far too much data … and that was just one source.

All in all there was just too much. So I rethought my approach. Would I capture everything? Probably not … no, definitely not. It was just not possible. But I could approach the subject much like the individual seeker does.

In my interviews with Indigos I asked them about how they had come upon the idea of the Indigo Children and where they had looked for more information.

They described stumbling upon it, or being told about it by someone who thought they might be one. And then they wandered through the wilds of the internet reading some sources, missing others, meeting some Indigos and chatting to them, missing others. They had a seeker’s methodology that didn’t necessarily tell them everything but told them enough. So I picked up this approach and followed what was interesting rather than what was comprehensive.

My supervisor talks about fieldwork as a form of apprenticeship and had I listened more closely I might have got to the same conclusion earlier … through my fieldwork I feel like I have been through an apprenticeship in being Indigo (am I one? I remain neutral but open-minded).

But more than that, I have been through an apprenticeship in doing academic research, which is really the aim of the PhD after all. And I really feel that in doing this apprenticeship I am closer to stopping apologising for not being a ‘real’ scientist.

Wednesday, February 26, 2014

The Toddler’s Guide to Doing a PhD

This post was written by Jonathan Downie, a PhD student, conference interpreter, public speaker and translator based in Edinburgh, Scotland. 

He co-edits LifeinLINCS, the unofficial blog of the Department of Languages and Intercultural Studies at Heriot-Watt University. 

He is married with two children. His newest blog Rock Your Talk aims to help people keep on improving in their public speaking.

The last time I posted, I mentioned in passing that I am the proud dad of a toddler (and, by the time this goes out, a new baby too!).

As any parent will tell you, you learn as much from your children as they learn from you.

It just so happens that in the past few months my son has taught me a lot about doing a PhD. He is exploring the world and learning to walk. I just wish I had learned it all sooner! Here is my shortlist of essential PhD skills you can learn from toddlers.

Learn from everyone and everything

When was the last time you paused on your way somewhere to stroke a wall, explore the feeling of a hedge or touch a tree? For my son, the answer is: almost every time he leaves the house.

At the moment, no walk to the supermarket is complete without a stop somewhere en route to look at or touch something interesting.

It was a while into my PhD before I realised that this kind of wide-ranging curiosity is a good practice for researchers too.

Kristin Luker, in her book, “Salsa Dancing into the Social Sciences”, suggests that, when you are doing your literature review, it helps to allow yourself to wander a bit. She calls this “following your id”.

Her point is that if we are going to do cutting edge, boundary-pushing, interdisciplinary research, we can’t be too restricted on where and what we learn.

The more we restrict ourselves to one sub-field, one set of journals or one approach, the less scope we give ourselves for accidentally brilliant discoveries.

Get used to falling

I have yet to meet a parent who could honestly say that their child learned to walk without occasional (or not so occasional) falls. Falls also come with bruises, shock and, in the case of my son recently, a cut tongue.

The strange thing is that, the more a toddler falls, the more they learn to fall properly. They go from falling any old way to purposefully making sure they fall on the most padded part of their anatomy: the bottom.

Eventually, after enough impacts, they learn to walk.

The link to doing a PhD is obvious. On your way to becoming a fully-fledged, hooded academic, you will occasionally (or more than occasionally) make mistakes, have lousy ideas or just plain mess up. Good for you!

Messing up or falling intelligently is an incredibly effective way of learning. Sometimes I wonder if it’s the only way. So how do you fall intelligently?

In my experience, the first and hardest step is to never take falls personally. One lousy idea doesn’t make you a bad researcher and one failed experiment doesn’t make your PhD a write-off. The more you see the odd fall as a natural part of learning, the easier they are to accept.

Once you accept falls, you can look back and examine what you can learn from them. Looking for recurring patterns in your failures doesn’t just give you useful info for your PhD; it might even be the basis for a paper.

Cry for help when you need it

If there is one sound parents get used to hearing, it is crying. My son is actually a very contented little boy but even he occasionally gets frustrated with a toy he can’t get to, a brick that won’t balance or some other little issue. His immediate response is to call over his mummy or daddy for help, often with a kind of whiny cry.

The problem is, when we grow up we mistakenly become more reluctant to ask for help. We get proud and start to try to become “self-made people”. Worse still, we can feel like asking for help is a sign of weakness.

If you are doing a PhD, that kind of attitude can be crippling. Read any PhD forum and you will find countless stories of students who have spent months trying to fix a research design, understand a theory or apply a method but are no further forward than when they started.

This should never happen. Almost all of my “light bulb” moments, when I have really seen sudden, massive progress in my PhD, have come when I have asked someone for help. Sure, the answers haven’t always been comfortable, but they have always meant some kind of progress.

Celebrate your successes

As I was writing this, I asked my son “where are the lights?” He instantly pointed right above his head to the ceiling lights. Of course, his mummy and I congratulated him for getting it right and then, to everyone’s delight, he gave himself a big clap.

When was the last time you let yourself celebrate what you have achieved? Sometimes even grown-ups need to give themselves a clap.

Enjoy the journey

By far the greatest thing my son has taught me is that you can find something to laugh at and someone to smile at every day. Even on my worst PhD day, when the data looks nasty and the theories make no sense, I try to mirror his attitude.

After all, why spend several years of your life looking like a depressed bloodhound? Someday the whole PhD process will be over. Until then, I want to make sure I enjoy the ride.

Thanks Jonathan – I hope fatherhood the second time around is treating you well! What have you learned from somewhere else – or someone else – which has helped you with your PhD journey?

Tuesday, February 25, 2014

BOOK REVIEW: Methodological Thinking - Basic Principles of Social Research Design


Great quotes from the book …

A book on research methods should begin and end with the importance of critical thinking: everything between the beginning and the end should be about critical thinking.

The elements of social research methods are no more and no less important than the consequences of thinking about how to gather information about human social life in ways that will lead to the highest quality information possible.  
Critical thinking is thinking about thinking: It is analyzing and evaluating what you think and why you think it. pg 7

All data can be categorized in terms of content, origin and form.  These dimensions are independent. pg 17

While social research requires data, which are traces of the physical world, everything of interest to researchers - social class, ethnicity, inequality, narrative, identity, deviance, crime, migrations and so on - are concepts, which are abstractions that do not have a physical existence.

All social research from all perspectives is about the relationships between the physical world that can be captured through the senses (data) and the abstract world of meaning (concepts). pg 65

What we like about it most - why it is useful

Have you ever read really dense material and said to yourself, "Wow, this person really knows their stuff"? That is the way I react to Donileen Loseke’s book Methodological Thinking: Basic Principles of Social Research Design, SAGE Publications (2013).

This book is not a first read, or even a second read, along the methodological pathway towards building the design for your doctoral dissertation or thesis. But if you are studying issues in a social science, you should have it on your bookshelf sooner rather than later, because it will help you sort out the subtleties that make the difference between good and great design.

Discussed here are issues of:
  1. Positivist, interpretive, and critical perspectives, and their assumptions about social life, social research and researchers (pgs 21-26, Chapter 2).
  2. Critical analysis of research questions from several angles (pgs 38-47, Chapter 3).
  3. Conceptualization of measures and how to operationalize them (pgs 66-75, Chapter 5).
  4. The problems of meaning, multidimensionality, interconnectivity and measurement imprecision (pgs 77-78, Chapter 5).
  5. An exhaustive discussion of the variety of data generation techniques (pgs 82-97, Chapter 6).
  6. A thorough summary of the issues of concern in both writing and evaluating social research design (pgs 114-125, Chapter 8).

From the back of the Book 

Methodological Thinking: Basic Principles of Social Research Design focuses on the underlying logic of social research and encourages students to understand research methods as a way of thinking. The book provides an overview of the basic principles of social research, including the foundations of research (data, concepts, theory), the characteristics of research questions, the importance of literature reviews, measurement (conceptualization and operationalization), data generation techniques (experiments, surveys, interviews, observation, document analysis) and sampling. The text is organized to help students become good consumers and producers of research by developing skills to design small-scale research projects and evaluate research done by others. The author highlights the relationship among various components of research; she also explains that it is not possible to argue that one form of research is better than any other, and that good researchers understand the differences among - and appreciate the capabilities of - different tools.

Key Features
  • Takes an interdisciplinary approach, with examples in criminology/criminal justice, sociology, political science/international relations, and social work.
  • Offers a balanced account of theoretical perspectives, providing students with an unbiased presentation.
  • Minimizes technical details of social research design to emphasize logic and the general principles.
Explore the open-access Student Study Site, which features the full versions of the journal articles that are referenced throughout the book.

About the author

Donileen Loseke - University of South Florida

Donileen R. Loseke received her bachelor’s in psychology and master’s in behavioral science from California State University Dominguez Hills, and a Ph.D. in sociology from the University of California, Santa Barbara. She currently is a professor of Sociology at the University of South Florida.

Her books include The Battered Woman and Shelters (1992, State University of New York Press), which won the 1994 Charles Horton Cooley Award from the Society for the Study of Symbolic Interaction; Thinking About Social Problems: An Introduction to Constructionist Perspectives, 2nd edition (2003, Aldine de Gruyter); and Current Controversies on Family Violence, 2nd edition, edited with Richard Gelles and Mary Cavanaugh (2005, SAGE).

Numerous journal articles and book chapters report the findings of her empirical research projects that have been on a variety of topics (including evaluation research, social problems, criminal justice, social service provision, occupations, emotion, identity, and narrative), and have used a variety of data collection techniques (including field experiment, written survey, in-depth interview, ethnography, and document analysis).

She has been the editor of the Journal of Contemporary Ethnography and an Advisory Editor for Social Problems. Currently she is an editorial board member of Social Psychology Quarterly, an Advisory Editor for The Sociological Quarterly, and an Associate Editor of Symbolic Interaction and the Journal of Contemporary Ethnography.

Monday, February 24, 2014

It's Time to Expel Religious Extremism From Schools

by Cathy Byrne

Some Victorian principals have taken the decision to axe religious instruction (RI) from their schools.

Many believe this move is long-overdue and should be replicated nationwide.

Over the past few years, media reports of extremist teaching or proselytising include: a NSW RI instructor claiming to “cure” homosexuals; children in Queensland RI being taught that humans and dinosaurs lived together; and Victorian RI aimed at “making disciples” because “without Jesus, our students are lost".

My research has highlighted the divisive implications of RI curriculums that are racist, sexist, anti-science, age-inappropriate or somehow objectionable - even to church-going Christians.

Little wonder that some educators are finally coming to terms with their obligation to act - in the interests of Australia’s children; in the interests of education.

Last month, ex-British prime minister Tony Blair noted that religious extremism is “not innate. It is taught … sometimes in the formal education system”. If that is true, then skills to counter religious extremism can also be taught.

Religious extremists reject the idea of human equity. They prefer their religious worldview to democratic institutions, values and processes, and think one religion, theirs, is the best and only framework for society.

Many RI programs in Australia are evangelical and biblically literal. These programs position a narrow, extremist view of Christianity as the superior way to live and believe.

Marion Maddox’s new book, Taking God to School, highlights the potential for RI programs to become part of a wider Pentecostal quest “to create a totalitarian fundamentalist Christian society in Australia” where schools are “training ground(s) for the army of Jesus”.

Most Australians assume we have a secular education system; one where religious extremism does not affect our children. This is naïve.

Extremism can emerge from religious radicalisation or scriptural literalism in many contexts. It is not limited to the madrassas of Afghanistan or Indonesia, but is found in schools in suburban Sydney, Melbourne and Brisbane.

My book, Religion in Secular Education, documents how Australia has a policy blind spot regarding RI in state schools. No state education agency effectively oversees what is taught, or by whom. Teachers are not required to be present in RI classes in most schools in Australia.

Instead, RI volunteers are vetted by their own religious organisations. They usually have no formal teacher training. This policy mechanism creates an accountability loophole that enables extremists to target young children.

Adding a volunteer-led ‘ethics’ option, where the providers promise to “never advocate for the removal of RI”, legitimises the presence and power of the extremists.

Many parents are dissatisfied with current RI policies and also with the lack of response to their concerns from Education Departments.

Government agencies do not deal with complaints about inappropriate teaching, lack of alternatives or discrimination against those who opt out. In NSW policy, complaints are directed back to the RI provider.

Media reports from Queensland, New South Wales and Victoria show how state education agencies are not equipped to deal with the policy challenge. Alarm bells should be ringing all over the country.

But assumptions that children learn harmless stories about “Jesus”, “forgiveness”, and the Good Samaritan appear to assuage any concerns.

Politicians, teacher union representatives and parents appear to have been lulled into a 1950s response: “it can’t do any harm”. Meanwhile, extremist religious teaching and preaching, in segregated settings, divides multifaith and no-faith communities.

We should not underestimate the damage that can come from religious division and indoctrination. In Australia, recent national curriculum debates, court cases (federal and state) and government programs that finance Christian evangelism (chaplaincy and state-funded Christian RI for example) do not augur well.

Comments by Kevin Donnelly, an appointed reviewer of the national curriculum, suggest that Christianity’s privileges in education should continue.

Donnelly’s Education Standards Institute does not want “Christianity … treated as one religion among many, alongside Buddhism, Confucianism and Islam".

Defending privilege paves the way for the extremists. Perhaps Australia just isn’t ready to recognise its own, home-grown religious extremism. Or are the educators waking?

Embedded in Australian government and media reports about security and countering religious radicalism is the restrictive idea that extremism emerges in “Muslim communities” with disaffected youth. The implication is that religious extremism is only associated with people of “Middle Eastern appearance”.

Soporific denial is easy. Deep self-examination is more demanding. Whether we recognise it or not, whether we develop policies to address it or not, Christian religious extremism can be a security risk, a risk to the nature of our pluralist democracy and our hard-won liberal freedoms.

Aggressive, highly funded and secretive, the incursion of extreme religious evangelism in Australian schools - public, private and “Christian independent” - should give us pause for thought.

For example, the Victorian Department of Education recently found the children’s evangelical organisation, OAC Ministries, operating “outside departmental policy”. It was not authorised to be in the schools.

OAC is an international organisation dedicated to “proclaiming the Gospel of Jesus”, especially to “those outside the church”.

In 2013 I was approached by parents who were disturbed OAC ministries had removed children from school grounds for religious programs, claiming parental consent under a “blanket excursion permission form”.

Some parents, and even principals, were unaware of the nature of these excursions and would not have provided informed consent. It was a serious breach of child security.

Australian society, and the wider world, is no longer focused on a singular, Christian world view. It’s time to expel unprofessional, segregated and unaccountable RI in state schools. The RI time-slot could be better spent.

To adequately equip our children, we ought to provide them with a comprehensive understanding of different religions and non-religious world views and ethical systems.

We ought to teach them how to navigate the real world - which is diverse, religious and non-religious - and how to identify and be careful of extremist views of any kind. Religious extremism can be a dangerous thing, no matter which way it is pointing.

Cathy Byrne is an advisor to ACARA on religion and on the curriculum capabilities of intercultural understanding and ethical behaviours.

This article was originally published on The Conversation. Read the original article.

BOOK REVIEW: "The Question of Conscience: Higher Education and Personal Responsibility" by David Watson

by Ignas Kalpokas, Impact of Social Sciences:

Does a university education hold any value? 

How do universities determine what skills are relevant in today’s ever-changing world when information could become outdated even before students graduate? 

These are some of the questions and problems that David Watson sets out to explore in this book.

Ignas Kalpokas finds this a timely work that clearly dissects the current condition of HE and provides a rewarding read for those actively involved in the sector.

This piece originally appeared on LSE Review of Books and is reposted with permission.

Few people would disagree that the Higher Education (HE) sector is at a crossroads. Among the many challenges facing it are those posed by technology, the labour market, funding models, and sometimes even the lack of a clearly defined purpose.

David Watson’s The Question of Conscience is an ambitious albeit very concise study of the challenges and the possible ways to address them.

It is also a study of the university from within, written by a person who has spent many years in the trade of running a university - something that primarily historical, sociological or other accounts of HE cannot offer.

And yet, despite the author being an insider, this is also a very self-conscious and often even self-critical account of the university.

Thus, besides being a treasure trove of information about how the HE sector works and what its moral, social, and political underpinnings are, the book can also be read from a methodological perspective: as an example of thinking which is both inside and outside, both very intimate and simultaneously conveyed as if through an outsider’s gaze.

The book runs to seven chapters, and opens with a historical overview of the university’s development as an institution but also as a phenomenon: the university was never just an educational institution - it has always had an added value and aspiration.

How these additional connotations changed throughout the years is an interesting topic in itself but it also has an additional purpose: it shows the adaptability of the university.

Whenever this institution has been at a crossroads, it has found a way to keep its presence and relevance. This should, the argument goes, serve as an inspiration for the modern HE sector: whatever the challenges, there is always a way of overcoming them.

Even more significantly, the book also deals with the various theories of HE, each of which prescribes its own purposes to the university, and analyses them against the current trends in the HE sector. The picture that emerges is a very paradoxical one indeed.

The university has long been seen as an institution which moulds good individuals. The model of a ‘good’ individual used to be known in advance and usually defined in religious (although sometimes also in civic) terms and universities were to uphold such uniformity.

However, this practice is difficult to imagine in an increasingly multicultural and multi-religious HE, which appears to be devoid of ultimate ideals of the good.

Also, what about preparation for public life? This could be one of the functions of the university but it also opens a Pandora’s Box of problems, especially those regarding the relation with power, both political and economic.

Indeed, the questions of who sets the norms, and of how autonomous the university can be, become topical once HE immerses itself in deliberate moulding of any kind.

Alternatively, the university could be an institution which fosters certain personal qualities and attributes that are necessary for future life and career.

Studying at a university is then seen as a rite of passage, a final stepping stone on the path to adulthood, offering a mixture of essential knowledge, independent life, and socialisation. And yet, ample empirical evidence calls such views into question.

In a similar vein, university has been, and still is, seen as a means to acquire skills and knowledge thus serving as a gateway to a chosen profession.

This view could have been easily applicable when HE institutions could initiate young people into a stable canon of knowledge and values of a certain professional practice.

Conversely, the modern university struggles with the questions of how to determine what skills are relevant in today’s ever-changing world, how to keep up when information could become outdated even before the students graduate, and how to balance market demands, student demands, and the demands of professional bodies or employers.

Questions about the value of university education also cannot be avoided. Therefore, the author sets out to enquire whether a degree is really an advantage, and how such an advantage could be quantified or established with certainty.

A closely related dilemma concerns the university’s role in social mobility in the age of mass higher education: is the university at the forefront of levelling opportunities, allowing people to strive for more than would otherwise be possible, or is it really entrenching the present divisions and inhibiting mobility, given that graduates of some universities are seen as more equal than others?

As always, the picture is equivocal.

The above are just a few of the problems and dilemmas that are skilfully dealt with in the book. And yet, The Question of Conscience is not only a descriptive but also a normative endeavour; it is not only an analysis but also a manifesto. This becomes evident in the last two chapters of the book.

The penultimate chapter deals with the ‘terms and conditions’ of membership in modern HE, aiming to establish the rules of conduct for all who partake in universities, from students to staff, and addressing many of the uncertainties facing the sector.

The last chapter, meanwhile, aims to construct the author’s own theory of HE as shared responsibility, but does so by way of a conceptual pick-and-mix, combining bits of Arendt with bits of Rawls and many things in between.

It is this kaleidoscope of ideas and approaches that leaves the reader dazzled and slightly unconvinced by the argument as a whole, rendering the analytical part of the book considerably more compelling than the normative one.

All in all, The Question of Conscience is a timely work that clearly dissects the current condition of HE and provides a rewarding read both for those actively involved in the sector and for those with only a general interest (although novices may not always find the style especially accessible). As such, it is a highly recommended work. 

Ignas Kalpokas is a PhD student in Politics at the University of Nottingham, working on a dissertation on Baruch Spinoza, Jacques Lacan, and Carl Schmitt. 

He holds a Master’s degree in Social and Political Critical Theory and a Bachelor’s degree in Politics from Vytautas Magnus University (Lithuania). He has also worked on various educational projects and initiatives.

Ignas’ research interests lie in the interrelated concepts of sovereignty, the state, and the political, as well as in the formation and maintenance of (national) identities. His research also draws on history, literature, and international relations theory. 

His preferred theoretical framework is mostly Continental philosophy. Read more reviews by Ignas.

Creating a Writing Space – #whereiwrite

by , Progressive Geographies:

There is an interesting piece at chroniclevitae about creating a space for writing, and continued at #whereiwrite.

I have a great home study, which is where I do most of my writing - this is a picture of the last but one home study, and with the old PC, at the moment I completed The Birth of Territory in 2012.

Completing the Birth of Territory

In recent years I’ve done more and more while away, on a laptop in libraries, shared offices, open offices, or flats and hotel rooms.

The biggest problem I have when doing this is the frustration of knowing I could resolve a reference query with a book that is on the other side of the room on the other side of the world … and I do miss having two monitor screens: I make more notes on paper when using a laptop. This is today, as I’m working on Foucault’s Last Decade:


I’m getting better at working this way. I still need the space to be fairly tidy, though - or at least, as the first picture shows, for everything in it to be needed at that very moment. I can’t imagine doing anything creative in a place like this.

Alternatively, you could take your writing on the train, with Amtrak’s writers in residence scheme. Though recently I’ve been doing a lot of writing in this marvellous room in the State Library of Victoria in Melbourne:


Sunday, February 23, 2014

How NOT to Write a PhD Thesis

Gilles Deleuze
Gilles Deleuze (Photo credit: Wikipedia)
by Tara Brabazon, Times Higher Education:

My teaching break between Christmas and the university’s snowy reopening in January followed in the footsteps of Goldilocks and the three bears.

I examined three PhDs: one was too big; one was too small; one was just right.

Put another way, one was as close to a fail as I have ever examined; one passed but required rewriting to strengthen the argument; and the last reminded me why it is such a pleasure to be an academic.

Concurrently, I have been shepherding three of my PhD students through the final two months to submission.

These concluding weeks are an emotional cocktail of exhaustion, frustration, fright and exhilaration. Supervisors correct errors we thought had been removed a year ago. The paragraph that seemed good enough in the first draft now seems to drag down a chapter.

My postgraduates cannot understand why I am so picky. They want to submit and move on with the rest of their lives.

There is a reason why supervisors are pedantic. If we are not, the postgraduates will live with the consequences of “major corrections” for months. The alternatives are worse: being awarded the consolation prize of an MPhil, or managing the regret of three wasted years if a doctorate fails.

Every correction, each typographical error, all inaccuracies, ambiguities or erroneous references that we find and remove in these crucial final weeks may swing an examiner from major to minor corrections, or from a full re-examination to a rethink of one chapter.

Being a PhD supervisor is stressful. It is a privilege but it is frightening. We know - and individual postgraduates do not - that strange comments are offered in response to even the best theses.

Yes, an examiner graded a magnificent doctorate from one of my postgraduates as “minor corrections” for one typographical error in footnote 104 in the fifth chapter of an otherwise cleanly drafted 100,000 words. It was submitted ten years ago and I still remember it with regret.

Another examiner enjoyed a thesis on “cult” but wondered why there were no references to Madonna, grading it as requiring major corrections so that Madonna references could be inserted throughout the script.

Examiners have entered turf wars about the disciplinary parameters separating history and cultural studies. Often they look for their favourite theorists - generally Pierre Bourdieu or Gilles Deleuze these days - and are saddened to find citations to Michel Foucault and Félix Guattari.

Then there are the “let’s talk about something important - let’s talk about me” examiners. Their first task is to look for themselves in the bibliography, and they are not too interested in the research if there is no reference to their early sorties with Louis Althusser in Economy and Society from the 1970s.

I understand the angst, worry and stress of supervisors, but I have experienced the other side of the doctoral divide.

Examining PhDs is both a pleasure and a curse.

It is a joy to nurture, support and help the academy’s next generation, but it is a dreadful moment when an examiner realises that a script is so below international standards of scholarship that there are three options: straight fail, award an MPhil or hope that the student shows enough spark in the viva voce so that it may be possible to skid through to major corrections and a full re-examination in 18 months.

When confronted by these choices, I am filled with sadness for students and supervisors, but this is matched by anger and even embarrassment. What were the supervisors thinking? Who or what convinced the student that this script was acceptable?

Therefore, to offer insights to postgraduates who may be in the final stages of submission, cursing their supervisors who want another draft and further references, here are my ten tips for failing a PhD. If you want failure, this is your road map to getting there.

1. Submit an incomplete, poorly formatted bibliography

Doctoral students need to be told that most examiners start marking from the back of the script. Just as cooks are judged by their ingredients and implements, we judge doctoral students by the calibre of their sources.

The moment examiners see incomplete references or find that key theorists in the topic are absent, they worry. This concern intensifies when they find in-text citations with no match in the bibliography.

If examiners find ten errors, then students are required to perform minor corrections. If there are 20 anomalies, the doctorate will need major corrections. Any referencing issues over that number and examiners question the students’ academic abilities.

If the most basic academic protocols are not in place, the credibility of a script wavers. A bibliography is not just a bibliography: it is a canary in the doctoral mine.

2. Use phrases such as “some academics” or “all the literature” without mitigating statements or references

Generalisations infuriate me in first-year papers, but they are understandable. A 19-year-old student who states that “all women think that Katie Price is a great role model” is making a ridiculous point, but when the primary reading fodder is Heat magazine, the link between Jordan’s plastic surgery and empowered women seems causal. In a PhD, generalisations send me off for a long walk to Beachy Head.

The best doctorates are small. They are tightly constituted and justify students’ choice of one community of scholars over others while demonstrating that they have read enough to make the decision on academic rather than time-management grounds.

Invariably there is a link between a thin bibliography and a high number of generalisations. If a student has not read widely, then the scholars they have referenced become far more important and representative than they actually are.

I make my postgraduates pay for such statements. If they offer a generalisation such as “scholars of the online environment argue that democracy follows participation”, I demand that they find at least 30 separate references to verify their claim. They soon stop making generalisations.

Among my doctoral students, these demands have been nicknamed “Kent footnotes” after one of my great (post-) postgraduates, Mike Kent (now Dr Kent). He relished compiling these enormous footnotes, confirming the evidential base for his arguments.

As he would be the first to admit, it was slightly obsessive behaviour, but it certainly confirmed the scale of his reading. In my current supervisory processes, students are punished for generalisations by being forced to assemble a “Kent footnote”.

3. Write an abstract without a sentence starting “my original contribution to knowledge is …”

The way to relax an examiner is to feature a sentence in the first paragraph of a PhD abstract that begins: “My original contribution to knowledge is …”.

If students cannot compress their argument and research findings into a single statement, then it can signify flabbiness in their method, theory or structure.

It is an awful moment for examiners when they - desperately - try to find an original contribution to knowledge through a shapeless methods chapter or loose literature review. If examiners cannot pinpoint the original contribution, they have no choice but to award the script an MPhil.

The key is to make it easy for examiners. In the second sentence of the abstract, ensure that an original contribution is nailed to the page. Then we can relax and look for the scaffolding and verification of this statement.

I once supervised a student investigating a very small area of “queer” theory. It is a specialist field, well worked over by outstanding researchers. I remained concerned throughout the candidature that there was too much restatement of other academics’ work. The scholarship is of high quality and does not leave much space for new interpretations.

Finally, we located a clear section in one chapter that was original. He signalled it in the abstract. He highlighted it in the introduction. He stressed the importance of this insight in the chapter itself and restated it in the conclusion.

Needless to say, every examiner noted the original contribution to knowledge that had been highlighted for them, based on a careful and methodical understanding of the field. He passed without corrections.

4. Fill the bibliography with references to blogs, online journalism and textbooks

This is a new problem I have seen in doctorates over the past six months. Throughout the noughties, online sources were used in PhDs. However, the first cohort of PhD candidates to have studied in the web 2.0 environment is submitting doctorates this year.

The impact on the theses I have examined recently is clear to see. Students do not differentiate between refereed and non-refereed or primary and secondary sources. The Google Effect - the creation of a culture of equivalence between blogs and academic articles - is in full force.

When questioned in an oral examination, the candidates do not display that they have the capacity to differentiate between the calibre and quality of references.

This bibliographical flattening and reduction in quality sources unexpectedly affects candidates’ writing styles. I am not drawing a causal link here: major research would need to be undertaken to probe this relationship. But because the students are not reading difficult scholarship, they are unaware of the specificities of academic writing.

The doctorates are pitched too low, filled with informalities, conversational language, generalisations, opinion and unreflexive leaps between their personal “journeys” (yes, it is like an episode of The X Factor) and research protocols.

I asked one of these postgraduates in their oral examination to offer a defence of their informal writing style, hoping that the student would pull out a passable justification through the “Aca-Fan”, disintermediation, participatory culture or organic intellectual arguments. Instead, the student replied: “I am proud of how the thesis is written. It is important to write how we speak.”

Actually, no. A PhD must be written to ensure that it can be examined within the regulations of a specific university and in keeping with international standards of doctoral education. A doctorate may be described in many ways, but it has no connection with everyday modes of communication.

5. Use discourse, ideology, signifier, signified, interpellation, postmodernism, structuralism, post-structuralism or deconstruction without reading the complete works of Foucault, Althusser, Saussure, Baudrillard or Derrida

How to upset an examiner in under 60 seconds: throw basic semiotic phrases into a sentence as if they are punctuation. Often this problem emerges in theses where “semiotics” is cited as a/the method.

When a student uses words such as “discourse” and “ideology” as if they were neutral nouns, it is often a signal for the start of a pantomime of naivety throughout the script.

Instead of an “analysis”, postgraduates describe their work as “deconstruction”. It is not deconstruction. They describe their approach as “structuralist”. It is not structuralist. Simply because they study structures does not mean it is structuralist. Conversely, simply because they do not study structures does not mean it is poststructuralist.

The number of students who fling names around as if they are fashion labels (“Dior”, “Derrida”, “Givenchy”, “Gramsci”) is becoming a problem. I also feel sorry for the students who are attempting a deep engagement with these theorists.

I am working with a postgraduate at the moment who has spent three months mapping Michel Foucault’s Archaeology of Knowledge over media-policy theories of self-regulation. It has been frustrating and tough, creating - at this stage - only six pages of work from her efforts. Every week, I see the perspiration on the page and the strain in the footnotes.

If a student is not prepared to undertake this scale of effort, they must edit the thesis and remove all these words. They leave themselves vulnerable to an examiner who knows their ideological state apparatuses from their repressive state apparatuses.

6. Assume something you are doing is new because you have not read enough to know that an academic wrote a book on it 20 years ago

Again, this is a new problem I have seen in the past couple of years. Lazy students, who may be more kindly described as “inexperienced researchers”, state that they have invented the wheel because they have not looked under their car to see the objects rolling beneath it.

After minimal reading, it is easy to find original contributions to knowledge in every idea that emerges from the jarring effect of a bitter espresso.

More frequently, my problem as a supervisor has been the incredibly hardworking students who read so much that they cannot control all the scholarly balls they have thrown into the air.

I supervise an inspirational scholar who is trying to map Zygmunt Bauman’s “liquid” research over neoconservative theory.

This is difficult research, particularly since she is also trying to punctuate this study with Stan Aronowitz’s investigations of post-work and Henry Giroux’s research into working-class education.

For such students, supervisors have to prune the students’ arguments to ensure that all the branches are necessary and rooted in their original contributions to knowledge.

The over-readers present their own challenges. For our under-readers, the world is filled with their own brilliance because they do not realise that every single sentence they write has been explored, extended, tested and applied by other scholars in the past.

Intriguingly, these are always the confident students, arriving at the viva voce brimming with pride in their achievements. They are the hardest ones to assess (and help) through an oral exam because they do not know enough to know how little they know.

Helpful handball questions about the most significant theorists in their research area are pointless, because they have invented all the material in this field.

The only way to create an often-debilitating moment of self-awareness is by directly questioning the script: “On p57, you state that the academic literature has not addressed this argument. Yet in 1974, Philippa Philistine published a book and a series of articles on that topic. Why did you decide not to cite that material?”

Invariably, the answer to this question - often after much stuttering and stammering - is that the candidate had not read the analysis. I leave the question hanging at that point.

We could get into why they have not read it, or the consequences of leaving out key theorists. But one moment of glimpsing into the abyss of failure is enough to summon doubt that their “originality” is original.

7. Leave spelling mistakes in the script

Spelling errors among my own PhD students leave me seething. I correct spelling errors. They appear in the next draft. I correct spelling errors. They appear in the next draft.

The night before they bind their theses, I stare at the ceiling, summoning the doctoral gods and praying that they have removed the spelling errors.

Most examiners will accept a few spelling or typographical mistakes, but in a word-processing age, this tolerance is receding.

I know plenty of examiners who gain great pleasure in constructing a table and listing all the typographical and spelling errors in a script. Occasionally I do it and then I know I need to get out more.

Spelling mistakes horrify students. They render supervisors in need of oxygen. Postgraduates may not fail doctorates because of them, but such errors end any chance of passing quickly and without corrections.

These simple mistakes also create doubt in the examiner’s mind. If superficial errors exist, it may be necessary to drill more deeply into the interpretation, methods or structure chosen to present the findings.

8. Make the topic of the thesis too large

The best PhDs are small. They investigate a circumscribed area, rather than over-egging the originality or expertise.

The most satisfying theses - and they are rare - emerge when students find small gaps in saturated research areas and offer innovative interpretations or new applications of old ideas.

The nightmare PhD for examiners is the candidate who tries to compress a life’s work into 100,000 words.

They take on the history of Marxism, or more commonly these days, feminism. They attempt to distil 100 years of history, theory, dissent and debate into a literature review and end up applying these complex ideas to Beyoncé’s video for Single Ladies.

The best theses not only state their original contribution to knowledge but also confirm in the introduction what they do not address. I know that many supervisors disagree with me on this point.

Nevertheless, the best way to protect candidates and ensure that examiners understand the boundaries and limits of the research is to state what is not being discussed. Students may be asked why they made those determinations, and there must be scholarly and strategic answers to such questions.

The easiest way to trim and hem the ragged edges of a doctorate is historically or geographically. The student can base the work on Belgium, Brazil or the Bahamas, or a particular decade, governmental term or after a significant event such as 11 September 2001.

Another way to contain a project is theoretically, to state there is a focus on Henry Giroux’s model of popular culture and education rather than Henry Jenkins’ configurations of new media and literacy.

Such a decision can be justified through the availability of sources, or the desire to monitor one scholar’s pathway through analogue and digital media.

Examiners will feel more comfortable if they know that students have made considered choices about their area of research and understand the limits of their findings.

9. Write a short, rushed, basic exegesis

An unfair - but occasionally accurate - cliché of practice-led doctorates is that students take three and a half years to make a film, installation or soundscape and spend three and a half weeks writing the exegesis.

Doctoral candidates seem unaware that examiners often read exegeses first and engage with the artefacts after assessing if candidates have read enough in the field.

Indeed, one of my students recommended an order of reading and watching for her examiners, moving between four chapters and films.

The examiner responded in her report - bristling - that she would not be told how to evaluate a thesis: she always read the full exegesis and then decided whether or not to bother seeing the films. My student - thankfully - passed with ease, but this examiner told a truth that few acknowledge.

Most postgraduates I talk with assume that the examiners rush with enthusiasm to the packaged DVD or CD, or that they will not read a word of the doctorate until they have seen the exhibition.

This is the same assumption that inhibits these students in viva voces. They think that they will be able to talk about “art” and “process” for two hours. I have never seen that happen. Instead, the emphasis is placed on the exegesis and how it articulates the artefact.

Postgraduates entering a doctoral programme to make a film or create a sonic installation subject themselves to a time-consuming and difficult process.

If the student neglects the exegesis until the end of the candidature and constructs a rushed document about “how” rather than “why” it was made, there will be problems.

The best students find a way to create “bonsai” exegeses. They prepare perfectly formed engagements with theory, method and scholarship, but in miniature. They note word limits, demonstrate the precise dialogue between the exegesis and artefact, and show through a carefully edited script that they hold knowledge equivalent to the “traditional” doctoral level.

10. Submit a PhD with a short introduction or conclusion

A quick way to move from a good doctoral thesis to one requiring major corrections is to write a short introduction and/or conclusion. It is frustrating for examiners. We are poised to tick the minor corrections box, and then we turn to a one- or two-page conclusion.

After reading thousands of words, students must be able to present effective, convincing conclusions, restating the original contribution to knowledge, the significance of the research, the problems and flaws and further areas of scholarship. Short conclusions are created by tired doctoral students. They run out of words.

Short introductions signify the start of deeper problems: candidates are unaware of the research area or the theoretical framework. In the case of introductions and conclusions in doctoral theses, size does matter.

Hope washes over the start of a PhD candidature, but desperation and fear often mark its conclusion. There are (at least) ten simple indicators that prompt examiners to recommend re-examination, major corrections or - with some dismay - failure. If postgraduates utilise these guidelines, they will be able to make choices and realise the consequences of their decisions.

The lessons of scholarship begin with intellectual generosity to the scholars who precede us. Ironically - although perhaps not - candidatures also conclude there.

Saturday, February 22, 2014

The Myth Behind Public School Failure

Schools In (Photo credit: samsungtomorrow)

Until about 1980, America’s public schoolteachers were iconic everyday heroes painted with a kind of Norman Rockwell patina - generally respected because they helped most kids learn to read, write and successfully join society.

Such teachers made possible at least the idea of a vibrant democracy.

Since then, what a turnaround: we’re now told, relentlessly, that bad-apple schoolteachers have wrecked K-12 education; that their unions keep legions of incompetent educators in classrooms; that part of the solution is more private charter schools; and that teachers as well as entire schools lack accountability, which can best be remedied by more and more standardized “bubble” tests.

What led to such an ignoble fall for teachers and schools? Did public education really become so irreversibly terrible in three decades? Is there so little that’s redeemable in today’s schoolhouses?

The beginning of “reform”

To truly understand how we came to believe our educational system is broken, we need a history lesson.

Rewind to 1980 - when Milton Friedman, the high priest of laissez-faire economics, partnered with PBS to produce a ten-part television series called Free to Choose.

He devoted one episode to the idea of school vouchers, a plan to allow families what amounted to publicly funded scholarships so their children could leave the public schools and attend private ones.

You could make a strong argument that the current campaign against public schools started with that single TV episode.

To make the case for vouchers, free-market conservatives, corporate strategists, and opportunistic politicians looked for any way to build a myth that public schools were failing, that teachers (and of course their unions) were at fault, and that the cure was vouchers and privatization.

Jonathan Kozol, the author and tireless advocate for public schools, called vouchers the “single worst, most dangerous idea to have entered education discourse in my adult life.”

Armed with Friedman’s ideas, President Reagan began calling for vouchers. In 1983, his National Commission on Excellence in Education issued “A Nation At Risk,” a report that declared, “the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people.”

It also said, “If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war.”

Sandia Infographic

For a document that’s had such lasting impact, “A Nation At Risk” is remarkably free of facts and solid data.

Not so the Sandia Report, a little-known follow-up study commissioned by Admiral James Watkins, secretary of energy under George H.W. Bush; it discovered that the falling test scores which caused such an uproar were really a matter of an expansion in the number of students taking the tests.

In truth, standardized-test scores were going up for every economic and ethnic segment of students - it’s just that, as more and more students began taking these tests over the 20-year period of the study, this more representative sample of America’s youth better reflected the true national average. It wasn’t a teacher problem. It was a statistical misread.

The government never officially released the Sandia Report. It languished in peer-review purgatory until the Journal of Educational Research published it in 1993.

Despite its hyperbole (or perhaps because of it), “A Nation At Risk” became a timely cudgel for the larger privatization movement.

With Reagan and Friedman, the Nobel Prize-winning economist, preaching that salvation would come once most government services were turned over to private entrepreneurs, the privatizers began proselytizing to get government out of everything from the post office to the public schools.

Corporations recognized privatization as a euphemism for profits. “Our schools are failing” became the slogan for those who wanted public-treasury vouchers to move money into private schools. These cries continue today.

The era of accountability

In 2001, less than a year into the presidency of George W. Bush, the federal government enacted sweeping legislation called “No Child Left Behind.” Supporters described it as a new era of accountability - based on standardized testing.

The act tied federal funding for public schools to student scores on standardized tests. It also guaranteed millions in profits to corporations such as Pearson PLC, the curriculum and testing juggernaut, which made more than $1 billion in 2012 selling textbooks and bubble tests.

In 2008, the economy collapsed. State budgets were eviscerated. Schools were desperate for funding. In 2009, President Obama and his Education Secretary, Arne Duncan, created a program they called “Race to the Top.”

It didn’t replace No Child Left Behind; it did step in with grants to individual states for their public schools. Obama and Duncan put desperate states in competition with each other.

Who got the money was determined by several factors, including which states did the best job of improving the performance of failing schools - which, in practice, frequently means replacing public schools with for-profit charter schools - and by a measure of school success based on students’ standardized-test scores that allegedly measured “progress.”

Since 2001 and No Child Left Behind, the focus of education policy makers and corporate-funded reformers has been to insist on more testing - more ways to quantify and measure the kind of education our children are getting, as well as more ways to purportedly quantify and measure the effectiveness of teachers and schools.

For a dozen or so years, this “accountability movement” was pretty much the only game in town. It used questionable, even draconian, interpretations of standardized-test results to brand schools as failures, close them, and replace them with for-profit charter schools. 


Finally, in early 2012, then-Texas Education Commissioner Robert Scott kindled a revolt of sorts, saying publicly that high-stakes exams are a “perversion.”

His sentiments quickly spread to Texas school boards, whose resolution stating that tests were “strangling education” gained support from more than 875 school districts representing more than 4.4 million Texas public-school students. Similar, if smaller, resistance to testing percolated in other communities nationally.

Then, in January 2013, teachers at Seattle’s Garfield High School announced they would refuse to give their students the Measures of Academic Progress Test - the MAP test.

Despite threats of retaliation by their district, they held steadfast. By May, the district caved, telling its high schools the test was no longer mandatory.

Garfield’s boycott triggered a nationwide backlash to the “reform” that began with Friedman and the privatizers in 1980.

At last, Americans from coast to coast have begun redefining the problem for what it really is: not an education crisis but a manufactured catastrophe, a facet of what Naomi Klein calls “disaster capitalism.”

Look closely - you’ll recognize the formula: Underfund schools. Overcrowd classrooms. Mandate standardized tests sold by private-sector firms that “prove” these schools are failures. Blame teachers and their unions for awful test scores.

In the bargain, weaken those unions, the largest labor organizations remaining in the United States. Push nonunion, profit-oriented charter schools as a solution.

If a Hurricane Katrina or a Great Recession comes along, all the better. Opportunities for plunder increase as schools go deeper into crisis, whether genuine or ginned up. 

The reason for privatization

Chris Hedges, the former New York Times correspondent, appeared on Democracy Now! in 2012 and told host Amy Goodman the federal government spends some $600 billion a year on education - “and the corporations want it. That’s what’s happening. And that comes through charter schools. It comes through standardized testing. And it comes through breaking teachers’ unions and essentially hiring temp workers, people who have very little skills.”

If you doubt Hedges, at least trust Rupert Murdoch, the media mogul and capitalist extraordinaire whose Amplify corporation already is growing at a 20 percent rate, thanks to its education contracts.

“When it comes to K through 12 education,” Murdoch said in a November 2010 press release, “we see a $500 billion sector in the U.S. alone that is waiting desperately to be transformed by big breakthroughs that extend the reach of great teaching.” Corporate-speak for, “Privatize the public schools. Now, please.”

In a land where the free market has near-religious status, that’s been the answer for a long time. And it’s always been the wrong answer.

The problem with education is not bad teachers making little Johnny into a dolt. It’s about Johnny making big corporations a bundle - at the expense of the well-educated citizenry essential to democracy.

And, of course, it’s about the people and ideas now reclaiming and rejuvenating our public schools and how we all can join the uprising against the faux reformers.

Dean Paton wrote this article for Education Uprising, the Spring 2014 issue of YES! Magazine. Dean is executive editor of YES!

Teaching Resilience: Reflection

Daniel Goleman (Photo credit: Wikipedia)
by Kevin D. Washburn, SmartBlog on Education:

“I’m so stupid. I’ll never get this!” The message looped inside Kent’s mind, its echoes blinding him to any way forward.

When his teacher came by, she assumed he was daydreaming and not giving the practice exercises any effort.

A reprimand followed, Kent looked back at the work in front of him, and the audio loop returned. “I’m stupid,” it reverberated. “I’ll never get this.”

In addition to imagination, fostering students’ reflection abilities helps them develop resilience.

We can equip students to think their ways out of defeat and into healthy mind states where learning - deep learning, in fact - can happen.


Reflection comprises the ability to monitor one’s own thinking (metacognition) and the ability to engage strategies that make positive adjustments (self-direction). It involves three phases.

Phase 1: What am I thinking now?

This seems basic, and yet this first step may be the most elusive. To redirect thinking, which precedes renewed effort, an individual must first recognize his current state of mind. Kent, the student in the opening paragraph, may replay that defeating message without realizing he’s doing so.

Self-awareness is not the mind’s default state. A study conducted a few years back illustrates this. Researchers theorized that young people diagnosed with ADHD might be able to redirect their attention if they are made aware of their distraction.

To test this, researchers set up mirrors near the work areas of several students. When a student became distracted and looked up from his work, the first thing he saw was his distracted self in the mirror. Once students recognized their distraction, most were able to redirect their attention and complete the assigned task.

This unawareness of one’s current mental state is not limited to individuals with ADHD. Research suggests most of us have blind spots where a mirror - literal or figurative - could help.

Daniel Goleman explains, “… those who focus best are relatively immune to emotional turbulence, more able to stay unflappable in a crisis and to keep on an even keel despite life’s emotional waves.”1

Keeping on an even keel requires recognizing when the boat is being rocked. Awareness precedes course correction.

Strategy for working with students

When I encounter a student who appears frozen by distraction or uncertainty, I ask the student to talk - specifically, I ask the student to think aloud.

At this point, I’m not as interested in his thinking regarding how to complete the task as I am in what he is mentally telling himself. I may prompt, “What are you thinking about yourself as you try to complete this task?”

It is important to guide the student to consciously recognize the “messages” he is telling himself. Once we identify the self-talk that is taking place, we can work on changing the “conversation.”

To some, this may sound more like psychobabble than teaching, but I experienced the influence of such an approach when I was in fourth grade. Not all self-defeating messages are belittling in nature.

Growing up the youngest of four children, I frequently felt like I was in catch-up mode. My siblings were accomplishing things that were beyond my years. Somewhere in middle elementary school, this “race” made me think that I had to be the first one in my class to complete any assignment.

I rushed heedlessly through each task and felt victorious if my work formed the foundation of the completed pile.

After a sprint through some assignment, my teacher called me aside. “What’s going on in your head?” she asked me. I told her I was just trying to get my work done. “No,” she said, “you’re trying to be the first one done every time. Why? What are you thinking when you are working on assignments?”

She persisted, and I voiced my thoughts, which enabled us to redirect my thinking. I became a better student that day, not just in school, but in all of my endeavors. One teacher, taking the time and interest to dig for the source of a problem, can make a significant difference.

Phase 2: What can I tell myself to redirect my energy?

Self-talk is one of the most powerful cognitive tools available. As Jim Afremow explains, “thoughts determine feelings,” and “feelings influence performance.”2

Using self-talk effectively is an act of control. When a student takes control of her mental messages, she is on her way to redirecting her efforts and increasing her learning.

In the famous “marshmallow test,” researchers asked the children who resisted eating the marshmallow right away what they did to withstand the temptation.

Several indicated that they talked to themselves, with messages like “You can do this. Try to wait for one more minute” and “Make this fun. Imagine what else that thing could be besides a marshmallow.” What an example of using self-talk to distract oneself!

“The mind guides action,” explains Antonis Hatzigeorgiadis. “If we can succeed in regulating our thoughts, then this will help our behavior.”3

Instructive self-talk, the act of “talking” through the details of how to do something successfully, is more effective than self-esteem-boosting messages (e.g., “I’m the best!”), in part because the brain has difficulty accepting a compliment that lacks an associated accomplishment, and in part because instructive self-talk increases the mindfulness with which a student approaches a challenge.

Strategy for working with students

A struggling student is the most likely to be trapped in a self-defeating cognitive whirlpool. Once the defeating thoughts are identified, a teacher can guide a student to healthier mental messages.

My favorite approach for this phase comes from Robert Brooks: “This strategy you’re using doesn’t seem to be working. Let’s figure out why and how we can change the strategy so that you are successful.”4

This response a) directs focus to the strategy rather than the student (i.e., fixing the strategy rather than the student), b) makes the teacher a partner in analyzing the error and in determining how to change the strategy, and c) communicates the teacher’s belief that the student can be successful.

Once the attention shifts to the strategy, the teacher can ask the student to engage in instructive self-talk, speaking aloud the process of accomplishment related to the task. This gives the teacher immediate formative feedback.

From there, figuring out what’s wrong with the strategy and redirecting the student’s thinking and actions becomes natural.

Phase 3: What went wrong?

Guiding students through the process of self-awareness and redirecting their mental energies creates a powerful learning opportunity. When our brains do not achieve an expected outcome from our efforts, be they cognitive or physical or a combination, we experience a feeling of disappointment.

That feeling indicates that at that moment we are primed for learning, but - and this is critical - only if we are willing to attend to and examine our errors.

That means that when students make errors, when they struggle, we have a great opportunity to spark deep learning, but only if we respond to students’ mistakes effectively and help them analyze errors.

Strategy for working with students

As Brooks’ approach suggests, figuring out why a student’s efforts have not been successful is an important step. While it’s more efficient to tell a student the right way to do something and expect him to do as told, such an approach fosters compliance while avoiding learning.

Guiding a student to figure out what went wrong increases the likelihood of insight - that moment when the student says, “Oh! I get it!”

Error primes the mind for insight, and analyzing the error invites it. Ask the student to think aloud as he revisits his thoughts and actions.

Many times, just doing this enables the student to discover his own mistakes and lingering questions. If not, the teacher can point out where the thinking went astray and redirect the student toward deeper understanding or effective action.

As imagination enables a student to visualize “making that scene really happen” (the scene in which the student experiences success), reflection empowers a look back in order to determine a way forward.

In life and in the classroom, the one doing the thinking is doing the learning. When thinking ceases and self-defeating messages crescendo, we can guide students to healthier states of mind and, in the process, equip them to make such cognitive turns on their own.

Kevin D. Washburn (@kdwashburn) is the executive director of Clerestory Learning, author of instructional-design model Architecture of Learning and instructional-writing program Writer’s Stylus, and co-author of an instructional-reading program used by schools nationwide. 

He is the author of “The Architecture of Learning: Designing Instruction for the Learning Brain” and is a member of the International Mind, Brain and Education Society and the Learning & the Brain Society. Washburn has taught in classrooms from third grade through graduate school.


1. Goleman, D., Focus: The Hidden Driver of Excellence (New York: HarperCollins, 2013) 15.

2. Afremow, J., The Champion’s Mind: How Great Athletes Think, Train, and Thrive (New York: Rodale, Inc., 2013) 38.

3. Hatzigeorgiadis, A., as quoted in Afremow, J., The Champion’s Mind: How Great Athletes Think, Train, and Thrive (New York: Rodale, Inc., 2013) 39-40.

4. Brooks, R., “Mindsets for School Success: Effective Educators and Resilient, Motivated Learners.” (presentation at Learning and the Brain: Using Brain Research to Enhance Cognitive Abilities and Achievement, November 2007).