Ten Things Our Grandparents Got Right #5: They Didn’t Treat Teenagers Like Infants

Picking up from my last post’s ellipsis, I feel I need to address the infantilisation and outright ageism displayed by adults toward teenagers. This rather repugnant reincarnation of genetic determinism (for which there is no good evidence, and which Stephen Jay Gould spent much of his career combating) is particularly dunderheaded when you take into account the plasticity of the brain, which we are only now beginning to understand. “Don’t talk for more than ten minutes on any subject”, we were told in teachers’ college, “because the adolescent brain has an attention span that tiny, and there is nothing anyone can do about it”. The contradictory complaint that attention spans are getting shorter, often repeated by the same people, never seemed to present a serious challenge to the accepted wisdom. Surely if attention spans can shrink, they can also grow.

Under the deterministic model, adolescents’ potential is viciously undercut, and a condescending attitude of pandering to existing biases, tastes, knowledge, interests, and capabilities is adopted, with real change or growth almost completely negated. “Teenagers are lazy and surly because of physiology, or perhaps hormones”, people say, though there is no good evidence of this being true, either in the historical record or in other cultures existing today. And the list goes on: teenagers are incapable of making rational decisions because of brain chemistry, not because they are systematically denied the opportunity to practice making good decisions on a daily basis. Teenagers are not punctual because of circadian rhythms unique to them, and not because of poor sleep and nutrition habits that are actively encouraged by our society. Here’s an article that actually suggests that teens’ supposedly biologically based inability to process risk effectively might be the result of sleep deprivation! As an ex-soldier, I know the effects of sleep deprivation on the human psyche, and it surprises me that nobody made that link earlier. I have heard more than once the lament that “if we really wanted teenagers to pay attention, we wouldn’t hold classes before ten o’clock”. Recently, this view has been challenged; the only remaining question is how such a parochial view could have survived this long. Anyone who travels outside of North America or who has read history (think Alexander the Great, Joan of Arc, Augustus Caesar, Mary Shelley, Louis Braille, et al.) will shake his head at such statements of the inevitability of “the teenage brain” and its limitations. The list of historically significant teenagers is as long as my arm, at least until about the middle of the twentieth century, when they suddenly became incapable of surviving the most basic of situations, such as wearing hats that light up. This photo is taken from the wonderful Free Range Kids blog, which also has so many first-hand anecdotes of infantilisation in America that it sometimes makes me afraid to read it.

Fourteen-year-olds would burst into flames.

For starters, the concept of the teenager as a separate class of individual, or a distinct stage in life, is a very recent invention – the term itself is not attested before 1941. Now the invention of the ‘tween’ is pushing it even further; that term is only attested since 1988. Theories vary as to exactly what purpose the invention of such a repugnant, incapable figure was supposed to serve, but John Taylor Gatto, Neil Postman, and Dr Robert Epstein have some suggestions.

Postman, in The Disappearance of Childhood, argues eloquently that childhood in general is a socially constructed phenomenon. He points out that the idea of childhood, a time of life in which one is supposed to be controlled by a sense of shame and protected from such things as knowledge of adult sexuality, was a product of the end of the Middle Ages and the rise of the printed word. A boy of seven years old was, for all practical purposes besides “making love and war” (p. 15), capable of every meaningful act in Mediaeval society. He could speak and do labour, and in a predominantly oral culture, these were all that was needed for maturity and inclusion in the social structure. There was no need or possibility in a Mediaeval culture for the keeping of secrets; privacy was hardly a concept at all, and close quarters and the lack of any need for reading skill made knowledge a general commodity.

But when the written word became the new means to record, keep, and guard the culture’s knowledge base, institutions like educational systems were needed to induct the child into the world of adults. This effectively stretched the time of childhood from seven years to the end of schooling. As Postman points out, before the 16th century, there were “no books on child-rearing, and exceedingly few about women in their role as mothers […] There was no such thing as children’s literature […], no books on pediatrics. […] Paintings consistently portrayed children as miniature adults […] The language of adults and children was also the same. There are […] no references anywhere to children’s jargon prior to the 17th century, after which they are numerous.” (18). Children did not go to school, because there was nothing to teach them. With print, however, the definition of childhood changed, from one based on linguistic incompetence to one based on reading incompetence. Instead of simply becoming adults by ageing, children had to earn adulthood through education – and the European states invented schooling to accomplish this new process. Childhood, as Postman notes, became “a necessity” (36).

Later, with industrialisation, threats to this newfound idea of childhood emerged. The new urban demand for factory and mine workers supported the “penal” aspects of schooling, meant to break the will of the child and accustom him to the routine labour of factory work. In response, child labour laws were introduced, enshrining childhood as sacrosanct. Though Postman sees the growth of elementary education after 1840 as evidence of the triumph of the notion of childhood over industrial capitalist concerns, J.T. Gatto sees it somewhat differently.

Gatto, an award-winning teacher who speaks now against institutionalised education, argues that the modern American education system never outgrew its penal origins, and in fact goes further, saying that the system is set up more or less deliberately to bring about the class of uncritical, bored, dissatisfied consumers that is important for the corporate model of capitalism to flourish. Children were being actively groomed by industrial influencers of education systems to become not citizens or human beings, but “human resources”, to be moulded to fit something called a “workplace”, “though for most of American history American children were reared to expect to create their own workplaces.”

The subdivision of childhood into adolescence, and now pre-adolescence (the “tween” phenomenon), is something that Robert Epstein has written on. Epstein, in his book The Case Against Adolescence, argues from the point where Postman and Gatto leave off, during the industrialisation of America. He sees the creation of the adolescent as a kind of benevolent but destructive side effect of the social reforms that reacted against the admitted evils of the Industrial Revolution with regard to children’s rights. The creation of institutions such as child labour laws, compulsory education, the juvenile justice system, and age-specific restrictions on “adult” activities such as driving, drinking alcohol, and smoking had, according to Epstein, the effect of isolating the child’s world from the adults’ almost totally. Teenagers are confined to a mostly (by definition) developmentally incomplete peer group, and their dependency is extended by more than a decade before they are required to enter the adult world after school – this despite the fact that their sexual maturity and mental readiness for such a transition are evident from a much earlier age. In a study, Epstein found that teenagers have ten times as many restrictions placed upon their behaviour as normal adults, and twice as many as felons and soldiers! The rise in such restrictions exactly parallels industrialisation, and jumps significantly after World War II.

Epstein picks up an argument from Postman, and suggests that the studies that purport to “show” the biological cause of the supposedly innate surliness and incapacity of teenagers are flawed, in that they show only correlation, not causation. Given the plastic nature of the brain, I myself would expect to find that such correlations run backwards, meaning that the social restrictions on teen behaviour are to blame for the state of their brains. The argument that brain scans “prove” the innate uselessness of teenagers in such areas as risk assessment or impulse control sounds to me about as useful as “scanning” the musculature of a teenager who has never lifted weights, and declaring him “unfit” and biologically incapable of ever being an athlete.

In fact, the list of famous “characteristics” of teens proves to be mostly made up. Margaret Mead points out in her studies of adolescents in Samoa that the traditional “storm and stress” of North American teenage development is nowhere to be found in that culture, or any other preliterate culture. There is no term for adolescence in the majority of such societies. Even the list of undesirable teen behaviour in our own society, summarised by Philip Graham as “identity confusion, extreme moodiness and high rates of suicide, violent discord with parents, aggressive behaviour, intellectual and emotional immaturity, risk taking, and sexual promiscuity arising from the raised secretion of sex hormones” has been shown to be common to less than 20% of the age group in question. Hardly a useful list of descriptors, then.

And as for other biased assumptions about teenage behaviour, such as the idea that teenagers are addicted to, and misusing, technology? It turns out we ought to have a good look in the mirror here too: adults were found in one study to abuse technology at a higher rate than their kids.

Why are these ideas so pervasive and so tenacious? The original study of adolescence was done in 1904, by G. Stanley Hall. He observed the turmoil on American streets caused by industrialisation and massive waves of immigration, unsupported by proper social structures. Concluding from this that all adolescents necessarily exhibited those nasty characteristics mentioned above, he drew on the now-long-debunked theory of biological “recapitulation”, in which the development of the individual mirrors the development of the species. In that model, adolescence “recapitulated” a savage, pre-civilised phase of the development of Homo sapiens, and such a period would naturally be expected to bring turmoil with it. He borrowed the German Romantic idea of Sturm und Drang and applied it universally to all teens, claiming biology as the cause. Though the field of biology has long since abandoned such theories, the general public has not kept pace.

Of course, I have also found through my years of teaching that what you expect of a person is generally what you will get. As Eliza Doolittle says in Shaw’s Pygmalion,

“You see, really and truly, apart from the things anyone can pick up (the dressing and the proper way of speaking, and so on), the difference between a lady and a flower girl is not how she behaves, but how she’s treated. I shall always be a flower girl to Professor Higgins, because he always treats me as a flower girl, and always will; but I know I can be a lady to you, because you always treat me as a lady, and always will.”

Sadly, we live in a culture where the treatment of young adults is infantilising (do the test here!), demeaning, controlling, and stultifying. Perhaps it’s not entirely adults’ fault, though; as Postman points out, adults are the result of the same process of education that we subject our children to. Literacy was once the dividing line between childhood and adulthood (an idea enshrined in the notion of the creation of a free state during the American Revolution), but industrialisation brought with it technologies that made actual familiarity with the written word obsolete. The telegraph, radio, television, and the Internet have taken over where literacy left off, producing generations of adults who have had unfettered access to information, but no sequential, age-appropriate introduction to discerning its meaning. The very definition of childhood as an idea, rather than just the biological stage of individual development it is now conceived to be, depended on a slow indoctrination into greater knowledge through increasingly complex mastery of literacy. Now, who actually reads anything by people like Barack Obama, Stephen Harper, George Bush, or Ronald Reagan? Would they be rewarded if they did? Though our Canadian society has succeeded in producing generations of functionally literate people, we are increasingly reverting to a Mediaeval-style oral culture, in which even people who can read generally do not, and most of those who do cannot do so very well. The line between childhood and adulthood is blurred, and brain scans show adolescent development continuing well into the mid-twenties in North American subjects — “coincidentally”, about the same time as many post-secondary students are leaving school. I would dearly love to see brain scans of people other than the Westernised college students who are the typical subjects of such studies. My intuition is that they would be vastly different at comparable ages.

The assumption that the tastes and interests of a teenager are equally fixed, never to grow, was made clear to me in a textbook on English grammar much in use several years ago, in which every sentence, in order to be palatable to what grammar-textbook publishers assumed teenagers’ interests were, had to have something to do with skateboarding. To me, this attitude is no better ethically speaking, and has just about as much science behind it, as the old idea of the genetic inferiority of slaves. The problem is, with bandwagoning, it’s difficult to get off the wagon, or out of its way. I once had a principal (a fellow with a science background, who ought to have known better) who hawked these unpleasant wares at every staff meeting and P.D. day, much to my annoyance. Years later, after his retirement, he admitted to me that he knew full well all along that it was bunk, but claimed that he found it a useful tool for management. He told me that “we have to work with something” – a foolish imperative that always makes me think of the show Yes Minister, where it was put in syllogistic form:

1. Something must be done.
2. This is something.
3. Therefore, this must be done. 

Teaching is often a surreal experience.

We were presented in Teachers’ College with the interesting model of Howard Gardner’s Multiple Intelligences, and told that we must adjust our teaching techniques to all of them, regardless of their relevance or applicability, because “students can only learn in certain ways”. Every lesson had to touch on as many of the Intelligences as possible, and administrators’ evaluations of teachers would be based on a handy checklist and cursory observation. Imagine trying to incorporate kinaesthetic learning into a lesson on punctuation or grammar! This led to all kinds of silliness, like hopping up and down to simulate semicolons, from which the better teachers miraculously managed to salvage some memorable learning experiences. Since then, Gardner’s theory has come under closer scrutiny, and has been largely debunked, at least in the absolutist terms under which it was adopted in schools. Here’s a quick video outlining the basic flaws in the theory:

Far from being deterministic learning “styles”, they appear to be mere preferences, and there is no good evidence that pounding a round lesson into one of its square holes does anything to help learning at all. Instead, a good teacher will understand which kinds of tools are applicable and effective, given the nature of the ideas or skills being taught. In other words, according to Professor Daniel Willingham, “While there’s little evidence that matching one’s teaching style to one’s students’ learning styles helps them learn, there’s much stronger evidence that matching one’s teaching style to one’s content is wise.”

Why this obviously silly meme has stuck around for so long, and had such an impact on systems of education is a bit of a mystery, but I have the following observations, which might shed some light: The first half of the equation comes from good intentions, I think: most teachers or educators feel a calling and a social responsibility to their profession. We’re often caring to a fault, and this is an example of the ‘fault’: our predisposition to believe that our job involves finding the “hidden learner” in every student blinds us to the lack of evidence for this particular incarnation of that impulse. A kind of Confirmation Bias, if you will. The idea of Multiple Intelligences (which is a description of ability, not of style), bent slightly to suit our notion of being teachers who care deeply about individual students’ learning, is powerfully appealing. We want to believe in it, because it reinforces pre-existing beliefs that we have brought to our profession, but regardless of how admirable those beliefs might be from an ethical standpoint, if they do not fit the actual facts, they ought to be altered or abandoned. Recently, a study by Daniel B. Klein of George Mason University uncovered what he thought was a type of intellectual bias in Liberal-minded respondents to a survey. When it was pointed out to him that the survey he had provided might be biased, he re-wrote it, and found bias in those of Conservative bent. Then he wrote with some humility and intellectual frankness about his own Confirmation Bias – two attributes that my profession could certainly benefit from.

The second reason this meme is so prevalent in schools, in my opinion, is not that it is correct, nor that it is touted by teacher-ed texts (Daniel Willingham has looked at the course syllabi of teacher education courses and found no evidence of it being ‘officially’ sanctioned), but the management models used to evaluate teaching ability. When a principal is charged with evaluating the prowess of the teachers in his or her school, and has to report those findings upward to his or her own “managers”, the same silliness happens as when we are evaluating our students: we want to fall back on measurables. It’s a lot easier to carry a clipboard into a teacher evaluation and tick off “yes” or “no” to a question like, “Does the teacher address the students’ learning styles individually?” than to actually make complex judgements about a very fluid and complicated problem like evaluating “good teaching”. So it’s partly a question of efficiency, just as being forced by the requirements of reporting student learning (a vastly complex and mostly abstract concept) in terms of percentage grades results in us asking stupid questions on tests that focus only on measurable, concrete facts, rather than on the more important aspects of higher-level thinking. I once asked a question on a test that required students to place in order several events from a novel we were studying in class: something that assessed both their memory of the details of their reading and their understanding of the cause-and-effect relationship between the events. I was forced to abandon the perfectly valid question because it was essentially ungradeable – as soon as one event is out of order, a domino effect takes place and makes it impossible to give a numerical evaluation of how close to being ‘right’ the student was. If anecdotal comments, or even a conversation, were the method of relaying to a student the quality of their understanding, I wouldn’t have lost a potentially valuable assessment tool. A managerial model of reporting quantifiables upward along a chain of command, ultimately to a political bureaucracy, just does not work when dealing with something as complex as human learning.

Partly, though, it’s more insidious than just the self-perpetuating efficiency of a system. Sadly, the two halves of the equation often come together in unsavoury ways: when the principal asks “Is the teacher hitting enough of the learning styles in his lessons?” the implied subtext is often, “Is the teacher caring enough toward his students?” This creates a lot of pressure for the meme to become accepted, or at least to go unquestioned, in teacher circles, at least when administration is present. It’s an unspoken type of ad hominem: between the lines is the question, “Do you really care about children?” I think this is how a lot of silly educational buzzwords are preserved, actually: they’re tied to teacher performance reviews. A lot of it is just lip service, as is suggested by the number of teachers who in private conversations will question the meme, but it still has an effect.

I am calling here for greater intellectual and moral courage on the part of teachers to stand up against policy that is not evidence-based. Here in Canada, under a government that is apparently actively anti-evidence, this is a tall order. But we’ve got to start.


Things Our Grandparents Got Right #4: They Didn’t Try to Educate Us for the “Future”

Part Two

In the last post, I outlined the basic futility of trying to educate our children (“train” them would, I suppose, be a better word) for a specific set of skills that would be useful under specific economic circumstances in the future. I entered the job market, in my mid-twenties, at the very tail end of the 20th century. My elementary school education, during the 1970s and 80s, could not possibly have prepared me for a job market in the context of a recession that nobody had predicted, and in which the major emphasis was on jobs in fields that had not yet been invented when I was going to school. On top of that, several years later, the I.T. bubble burst, and all the jobs that were supposedly available to those with a very specific skill set suddenly disappeared. Nobody really predicted that one, either. In fact, there is good reason to believe that nobody will ever reliably predict economic futures.

Employers, for their part, have been making it plain for years that the specific software skills prospective employees arrive with matter less than the skills they bring in areas like problem solving, creativity, social adaptation, and communication. Training can always be done (and in my opinion, should be done at the expense of employers, not the public) in situ for whatever tasks employees will be asked to perform. The ability to learn quickly and efficiently from that training, by being punctual, polite, open-minded, critical, creative, and proactive, is what makes prospective employers drool. I’m not somebody who believes that the purpose of education is to provide employers with workers, but if you are, then it should matter to you that, by all accounts, employers aren’t happy with the quality of worker they’re being given. It seems that most of them would trade ten technically skilled applicants for a single well-spoken, well-socialised, clear-thinking applicant who can adapt and learn quickly.

 The problem with the future, as I’ve said, is that nobody knows what it will look like.  Its inevitability, though, makes us fill the yawning blankness in front of us with all kinds of hopes and fears – all of which come from our own past experiences, projected upon the future in a kind of collective psychological paroxysm of denial.  The future becomes a canvas upon which all of our present anxieties work themselves out in public.  There are some problems that attend the belief that we actually can educate kids for the future, though, and some of them aren’t as obvious as they should be.

First, there’s the danger of disregarding good ideas because of their novelty, in favour of something that is comfortable but has no good evidence to support its use. The unconscionable refusal of schools to listen to the growing body of evidence that grading not only fails to assist the process of learning, but is actively detrimental to it, has gone on far too long. This is an enormous subject that really deserves a whole post to itself, which I will be glad to provide sometime later. It is certainly possible to view the past through rose-coloured glasses, and ignore real harms done by practices which have the force of habit, but not of reason. Often, the desirability of the practice in question is doubted even by its proponents, but it is urged anyway on the assumption that if it was bad enough for one generation, it ought to be bad enough for the next. Sometimes this is accompanied by what Alfie Kohn has called the “BGUTI” clause, or “Better Get Used To It”, wherein the future is assumed to be filled with horrible, arbitrary uses of power, to which we must train our children to submit. This does not seem to me to be a noble ambition for our children.

Second, there is the danger of using this “Golden Age” of education disingenuously, as a way to discourage real progress. Educational reformers, especially those who advocate changes based on conserving parts of systems of education that have been proven to work well, are accused of “living in the past” and stifling innovation through their delusion. Again, Alfie Kohn provides us with examples of the kinds of “educational reform” sweeping through his nation, the United States, detailing how they are often merely disguised conservative movements, based in ideology rather than facts, and too often designed to line the pockets of those who put them forward.

Third, there is the danger of defining the ‘future’ in terms that are too narrow by far. Too many educators see the “big picture” of the future of high school students as the end of their four-year stint with us and the awarding of the diploma. After all, “studies have shown” that kids without a high school diploma are more likely to be economically and socially disadvantaged later on, right? This is often seen to be the legitimate outcome of being deprived of the benefits of the type of education we offer, and not the result of rampant credentialism. I always try to educate with the long-term goal of producing thoughtful and mature human beings who will continue to think and learn as long as their brains hold out. And there seems to be good evidence that Alzheimer’s Disease can be mitigated by strong habits of thought, so I’m happy to consider the long term to be roughly “the rest of their natural lives”. And maybe longer, if they teach their kids healthy habits of mind.

 Fourth, there’s the danger of throwing the baby out with the bathwater.  All of the posts about our grandparents’ “outdated” methods and ideas address this issue.  Certainly, they did a lot of backward, even harmful things in the name of education (many of which I abhor, and will address in later posts), but that does not mean that they had not found certain practices that actually worked.  Their nearly obsessive interest in penmanship, for example, though perhaps emphasised to the point of detriment to other aspects of learning, did have benefits that we miss, now that it’s gone from the curriculum.  Everybody has been through some sort of schooling, and everyone has had bad experiences, bad memories, and bad teaching at one point or another, all of which people insist on telling me about in detail the instant they learn that I am a teacher.  Learning has always been hard work, and ever since Shakespeare wrote about the “whining school-boy, with his satchel /  And shining morning face, creeping like snail /  Unwillingly to school” (As You Like It, II.vii.145-47), we’ve had to bear the brunt of everyone’s residual educational and social angst from high school.  The past, no matter how awkward, stressful, or frustrating, was not all bad, and it is worth preserving the better parts of what our ancestors came up with over many centuries of research and development.  This definition of conservatism in education I am all for.  But how, one asks, can we determine which parts to preserve and which parts to discard?  I would answer that anything that has been demonstrated to be harmful or detrimental in any way to the process of learning ought to be done away with as quickly as possible.  Anything that can be shown to reduce or kill hope outright, or poison students’ innate curiosity and desire to learn, ought to go.  Anything that develops humane perspective, curiosity, and habits of mind that allow learning to be indulged in as a pleasurable (though not effortless) activity for the rest of one’s life ought to be encouraged at all costs.  Encourage flexibility, and discourage rigidity of thought and ideology; otherwise, that great unknown future will wallop our kids when it finally shows up in a form that nobody anticipated.

Fifth, there’s the concomitant danger of bandwagoning: of jumping onto every new idea or educational movement uncritically, and for the sake of novelty itself. Talk to any teacher who’s been teaching more than a few years, and they’ll tell you some stories about this one. Our profession is awash in buzz-words, and though the words themselves sometimes show up in different forms, the range of ideas they represent is surprisingly limited. Often, they’ll come back in roughly ten-year cycles, re-branded and as fresh as a bad penny (to mix a metaphor). For a period of time in the late 1990s and up until a few years ago, one of the buzz-words you’d hear everywhere, presented by administrators as a strange hybrid of Policy, Gospel, and “Best Practice” (the latest euphemism for “toe the line”), was the astonishingly silly phrase “Brain-Based Learning” (is there an alternative organ that could be substituted? It’s only a matter of time before “spleen-based learning” is all the rage). Here’s a quick video detailing the level of skepticism with which we need to approach this concept:

All of which brings me to the last point:

Sixth, and finally, there’s the danger of treating the future (or your limited understanding of it) as inevitable, based on physiology.  This is an important enough topic that it deserves its own entry.  To be continued . . .


Things Our Grandparents Got Right #4: They didn’t try to educate us for the ‘future’.

  Part One

This is kind of counter-intuitive.  The very process of educating children seems to rest on the idea of preparing them to meet their future.  The whole concept presupposes that the end of the process will create an educated member of society, many years down the road.  That part is fine:  of course we want to have a purpose in education, and it seems reasonable that it has something to do with kids becoming adults over time, which kind of implies the involvement of the future.  The problem comes when we start to think we know what that future will look like.

 Ever wonder why so many Science-Fiction movies set in the future are either Utopic (rare) or Dystopic (way more common)?  And have you noticed that all the fashions and hairstyles of these movies are just reflections (usually shinier, or slightly more ridiculous) of styles in vogue at the time the movie was produced?  And when the movie is set in a year that we’ve already lived through, how utterly unlike the reality of that time it is?  Further, have you noticed that these films are usually good indicators of the varieties of social angst that were current when they were made?  How many “Alien Invasion” movies from the 1950s mirror Cold-War fears of foreign infiltration and invasion? 

Who knew that those dresses would still be in style 400 years later?

It shouldn’t really be a shock to us that we can’t read the future.  What’s a lot more shocking to me is how often we act as if we can, and how infrequently we learn from being proved wrong.  Dan Gardner, in his book Future Babble, exposes the degree to which relying on experts, against all intuition to the contrary, actually renders us less able to predict and adapt to the future.

Gardner makes reference to studies that have been done over the years to try to verify the accuracy of expert predictions about the future.  This is, of course, a separate question from the amount of knowledge about a certain subject (gained from studying the past) any given expert possesses. The question is, “Does having a lot of knowledge about a particular subject increase your chance of being right when making predictions about the future of the area of study?”  Some of these studies have been conducted by the media (admittedly not very scientifically).  Here’s Gardner: 

“In 1984, The Economist asked sixteen people to make ten-year forecasts of economic growth rates, inflation rates, exchange rates, oil prices, and other staples of economic prognostication.  Four of the test subjects were former finance ministers, four were chairmen of multinational companies, four were economics students at Oxford University, and four were, to use the English vernacular, London dustmen.  A decade later, The Economist reviewed the forecasts and discovered they were, on average, awful.  But some were more awful than others:  The dustmen tied the corporate chairmen for first place, while the finance ministers came last.” (p.21)

Other more recent examples have also come from the press: if anyone remembers the famous accuracy of Paul the Octopus, the cephalopod who correctly predicted the outcome of all seven of the German team’s matches at the 2010 FIFA World Cup of soccer, as well as the final, they might be amused to hear of other animal ‘predictions’ that put our purported abilities to shame. Chippy the chimpanzee embarrassed famous American pundits by choosing flashcards indicating political outcomes at a higher rate of accuracy than the experts, two months running. In the field of meteorology, Wiarton Willie, the groundhog who predicts the onset of springtime every February second in Ontario, claims on his personal website to be accurate 90% of the time (though a larger study puts groundhog predictions in general over the last 40 years at about 39% accuracy). National weather bureaus claim about 60% accuracy on long-range forecasts, though many think this is too high. Certain ancient traditions of haruspicy are still being practised; a pig farmer in North Dakota who examined the spleens of his pigs to predict the weather boasted of an 85% success rate.

None of these, of course, point to any magical powers possessed by animals.  (A better candidate for a claim of that sort is perhaps to be found in the case of the tsunami of December 2004, in which more than 150,000 people were killed, but relatively few animals, who anecdotally seemed to know that something was about to happen and fled.)  At best, they indicate that when a series of choices is made more or less randomly, the accuracy rate is higher than when experts make them.  This is embarrassing enough, but to find out that one’s chances of being right actually decrease when one’s confidence and expertise increase is downright humbling.

Philip Tetlock, a psychologist at the University of California, conducted the largest experiment on the subject over a number of years, after the spectacular failure of anybody to predict the fall of the Berlin Wall in 1989 and the subsequent collapse of the Soviet empire. He studied 284 experts in politics, economics, and journalism, and compiled 27,450 predictions about the future. Conclusion: the experts would have been beaten by a “dart-throwing chimpanzee”. Some, however, were a lot worse than others: these experts would have vastly improved their accuracy if they had guessed randomly. Tetlock discovered that these experts’ backgrounds or education didn’t explain their inaccuracy; instead, it was their mode of thought. They were particularly uncomfortable with complexity and uncertainty. They worked from an ideology and were extremely confident that it was correct. Tetlock called these experts “hedgehogs”, after the fragment of the poem by Archilochus: “The fox knows many things, but the hedgehog knows one big thing”. The foxes, on the other hand (the experts who had no preconceived ideology, but worked from data, synthesising multiple sources and self-critically correcting for error as they went), did much better, and managed to beat a coin flip. Much has been made recently of studies that appear to show differences in the tendencies of conservatives’ and liberals’ ways of thinking that mirror these broad categories: conservatives tend toward hedgehoginess, and liberals to vulpine leanings.

Interestingly, hedgehogs who are more ideologically extreme are even more likely to be wrong, and their accuracy actually declines when they know a lot about their subject, as well as when they predict over a long time horizon. As Gardner puts it, the lesson is that “if you hear a hedgehog make a long-term prediction, it is almost certainly wrong.” (27) And, of course, the problem is that we get most of our predictions from hedgehogs. They are on TV and in the news all the time: they are confident, educated, knowledgeable experts who are willing to say bold, loud, easy-to-understand things about the future. No media source wants to have foxes on TV; foxes tend to say things like, “It depends,” or discuss things at length, giving a nuanced opinion. And, in the end, though they do much better than hedgehogs, foxes are no prophets: the world is fundamentally complex and unpredictable. You can beat even a fox at predicting the future by predicting that “nothing will change”. The things that are predicted are almost always wrong (remember Y2K? The paperless office? The list is huge), and the things that end up happening, such as the collapse of Eastern Bloc Communism, the Arab Spring, the housing crisis of 2008, and 9/11, leave pundits scrambling to rationalise all the reasons they hadn’t seen anything coming.

So the hubris of predicting things like what the “economy of the future” will be is really just an arrogance born of fear:  we want to educate our children to face what is now, has always been, and will always be, an uncertain future.  All kinds of educational imperatives have been attempted in the name of just that.  The fact remains that we simply don’t know, and are not able to know, what will drive the economic engine of our children’s future.  If we belong to that section of society that believes that the purpose of education is largely economic, then we are pretty much out of luck.  It simply can’t play that role.

In Ontario, where there is little formal attention in the curriculum given to job-specific skill sets, this is less of a problem than elsewhere.  But we can still get sucked into the “education for the future” meme in other ways.  We often talk about education like it is “for” something, in a kind of pragmatic way.  I can’t disagree; I think so too.  I just think that I don’t know what it’s for.  I’ve had ex-students come visit me ten or more years after I taught them.  They always share their memories of the classes they had with me, and it’s a rare moment when their memories match mine.  They’ll sometimes tell me that something I said in class changed their lives – my response is often unspoken, but goes something like this:   I said that?  Huh.  I don’t remember that.  Sounds profound, though.  I’m glad it helped.  Many times the things they remember weren’t part of any official curriculum.  Just some off-the-cuff remark that stuck with them and meant something eventually.  Sometimes it isn’t even anything you say:  sometimes just the long-term effect of your character on a kid will turn things around for him.  I’m always surprised by what they say meant something to them.  It’s rarely something content-related.  That’s where a little humility goes a long way:  I don’t know what is meaningful to them, or what will become so in the future.  I don’t know what part of my experience and worldview will resonate with them.  At the time, it sometimes seems like none of it is making any impact, but they tell me different, years later.  So I teach what I think is interesting, and hope for the best.

Sometimes we answer the question of “what is education for” in a too-limited manner.  Aristotle thinks of the question like this:  Why do we do anything?  Can we follow the trail of motivation to a source?  Something we do for its own sake, and not as a step to something else?  We’re goal-oriented in the West; it seems like we’re often lost without goals.  We go to school, we think, because we want to get into university or college.  Why?  So that we can earn a certificate or degree.  Why?  So that we can use it to get a job.  Why?  To earn money.  Why?  To buy things with.  Why?  (And here’s where the trail usually ends in a capitalist society.)  Because we think they will make us happy.  But why do we want to be happy?  For no reason.  Happiness is its own end.  We think, though, in the goal-oriented rat race of the West, that happiness is an ‘end’ in a kind of final sense:  we think that retirement is the time in your life when all this will eventually pay off.  And so many of us end up waiting until we’re 65 to be happy.  In fact, by that point, many of us are so used to setting goals and postponing happiness that we don’t know what to do with ourselves after we leave our professions.  That’s obviously no way to live your life either.

So the future doesn’t seem to be the way to go when we think about education.  In the next post, I’ll go into why the alternatives, i.e., living in the “golden age” past of education, or else turning education into nothing more than a reinforcement of existing biases, aren’t viable options either.


***WE INTERRUPT THIS BLOG TO BRING YOU NEWS OF A ZOMBIE APOCALYPSE!!***

Hello.

In honour of Hallowe’en, I’d like to spend a moment talking about zombies. 

 What I’m referring to here are horrible, shambling, disjointed things that, though they were put in the ground years ago and really ought to be dead, keep popping back up to eat people’s brains. 

Yes, the Zombie Idea is hard to get rid of.  We in the field of education see more than our share of them, probably because our realm of expertise is so heavily controlled by people who have little or no background in it.  A Zombie Idea is usually one which is ideologically based; these are particularly tough to eradicate.  Though studies are done to find out the reality behind certain ideas people seem to want to have about education, and the results are often as decisive as a shotgun blast to a decaying head, you can bet that within months or even days, the Zombie Idea will lurch back to life and pester you, forcing you to deal with it all over again.

In the field of Law, there’s such a thing as precedent.  Once something is accepted as being true, it takes something really extraordinary to rehash the debate from zero.  The Nuremberg Trials, for example, once and for all put into the ground the defence of “following orders” while committing atrocities.  Nobody can get away with that crap anymore.  Any lawyer trying to slough off his clients’ guilt by claiming that they were following orders won’t get his case heard seriously in court today.  We know it’s not a reasonable defence.  We’ve been through that already.

 I started this blog because I didn’t feel like there was enough of a conversation going on about education.  I definitely am strongly against the curtailing of free speech.  But there comes a time when ideology trumps evidence, and old ideas are brought out, dusted off, and set to work gnawing at our brains for the umpteenth time since whenever.  Merit pay for teachers will separate the wheat from the chaff!  More homework for students will increase their academic success!    Teaching kids about sex will just increase their chances of STDs and pregnancy!  We need to grade students in order to motivate them to do work at school!  And on and on. 

 All of these ideas sound plausible.  “Someone should check to see if that’s true!” is a good response to something that has an air of plausibility.  You would think that we’d hear that response more often.  But there’s news:  we checked.  They don’t work.  They never did work and they never will work, for reasons that are complex, interesting, and fundamental to the process of learning.  They are just, plainly put, wrong.  Smart people, using good testing equipment and procedures, have examined the evidence and found that although they sound good, they are just not true. 

 And that should be that.  From there, we ought to be able to move on and find out why things that sound like they make sense turn out not to be the case.  We might learn something about reality, for instance.  Instead, we have to slog through the same mud over and over again, every time some ideologically-driven wingnut decides to use our profession as a hobby horse.  Education may be a political football, but even footballs eventually make their way down the field. I’m stunned by how the debate went during the last Provincial and Federal elections.  Facts seem to matter little.  Ideology forces many people to ignore them even if they are reported.

I’d like to call here for a much stronger system of parameters for the education debates.  I’d like for the ideas discussed to be based on evidence.  I want policy to be evidence-based, above all.  And I would like a process of precedent to be set up in the public conversation, where we don’t have to explain absolutely everything from scratch, every time.

Sadly, as long as the field of education remains in political hands, and as long as the antiquated hierarchical system within schools is not replaced with a more democratic system wherein educational leaders are elected from among teachers and researchers, this won’t happen.  It’ll just be my ideology versus the zombies’ ideology, swinging back and forth every election. 

Braaaiiiiinnnnnnsss!       Whhyyyy don’t we uuuuuuuusssse theeemmmmm???


Ten Things Our Grandparents Got Right #3: They allowed us to fail

Final Part

Benefits of failure 

All of the factors mentioned in the first three segments of this series of posts have contributed to the notion that risk is unacceptable in any form.  Failure, the constant companion of risk, is just as much of a pariah.  But both are very necessary for healthy development.  David McClelland, of the Harvard University Psychology Department, found that setting goals with a high possibility of failure – somewhere between a 30 and 50% chance – actually helped highly motivated people to improve their skills.  His work in achievement and motivation earned him the APA’s Award for Distinguished Scientific Contributions.

Over at Stanford, Dr Carol Dweck suggests that two different attitudes toward the concept of intelligence can have a huge effect not only on learning, but on anxiety.  A ‘fixed’ mindset is one born of a belief that success and intelligence are innate:  statements like “You’re very bright” accentuate this belief.  Holders of this mindset are upset by the notion of failure, because it so obviously reflects on them as people, at the most essential level.  The ‘growth’ mindset is different:  it assumes that success is the result of hard work, and therefore holders of this mentality fear failure much less:  they’ll just keep trying and learning as they go.  Obviously, these are the innovators and high achievers of our times; the ‘fixed’ mindset leads more often to anxiety and paralysis than to any kind of growth or success.  You can see the two mindsets laid out in this graphic:

Michael Jordan once said on the subject:  “I’ve missed more than 9,000 shots in my career. I’ve lost almost 300 games. Twenty-six times I’ve been trusted to take the game-winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.”  More famous “failures” are highlighted in this short video:

On an even more fundamental level, Gandhi reminds us that “Freedom is not worth having if it does not include the freedom to make mistakes.”  Or, more correctly, freedom does not exist under those circumstances.  And when freedom does not exist, there is no control over one’s future, a circumstance that psychologists point out is a big factor in the increasing levels of anxiety and depression in kids today.  In 2007, a group of 270 child psychologists from around the English-speaking Western world wrote an open letter to the Daily Telegraph suggesting that the loss of unstructured play time was behind the “explosion in children’s diagnosable mental health problems”.  This seems to be supported by some research:  overprotective and controlling behaviour by parents might be the mechanism by which the transmission of anxiety from parent to child (well documented elsewhere) is effected.

Risk aversion is rampant in the education system today.  There are dozens of anecdotal examples from the Phys. Ed. Department where I work.  Some kid, against instructions, climbs one of those apparatuses in the gym that fold out for climbing, and falls off:  instant ad-hoc regulation from the Board, banning the use of the apparatus.  Apparently the dangers inherent in not listening to safety instructions are overshadowed by those in everyday physical objects.  A celebrity dies on a ski slope after hitting her head – and immediately, all students in the Board are required to wear helmets for all outdoor winter activities, which means that the annual ESL field trip to go ice skating is cancelled, because the recent immigrants to Canada (many of them refugees) are often too poor to afford sports equipment.  These kids survived war zones, and now they aren’t allowed outside without helmets.  What is the actual rate of injury or death on ski hills in Canada?  Who cares?  A celebrity died, so it could happen to anybody, right?  The list of acceptable activities in Gym class is steadily shrinking.  Statistics (otherwise known as facts) play no apparent role in Board decisions of this type; only gut feelings of fear and probable danger hold sway.  Some simple research would tell you how many times an injury has occurred during a particular activity; divide that by the number of students who have participated in the activity, and you have some idea of the risk.  The number will rarely be zero, but if the activity has significant benefits (such as generating camaraderie, self-confidence, cooperation, etc.), it’s usually worth enduring some slight risk in order to participate.  As Dan Gardner says, saying that something could happen is a meaningless statement.  It’s the probability of that event happening which ought to guide our responses.  Though I must say that the risk of litigation over rare incidents is much higher than the risk of the incidents themselves!  This is really a problem, and ought to be considered more carefully.
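To make that arithmetic concrete (with purely hypothetical numbers, not actual Board statistics): if the records showed 3 skating-related injuries over 6,000 student outings, the risk would be 3 ÷ 6,000, or one injury per 2,000 outings (0.05%) – a figure that could then be weighed honestly against the documented benefits of the trip, rather than against the memory of a single headline.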

It hardly needs saying that the perceived risk of the effects of failure on students is exaggerated, by parents and administrators as well as by students themselves.  It is often presented as the End of Dreams:  a shut door to the future, equivalent in many cases to the loss of hope.  The self-absorption of many of us in the field of education astounds me daily.  Every person reading this probably knows at least one high school dropout who went on to live a perfectly happy and productive life.  The entrepreneurial world is full of them:  Angelfire.com lists 755 notable elementary- and high-school dropouts on what it claims is the most comprehensive list ever compiled on the subject; it includes 25 billionaires, 8 U.S. Presidents (that’s about 18% of the total number of Presidents ever!), 28 knighthoods, 55 bestselling authors, 10 Nobel Prize winners, and an astronaut.  The number is of course tiny compared to all the students who did graduate, but there are plenty of non-graduates living good, though non-spectacular, lives all over the world.  The increased expectation for children to attend university in Canada has had some serious effects on schools and on society, according to James Côté and Anton Allahar, authors of Ivory Tower Blues.

In universities, as well as in high schools, it has led to remarkable grade inflation.  The Ontario Scholar bursary, given to students who graduate with an average of 80% or better, is now awarded to over 40% of all graduates, making it nearly meaningless.  Back in the 1960s, when it was conceived, only about 5% of students managed it.  At the same time, professors’ satisfaction with the knowledge base of undergraduates is steadily decreasing.  A big part of this is due to the sheer numbers of students attending university; attendance at postsecondary institutions has increased over 900% since the 1950s, making undergraduate students about the same percentage of the population in 2004 as high school students were back in 1950 (Ivory Tower Blues, p.26).  And with grade inflation comes credentialism, where a diploma or degree is seen either as an end in itself (and not the learning that earns the degree), or else as a stepping stone to later employment or social success.  Neither of these takes into account that intrinsic motivation for learning, in other words genuine, applied, and focused attention and interest in a subject, is the only real way that long-term brain mapping is accomplished (what we might call actual learning).  Goal-oriented practices such as focusing on diplomas or even on grades have been clinically shown to actually decrease success in academic pursuits (see Alfie Kohn’s article, “From Degrading to De-Grading”, High School Magazine, March 1999, among others; available at Alfiekohn.org).  The problem is that they make you focus beyond what you’re doing to the activity’s results, and even beyond that, to the consequences of those results.  It’s a distraction.  And our society is good at distraction.  Note that being focused on the present, that zen-like Eastern mindset, is once again absent from the Western picture.

 I have actually had a principal tell me that I was not “getting the big picture”, which to her meant the four-year career of a student through high school to a diploma.  She had no real answer to my suggestion that a “big picture” ought reasonably to include the long-term well-being of students once they leave our halls.  Teachers, whose understanding of the process of learning is generally considerable, are the only ones who seem not to be as affected by this anxiety — that said, there is an enormous amount of pressure on educators not to assign failing grades.

The practice of “Social Promotion” is badly understood by those within and without the system of education.  It is based on studies which appeared to show a correlation between being held back a year and eventually dropping out of the system.  But a basic understanding of the term ‘correlation’ would help to disentangle some of the angst:  ‘correlation’ does not imply ‘causation’.  That is, one might expect to find that students who are disengaged from the learning process or from the environment of school for reasons of predisposition, stresses at home, a lack of support, etc. are the ones who are most likely both to fail courses and eventually to drop out altogether.  The one does not necessarily cause the other to happen.  And yet many students are passed by administrators (often over the objections of subject teachers) despite the fact that they have not mastered the material covered by the course, on the assumption that failing them would damage their self-image.  This has snowballing effects up the various grades and into universities, where professors less and less often report satisfaction with the skills and knowledge base of undergraduates.  Despite studies which have shown that the causal link between repeating a grade and dropping out is tenuous at best, it might be true that the social cost of failing a grade and being held back is real, at least to some degree.  There is a maelstrom of debate about this, of course, but even assuming it does exist, it would seem to me to be more of a problem with the whole process of segregating students by age in the first place, rather than with the question of whether or not they are going to be left behind by their peers.  And there are good indications that the practice of passing people who know that they do not deserve to pass creates problems in self-esteem, which good psychologists know has to be genuine and earned in order to be beneficial.  Or, as James Côté explains, it’s a difference between self-esteem and self-efficacy:

“The problem with the feel-good pedagogy of self-esteem is that it leads to neglect of basic pedagogical principles of learning and progressive skill acquisition.  In contrast to rewarding everyone regardless of how well the job is done, when a student learns the rudiments and masters the elements of a skill or area of knowledge, that person also acquires a sense of self-efficacy[:] a sense that one can accomplish things and that those things are under one’s control.  [It] is thus a form of personal agency […] from this experience follows a realistic sense of self-esteem, and this sense of self-esteem is reinforced with every efficacious experience.  […]  People with high self-esteem, but low self-efficacy, must rely on continual feedback from others.”  (Ivory Tower Blues, p. 70)

Our grandparents weren’t so risk averse.  “With the proliferation of graded schools in the middle of the 19th century, retention became a common practice.  In fact, a century ago, approximately half of all American students were retained at least once before the age of 13” (Rose, Janet S., et al., “A Fresh Look at the Retention-Promotion Controversy”, Journal of School Psychology, vol. 21, no. 3, Fall 1983, pp. 201–211).  But this was in the days before the strange practice of age-apartheid in modern schools.  In the one-room schoolhouse, the older children provided behavioural models for the younger kids, as well as helping to teach them the curriculum.  And if there’s one thing I have found out over more than a decade of teaching, it’s that if you want to know a subject well, you should teach it to someone else.

Remember the numbers a few paragraphs back?  Undergraduate registration has risen 900% in 60 years, largely as a result of the intellectual “arms race” of the Cold War.  The idea was that the supply of a large educated class would produce its own demand – but it didn’t.  Students and parents are frequently pushed (not pulled) toward university educations by the rampant credentialism that tells them a degree is a passport to a good, white-collar job.  But while the number of undergraduates increased roughly a hundred and fifty times over the last hundred years, the population of Canada increased only six times during that same period, and the number of white-collar jobs (the supposed extrinsic aim of such an education) grew by only about 60%, and sits today at only about 16% of all jobs.  The story is a fib, in other words, and it’s one that causes disengagement and an erosion of academic values, as well as a devaluation of the trades.  Students are in university for the wrong reasons, and even if they don’t drop out or fail in their first year (which nearly half do), there’s no guarantee of a job in their field after they graduate.
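
To make the mismatch concrete, here is a minimal back-of-the-envelope sketch that simply restates the growth figures quoted above.  The inputs are the numbers cited in this paragraph (via Ivory Tower Blues), not independently verified, and the variable names are mine.

# Back-of-the-envelope comparison of the growth figures quoted above.
# All inputs are the numbers cited in the paragraph; none are verified here.

undergrad_growth = 150      # undergraduates: roughly 150x over ~100 years
population_growth = 6       # Canadian population: roughly 6x over the same period
white_collar_growth = 1.6   # white-collar jobs: up about 60%

# Degrees per capita have grown far faster than the population...
degrees_per_capita_growth = undergrad_growth / population_growth   # = 25x

# ...and far faster than the jobs those degrees are supposed to secure.
mismatch = degrees_per_capita_growth / white_collar_growth         # ~ 15.6x

print(f"Degrees per capita grew about {degrees_per_capita_growth:.0f}x")
print(f"That is about {mismatch:.0f}x the growth in white-collar jobs")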

So, those fears of a dark future without a high school diploma or a university degree are pretty much just that: fear.  Whatever basis in reality they have is largely a self-fulfilling prophecy, with little bearing on the actual state of affairs in Canada.  But when you have a massively risk-averse culture, and public policy that is too often based on emotions or ideology rather than research, the result is a chaotic mess of anxieties, confusion, and artificial pressures on students and teachers alike.  Set that against the beneficial effects of a growth mindset – one that takes failure for granted, and actually depends on it for improvement and development – and you have a conundrum.

That P.D. session I mentioned in the first of the blog entries under this title reinforces the point: the well-meaning ex-clergyman wanted to spare children the pain of failure, and in doing so took all the responsibility for their success onto himself.  This may have been good for his own sense of martyrdom, or for a self-esteem based on his perceived heroism, but it does little for the kids it’s supposed to help.

Students don’t just survive failure.  They need it in order to learn.  And our overprotective attitude toward failure hurts them in the long run.


Ten Things Our Grandparents Got Right #3: They allowed us to fail

Part Three:  Individualism

Individualism in thought has long been a hallmark of the West.  Much has been made of the contrast between the Confucian, community-oriented thinking of the East (broadly defined as Asia) and the Aristotelian, individual-oriented thinking of the West (more or less Europe and its colonies).  But studies by Richard Nisbett and others have demonstrated that these styles of thinking are also styles of perception!  That is, when Easterners and Westerners are shown the same image, they will later (on average) report having seen different things.  Asian observers tend, for example, to focus on more holistic, relationship-oriented patterns in the images, while Europeans and Americans will remember having seen discrete objects, and usually only the more prominent ones in terms of size and placement.  As an example, consider an image from Nisbett’s enlightening book, The Geography of Thought, showing a cow, a chicken, and some grass.  Does the cow more naturally associate itself in your mind with the grass or with the chicken?  If you think like a Westerner, you’ll categorise, and assume that the two farm animals go together.  If you think more like an Asian, you’ll naturally assume that the relationship between the cow and its food is more important.

This goes very deep, and points to the massive impact that culture has on learning.  Western thought tends to view things in isolation: Western medicine is focused on fixing things that have broken, and you pay your doctor when he has ‘fixed’ you.  Eastern medicine is focused on the holistic idea of health, and you pay your doctor while you are well, withholding payment only when you fall ill.  It’s his job to keep you healthy, after all!  It all depends on what we’re conditioned to pay attention to.  Consider the (now hackneyed) “attention test” video, in which the viewer is asked to count how many times a basketball is passed between players.  While focused on this task, most people are absolutely blind to other parts of the video that are in plain view.  If you haven’t had a chance to see it yet, it’s worth trying.

Teaching, in my opinion, is the subversive activity with which we can free ourselves from entrenched patterns of thought and perception.  At least, it should be.

However, since the 1970s or earlier, there has been a trend toward educating Western children in a spirit of increasing and enforcing their individualism.  No doubt this had sensible origins: probably it was pushback against the kind of Stalag-like schools of the 1940s and 50s, where having your hair touch your collar was grounds for suspension.  I hope that nobody today really thinks that caning children in schools is a good idea, but there seems to have been (as is SO often the case in education) a too-broad generalisation of good research, resulting in bandwagoning and foolish ideas.  In the 1950s, the suggestion that parents ought to have more of a say than teachers in matters of school discipline was laughable, a minority opinion that would have ostracised those who espoused it.  Now that it is the norm, the very same opinions that would have placed somebody within the acceptable majority 60 years ago are the ones that could get you excluded from a conversation today.

But the changeover was not graceful, nor particularly well informed.  The Human Potential Movement, which many critics (such as Jean Twenge) point to as the origin of the “self-esteem” epidemic we are currently caught in, was not originally such a one-trick pony.  Aside from self-esteem (which used to be a purely clinical term, unknown to any but psychologists), it espoused ideas such as learning to be in the moment and to appreciate the here and now.  It advocated that individuals see themselves, and act, as part of a community.  Within those communities, it suggested that there be an effort to generate positive social change.  And, tellingly, it insisted that we try to have compassion for others.  Somehow (I suspect because of the way ideas are passed on by people who have not actually read the original documents in which they are expressed), “self-esteem” overshadowed all of those other laudable goals, to the extent that modern students’ capacity for empathy is now at its lowest recorded level.  They are cut off from understanding the feelings of others.  Of course, this isolation reinforces our inability to judge risk effectively: when we have only our own emotional reactions to go by, and when community is no longer available as a sounding-board, we are stuck with our own fears and with the media, which of course capitalise on them.  It is interesting to note that the forgotten precepts of the Human Potential Movement are all what we would term “Asian”, almost Buddhist values: zen, community, compassion.  It would seem that our Western perceptions were able to remember and reinforce (perhaps through confirmation bias) only those precepts that were amenable to our preconceived modes of thought.

 The hierarchical structure of public schools also has an effect:  those wishing to ‘advance’ in their careers as administrators will have a very long, uphill battle to fight if they do not subscribe to the prevailing wisdom of self-esteem, incomplete and misunderstood as it is.  The field of education (rather ironically) is notorious in academic circles for its uncritical bandwagoning acceptance of various memes.  No doubt this is related to the control of education by politics, which in my opinion is a calamity that does more to hamper the progress of education than perhaps any other single factor.  Those of us teachers who have been in the business long enough to have seen several of these bandwagons come, go, get relabelled, and come again, are less likely to be fooled.  But the pressure from above to accept them still exists.

Finally, it must be conceded that media are commercially motivated entities.  We ‘consume’ media, and though there is a certain amount of what we call choice in this, it really is quite limited.  The ubiquitous “if it bleeds, it leads” model of news has no real alternative in our culture.  There is no “good news channel” to let us know just how unlikely it is that we will be killed or injured.  There is no newspaper that reports drops in crime rates or the absence of epidemics.  When we go to the grocery store, we feel we have choice as well: hundreds of different kinds of breakfast cereal, for example — most of them owned by a handful of corporations, and few of them significantly different in nutrition or even in taste.  The choices students and teachers make in the day-to-day running of the school system are likewise made from within a very narrow band of options, all of which support the status quo.  Something as simple as choosing when to relieve our bladders is made into a big deal, and anything that fundamentally questions the school system as it is currently run will draw unwanted ire.

I think consumers, as well as students, know that their choices are really mostly meaningless.  I think they feel it on a fundamental level, even if they can’t identify it.  I think a sense of real agency in your own life and world is absolutely essential to any feeling of well-being: you need to know that you can have a positive effect on your environment and on your life.  This is the #1 reason younger people give me for not voting: they really feel helpless, though on the surface they appear to have choice.  Ironically, of course, they truly hold the balance of power in this country: if they were to vote en masse in accordance with their conscience, by all accounts the political scene in Canada would be radically changed from what it is now.  Former General (now Senator) Romeo Dallaire makes this very point in a five-minute video that is well worth watching.

The reality of their situation does not match their perception.  In addition, the very abundance of choice is (rather counter-intuitively) making people more unhappy and angst-ridden: with so many options, the perfect choice starts to seem attainable.  Regret, self-castigation, and uncertainty plague many of the decisions people in the West make today.  In the East, where individual choice is not considered the apogee of social achievement, levels of anxiety are lower, except where excessive parental control is involved.

My next blog entry will talk about the benefits of overcoming this risk-averse approach to education: how failure is not only an acceptable outcome, but a desirable one, when you are actually trying to learn.


Ten Things Our Grandparents Got Right #3: They allowed us to fail

Part Two:

Life with no consequences:

Let me rephrase that:  We live lives of unparalleled freedom from disease, accident, injury, and danger.  The kinds of immediate consequences our distant ancestors had to live with are mostly gone.  We are less likely to get sick or to die (or to be eaten by a lion) than any group of humans in the history of the human race.  Murder, war, and in fact all crime are down: way down.  It’s hard to believe this when all you see on the news is depravity, but it’s true.  Cancer is down.  We’re living longer and healthier.  Even the threat of car crashes, the #1 killer of young people in Canada, is declining.  Car crashes became the #1 killer only because the previous champion, disease, is so much less likely today, and they, too, grow less likely as time goes on.  We ought to be the most emotionally secure generation in history, and yet anxiety, particularly in children, is on the rise.  Our fears have become abstract, and it’s difficult to learn concrete lessons from abstractions.  We are, in fact, just abysmally, shockingly bad at understanding how risk works.  But it’s not entirely our fault: our brains are working against us.

Dan Gardner, the author of the excellent book Risk: The Science and Politics of Fear, points out that psychological research by Paul Slovic and others indicates that the brain uses two separate systems to assess danger.  System One, which Gardner terms “Gut”, uses quick-and-dirty techniques that would have been reinforced concretely on the savannah thousands of years ago: somebody mentions lions, and you remember that a tribe member was killed by a lion within living memory.  So, avoid the tall grass.  Makes sense.  Ancient humans who followed this rule were more likely to survive and pass on their genes.  Skeptics were admirable, but dead.  System Two, “Head”, rationalises and tempers the reactions of the “Gut” system, but cannot make the way we feel about danger truly commensurate with what the statistics say about our safety.  Is there really likely to be a lion in the grass today?  Has anyone seen a lion lately?  Ramp the fear down a tad, but not down to levels that reflect actual risk percentages.

The brain has not had time, in evolutionary terms, to learn to deal with the kinds of abstract ‘dangers’ we face from day to day, such as deadlines or UV rays.  Our fear about the safety of our children falls mostly under the “Gut”’s purview.  Paul Slovic compiled a list of 18 characteristics of activities or technologies that universally raise the perception of risk in people’s minds, regardless of the actual circumstances.  Children are right up there with “Accident History” and “Catastrophic Potential”.  The media (also on the list) are, unfortunately, complicit in exaggerating risk, and parents are so terrified (for example) about their kids being abducted by strangers that it never occurs to them that the actual chance of that happening in Canada is about 1 in 5.8 million!  That is far below the threshold at which a risk is normally dismissed entirely; it is considered zero risk, a risk de minimis in probability studies: a danger so minute that it disappears statistically.  But think of how much public policy and attitude is based on the idea that the sexual attack and abduction of kids is common!

Or school shootings.  To my knowledge, there have only ever been ten acts of gun violence in Canadian schools since 1902.  The total death toll was 26, more than half of which came from a single incident at the École Polytechnique in Montréal.  One came from a school in Alberta where a friend of mine was teaching, eight days after the Columbine shooting in the U.S.  If you estimate the total number of student-years in Canadian schools since 1902 (hard to tell: there are 5.2 million kids in school today, NOT counting universities and colleges; multiply that by 110 years and skim a good deal off for the smaller population in previous generations, and you still get several hundred million), and set those 26 unfortunate people against that number, the chances of dying in a school shooting in Canada are too small for my calculator to display without an error message.  But every year we now have to suffer through “Lockdown Drills”, officiated by the police, in which we all have to pretend there’s a maniac in the halls.  Time is wasted, kids are frightened, and money is spent to no good purpose.  Remember, all violent crime is on the DEcrease, very dramatically.  Polls show that children’s safety at school is the single most common crime-related concern, and yet the school environment is statistically, indisputably, the safest place for kids – much safer than the home or the street.  In an attempt to rein in emotional overreaction, the APA issued a resolution in 2006 calling for the modification of “zero tolerance” discipline policies, because they had been shown to increase bad behaviour as well as drop-out rates!
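
For what it’s worth, here is a minimal sketch of that back-of-the-envelope calculation, using only the figures quoted above; the discount applied for smaller enrolments in earlier decades is my own illustrative assumption, not a researched number.

# Rough estimate of the per-student-year risk of dying in a school shooting
# in Canada, using the figures cited in the paragraph above.  The enrolment
# discount for earlier, smaller generations is an illustrative assumption.

students_today = 5_200_000       # K-12 enrolment cited above
years = 110                      # roughly 1902 to the time of writing
past_enrolment_factor = 0.7      # assumed average enrolment relative to today
deaths = 26                      # total deaths cited above

student_years = students_today * years * past_enrolment_factor   # ~400 million
risk_per_student_year = deaths / student_years                    # ~6.5e-8

print(f"Exposure: about {student_years:,.0f} student-years")
print(f"Risk: roughly 1 in {1 / risk_per_student_year:,.0f} per student-year")

Even with a generous estimate, the result lands somewhere around one in fifteen million per year of schooling, which is exactly the point of the paragraph above.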

But no principal or Education Minister could advance their career by quoting the astronomically low probability of injury or death at school.  They’d be accused of ignoring the ‘problem’, as if one existed.  In my experience, principals spend half of their time being afraid of parents, and the other half being afraid of lawyers.  The way the laws in Ontario are written, the buck stops squarely with them in the case of any major incident involving students or teachers under their purview.

This leads me to the subject of the fear of litigation.  No matter how outlandishly rare or unlikely a scenario, when you have 7 billion people on the planet, chances are it will happen somewhere.  And chances are, when it does, somebody will be sued, with very little regard for the simple truth that sometimes accidents happen and it need not be anybody’s fault.  Certainly, voicing the opinion that, even when it IS somebody’s fault, the benefits of participating in a risky activity can outweigh the occasional harm can get you branded as a child-hater.  “Acceptable risk” is a phrase we don’t hear enough of in public discourse.  It’s important to realise that there is no such thing as zero risk.  There is no such thing as perfectly safe – there are only degrees of risk.  And yet Daniel Krewski, an epidemiologist at the University of Ottawa, has conducted surveys showing that a majority of Canadians believe a risk-free world is possible, and expect the government to provide it for them.  In a universe where the total safety of all children at all times is assumed to be not only possible but necessary, any harm is the fault of human error in judgement, and an unforgivable sin.  Like the Puritans, our society has no mechanism at all for the expiation of sin – other than the sacrifice of a scapegoat.  Arthur Miller, in Act One of his play The Crucible, put his finger on it precisely:

“Ours is a divided empire,” he says, “in which certain ideas and emotions and actions are of God, and their opposites are of Lucifer. It is as impossible for most men to conceive of a morality without sin as of an earth without ‘sky’. Since 1692 a great but superficial change has wiped out God’s beard and the Devil’s horns, but the world is still gripped between two diametrically opposed absolutes. The concept of unity, in which positive and negative are attributes of the same force, in which good and evil are relative, ever-changing, and always joined to the same phenomenon – such a concept is still reserved to the physical sciences and to the few who have grasped the history of ideas”. 

It reinforces our own sense of righteousness when we blame others for accidents that might have been inevitable, unpredictable, or simply unlikely.  The lawsuits that result from such common errors in thinking only add to the general insanity.  The constraints of anxiety, paperwork, and expectations on even high school field trips are so crippling that I am amazed any of my colleagues still go through with them.  I am told that teachers, late in life, have a higher rate of bladder-related health problems, because we are never to leave our classrooms, even briefly, to relieve ourselves without finding somebody (who?) to watch our students – for fear that “something may happen” and we’d be on the legal hook.  Most of us just hold it.  It occurs to me that if our society were more focused on compassion and empathy, we would reduce the need, or the compulsion, for litigation, and I am sure that our collective anxiety would lessen enormously.  I am a little disappointed that this is never the subject of public discussion… which brings me to the next point, in the next posting, on the subject of individualism and the media.
