
Ten Things Our Grandparents Got Right #5: They Didn’t Treat Teenagers Like Infants

Picking up from my last post’s ellipsis, I feel I need to address the infantilisation and outright ageism displayed by adults toward teenagers. This rather repugnant reincarnation of genetic determinism (for which there is no good evidence, and which Stephen Jay Gould spent much of his career combatting) is particularly dunderheaded when you take into account the plasticity of the brain, only now beginning to be understood. “Don’t talk for more than ten minutes on any subject”, we were told in teachers’ college, “because the adolescent brain has an attention span that tiny, and there is nothing anyone can do about it”. The contradictory complaint that attention spans are getting shorter, often repeated by the same people, never seemed to present a serious challenge to this accepted wisdom. Surely if attention spans can shrink, they can also grow.

Under the deterministic model, adolescents’ potential is viciously undercut, and a condescending attitude of pandering to existing biases, tastes, knowledge, interests, and capabilities is adopted, with real change or growth almost completely negated. “Teenagers are lazy and surly because of physiology, or perhaps hormones”, people say, though there is no evidence of this being universally true, either in the historical record or in other cultures existing today. And the list goes on: teenagers are incapable of making rational decisions because of brain chemistry, not because they are systematically denied the opportunity to practice making good decisions on a daily basis. Teenagers are not punctual because of circadian rhythms unique to them, and not because of poor sleep and nutrition habits that are actively encouraged by our society. Here’s an article that actually suggests that teens’ supposedly biologically based inability to process risk effectively might be the result of sleep deprivation! Knowing the effects of sleep deprivation on the human psyche (I am an ex-soldier), it surprises me that nobody made that link earlier. I have heard more than once the lament that “if we really wanted teenagers to pay attention, we wouldn’t hold classes before ten o’clock”. Recently, this has been challenged, the only remaining question being how such a parochial view could have survived this long. Anyone who travels outside of North America or who has read history (think Alexander the Great, Joan of Arc, Augustus Caesar, Mary Shelley, Louis Braille, et al.) will shake his head at such statements about the inevitability of “the teenage brain” and its limitations. The list of historically significant teenagers is as long as my arm, at least until about the middle of the twentieth century, when they suddenly became incapable of surviving the most basic of situations, such as wearing hats that light up. This photo is taken from the wonderful Free Range Kids blog, which also has so many first-hand anecdotes of infantilising in America that it sometimes makes me afraid to read it.

Fourteen-year-olds would burst into flames.

For starters, the concept of the teenager as a separate class of individual, or a distinct stage in life, is a very recent coinage – nobody used the term before 1941. Now the invention of the ‘tween’ is pushing it even further: that word has only been attested since 1988. Theories vary as to exactly what purpose the invention of such a repugnant, incapable figure was supposed to serve, but John Taylor Gatto, Neil Postman, and Dr Robert Epstein have some suggestions.

Postman, in The Disappearance of Childhood, argues eloquently that childhood in general is a socially constructed phenomenon. He points out that the idea of childhood, a time of life in which one is supposed to be controlled by a sense of shame and protected from such things as knowledge of adult sexuality, was a product of the end of the Middle Ages and the rise of the printed word. A boy of seven years old was, for all practical purposes besides “making love and war” (p. 15), capable of every meaningful act in Mediaeval society. He could speak and do labour, and in a predominantly oral culture these were all that was needed for maturity and inclusion in the social structure. There was neither need nor possibility in a Mediaeval culture for the keeping of secrets; privacy was hardly a concept at all, and close quarters and the lack of any need of reading skill made knowledge a general commodity.

But when the written word became the new means to record, keep, and guard the culture’s knowledge base, institutions like educational systems were needed to induct the child into the world of adults. This effectively stretched the time of childhood from seven years to the end of schooling. As Postman points out, before the 16th century, there were “no books on child-rearing, and exceedingly few about women in their role as mothers […] There was no such thing as children’s literature […] , no books on pediatrics. […] Paintings consistently portrayed children as miniature adults […] The language of adults and children was also the same. There are […] no references anywhere to children’s jargon prior to the 17th century, after which they are numerous.” (18). Children did not go to school, because there was nothing to teach them. But now the definition of childhood changed, from one based on linguistic incompetence to one based on reading incompetence. Instead of just becoming an adult by ageing, children had to earn adulthood through education – and the European states invented schooling to accomplish this new process. Childhood, as Postman notes, became “a necessity” (36).

Later, with industrialisation, threats to this newfound idea of childhood emerged. The new urban demand for factory and mine workers supported the “penal” aspects of schooling, meant to break the will of the child and accustom him to the routine labour of factory work. In response, child labour laws were introduced, enshrining the notion that childhood was sacrosanct. Though Postman sees the growth of elementary education after 1840 as evidence of the triumph of the notion of childhood over industrial capitalist concerns, J.T. Gatto sees it somewhat differently.

Gatto, an award-winning teacher who now speaks out against institutionalised education, argues that the modern American education system never outgrew its penal origins, and in fact goes further, saying that the system is set up more or less deliberately to produce the class of uncritical, bored, dissatisfied consumers that the corporate model of capitalism needs in order to flourish. Children were being actively groomed by the industrial interests shaping education systems to become not citizens or human beings, but “human resources”, to be moulded to fit something called a “workplace”, “though for most of American history American children were reared to expect to create their own workplaces.”

The subdivision of childhood into adolescence, and now pre-adolescence (the “tween” phenomenon), is something that Robert Epstein has written on. Epstein, in his book The Case Against Adolescence, picks up where Postman and Gatto leave off, during the industrialisation of America. He sees the creation of the adolescent as a kind of benevolent but destructive side effect of the social reforms that were reacting against the admitted evils of the Industrial Revolution with regard to children’s rights. The creation of institutions such as child labour laws, compulsory education, the juvenile justice system, and the age-specific restriction of “adult” activities such as driving, drinking alcohol, and smoking had, according to Epstein, the effect of isolating the child’s world from the adults’ almost totally. Adolescents are confined to a mostly (by definition) developmentally incomplete peer group, and their dependency is extended by more than a decade before they are required to enter the adult world after school – this despite the fact that their sexual maturity and mental readiness for such a transition are evident from a much earlier age. In one study, Epstein found that teenagers have ten times as many restrictions placed upon their behaviour as ordinary adults, and twice as many as felons and soldiers! Such restrictions rose in exact parallel with industrialisation, and jumped significantly after World War II.

Epstein picks up an argument from Postman, and suggests that the studies that purport to “show” a biological cause for the supposedly innate surliness and incapacity of teenagers are flawed, in that they show only correlation, not causation. In fact, given the plastic nature of the brain, I would expect to find that such correlations run backwards: the social restrictions on teen behaviour are to blame for the state of their brains. The argument that brain scans “prove” the innate uselessness of teenagers in such areas as risk assessment or impulse control sounds to me about as useful as “scanning” the musculature of a teenager who has never lifted weights and declaring him “unfit” and biologically incapable of ever being an athlete.

In fact, the list of famous “characteristics” of teens proves to be mostly made up. Margaret Mead points out in her studies of adolescents in Samoa that the traditional “storm and stress” of North American teenage development is nowhere to be found in that culture, or any other preliterate culture. There is no term for adolescence in the majority of such societies. Even the list of undesirable teen behaviour in our own society, summarised by Philip Graham as “identity confusion, extreme moodiness and high rates of suicide, violent discord with parents, aggressive behaviour, intellectual and emotional immaturity, risk taking, and sexual promiscuity arising from the raised secretion of sex hormones” has been shown to be common to less than 20% of the age group in question. Hardly a useful list of descriptors, then.

And as for other biased assumptions about teenage behaviour, such as the idea that teenagers are addicted to, and misusing, technology? It turns out we ought to take a good look in the mirror here too: one study found that adults abuse technology at a higher rate than their kids do.

Why are these ideas so pervasive and so tenacious? The original study of adolescence was done in 1904, by G. Stanley Hall. He observed the turmoil on American streets due to industrialisation and massive waves of immigration, unsupported by proper social structures. Concluding from this that all adolescents necessarily exhibited those nasty characteristics mentioned above, he drew on the now-long-debunked theory of biological “recapitulation”, in which the development of the individual mirrored the development of the species. In that model, adolescence “recapitulated” a savage, pre-civilised phase of the development of Homo sapiens, and it would be expected that such a period would bring with it turmoil. He borrowed the German Romantic idea of Sturm und Drang and applied it universally to all teens, claiming biology as the cause. Though the field of biology has long since abandoned such theories, the general public has not kept pace.

Of course, I have also found through my years of teaching that what you expect of a person is generally what you will get. As Eliza Doolittle says in Shaw’s Pygmalion,

“You see, really and truly, apart from the things anyone can pick up (the dressing and the proper way of speaking, and so on), the difference between a lady and a flower girl is not how she behaves, but how she’s treated. I shall always be a flower girl to Professor Higgins, because he always treats me as a flower girl, and always will; but I know I can be a lady to you, because you always treat me as a lady, and always will.”

Sadly, we live in a culture where the treatment of young adults is infantilising (do the test here!), demeaning, controlling, and stultifying. Perhaps it’s not entirely adults’ fault, though; as Postman points out, adults are the product of the same process of education that we subject our children to. Where literacy was once the dividing line between childhood and adulthood (an idea enshrined in the American Revolution’s notion of creating a free state), industrialisation brought with it technologies that made actual familiarity with the written word obsolete. The telegraph, radio, television, and the Internet have taken over from where literacy left off, producing generations of adults who have had unfettered access to information, but no sequential, age-appropriate introduction to discerning its meaning. The very definition of childhood as an idea, and not just the biological stage it is now conceived to be, depended on being slowly inducted into greater knowledge through increasingly complex mastery of literacy. Now, who actually reads anything by people like Barack Obama, Stephen Harper, George Bush, or Ronald Reagan? Would they be rewarded if they did? Though our Canadian society has succeeded in producing generations of functionally literate people, we are increasingly reverting to a Mediaeval-style oral culture, in which even people who can read generally do not, and most of those who do cannot do so very well. The line between childhood and adulthood is blurred, and brain scans show adolescent development continuing well into the mid-twenties of North American subjects – “coincidentally”, about the same time as many post-secondary students are leaving school. I would dearly love to see brain scans of people other than the Westernised college students who are the typical subjects of such studies. My intuition is that they would be vastly different at comparable ages.

The assumption that the tastes and interests of a teenager are equally fixed, never to grow, was made clear to me by an English grammar textbook much in use several years ago, in which every sentence, in order to be palatable to what the publishers assumed teenagers’ interests to be, had to have something to do with skateboarding. To me, this attitude is ethically no better than the old idea of the genetic inferiority of slaves, and has just about as much science behind it. The problem is, with bandwagoning, it’s difficult to get off the wagon, or out of its way. I once had a principal (a fellow with a science background, who ought to have known better) who hawked these unpleasant wares at every staff meeting and P.D. day, much to my annoyance. Years later, after his retirement, he admitted to me that he had known full well all along that it was bunk, but claimed that he found it a useful tool for management. He told me that “we have to work with something” – a foolish imperative that always makes me think of the show Yes Minister, where it was put in syllogistic form:

1. Something must be done.
2. This is something.
3. Therefore, this must be done. 

Teaching is often a surreal experience.

We were presented in Teachers’ College with the interesting model of Howard Gardner’s Multiple Intelligences, and told that we must adjust our teaching techniques to all of them, regardless of their relevance or applicability, because “students can only learn in certain ways”. Every lesson had to touch on as many of the Intelligences as possible, and administrators’ evaluations of teachers would be based on a handy checklist and cursory observation. Imagine trying to incorporate kinaesthetic learning into a lesson on punctuation or grammar! This led to all kinds of silliness, like hopping up and down to simulate semicolons, from which the better teachers miraculously managed to salvage some memorable learning experiences. Since then, Gardner’s theory has come under closer scrutiny, and has been largely debunked, at least in the absolutist terms under which it was adopted in schools. Here’s a quick video outlining the basic flaws in the theory.

Far from being deterministic learning “styles”, they appear to be mere preferences, and there is no good evidence that pounding a round lesson into one of its square holes does anything to help learning at all. Instead, a good teacher will understand which kinds of tools are applicable and effective, given the nature of the ideas or skills being taught. In other words, according to Professor Daniel Willingham, “While there’s little evidence that matching one’s teaching style to one’s students’ learning styles helps them learn, there’s much stronger evidence that matching one’s teaching style to one’s content is wise.”

Why this obviously silly meme has stuck around for so long and had such an impact on systems of education is a bit of a mystery, but I have a few observations that might shed some light. The first half of the equation comes from good intentions, I think: most teachers and educators feel a calling and a social responsibility to their profession. We’re often caring to a fault, and this is an example of the ‘fault’: our predisposition to believe that our job involves finding the “hidden learner” in every student blinds us to the lack of evidence for this particular incarnation of that impulse. A kind of confirmation bias, if you will. The idea of Multiple Intelligences (which is a description of ability, not of style), bent slightly to suit our notion of being teachers who care deeply about individual students’ learning, is powerfully appealing. We want to believe in it because it reinforces beliefs we have brought to our profession, but regardless of how admirable those beliefs might be from an ethical standpoint, if they do not fit the actual facts, they ought to be altered or abandoned. Recently, a study by Daniel B. Klein of George Mason University uncovered what he thought was a type of intellectual bias in liberal-minded respondents to a survey. When it was pointed out to him that the survey he had provided might itself be biased, he re-wrote it, and found bias in respondents of a conservative bent. He then wrote with some humility and intellectual frankness about his own confirmation bias – two attributes that my profession could certainly benefit from.

The second part of the reason this meme is so prevalent in schools, in my opinion, is not that it is correct, nor that it is touted by teacher-ed texts (Daniel Willingham has looked at the syllabi of teacher education courses and found no evidence of it being ‘officially’ sanctioned), but that it suits the management models used to evaluate teaching ability. When a principal is charged with evaluating the prowess of the teachers in his or her school, and has to report those findings upward to his or her own “managers”, the same silliness happens as when we are evaluating our students: we want to fall back on measurables. It’s a lot easier to carry a clipboard into a teacher evaluation and tick off “yes” or “no” to a question like “Does the teacher address the students’ learning styles individually?” than to make complex judgements about something as fluid and complicated as evaluating “good teaching”. So it’s partly a question of efficiency, just as being forced by the requirements of reporting student learning (a vastly complex and mostly abstract concept) in terms of percentage grades results in us asking stupid questions on tests that focus only on measurable, concrete facts, rather than on the more important aspects of higher-level thinking.

I once asked a question on a test that required students to place in order several events from a novel we were studying in class: something that assessed both their memory of the details of their reading and their understanding of the cause-and-effect relationships between the events. I was forced to abandon this perfectly valid question because it was essentially ungradeable – as soon as one event is out of order, a domino effect takes place and makes it impossible to give a numerical evaluation of how close to being ‘right’ the student was. If anecdotal comments, or even a conversation, were the method of relaying to a student the quality of their understanding, I wouldn’t have lost a potentially valuable assessment tool. A managerial model of reporting quantifiables upward along a chain of command, ultimately to a political bureaucracy, just does not work when dealing with something as complex as human learning.
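To make that domino effect concrete, here is a toy sketch in Python. The event labels and both scoring schemes are invented for illustration – they are not from any real test – but they show how a student who knows the whole sequence, yet misplaces a single early event, scores zero under strict position-matching even though most of their relative understanding is intact.

```python
# Toy illustration of the "domino effect" in grading an ordering question.
# Event labels and scoring schemes are hypothetical, for illustration only.

correct_order = ["A", "B", "C", "D", "E"]   # the order of events in the novel
student_order = ["B", "C", "D", "E", "A"]   # one early event misplaced to the end

# Strict position-matching: the single slip cascades, and the score collapses to zero.
exact_matches = sum(1 for s, c in zip(student_order, correct_order) if s == c)
print(f"Strict position score: {exact_matches}/{len(correct_order)}")   # 0/5

# Counting correctly ordered pairs instead shows that most of the relative
# understanding is intact -- exactly what the single strict number hides.
pairs_correct = sum(
    1
    for i in range(len(correct_order))
    for j in range(i + 1, len(correct_order))
    if student_order.index(correct_order[i]) < student_order.index(correct_order[j])
)
total_pairs = len(correct_order) * (len(correct_order) - 1) // 2
print(f"Correctly ordered pairs: {pairs_correct}/{total_pairs}")         # 6/10
```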

Partly, though, it’s more insidious than just the self-perpetuating efficiency of a system. Sadly, the two halves of the equation often come together in unsavoury ways: when the principal asks “Is the teacher hitting enough of the learning styles in his lessons?”, the implied subtext is often “Is the teacher caring enough toward his students?” This creates a lot of pressure for the meme to be accepted, or at least left unquestioned, in teacher circles, at least when administration is present. It’s an unspoken type of ad hominem: between the lines is the question, “Do you really care about children?” I think this is how a lot of silly educational buzzwords are preserved, actually: they’re tied to teacher performance reviews. A lot of it is just lip service, as is suggested by the number of teachers who will question the meme in private conversation, but it still has an effect.

I am calling here for greater intellectual and moral courage on the part of teachers to stand up against policy that is not evidence-based. Here in Canada, under a government that is apparently actively anti-evidence, this is a tall order. But we’ve got to start.


Ten Things Our Grandparents Got Right #3: They allowed us to fail

Part Three: Individualism

Individualism in thought has long been a hallmark of the West. Much has been made of the contrast between the Confucian, community-oriented thinking of the East (broadly defined as Asia) and the Aristotelian, individual-oriented thinking of the West (more or less Europe and its colonies). But studies by Richard Nisbett and others have demonstrated that these styles of thinking are also styles of perception! That is, when Easterners and Westerners are shown the same image, they will later (on average) report having seen different things. Asian observers tend, for example, to focus on more holistic, relationship-oriented patterns in the images, while Europeans and Americans will remember having seen discrete objects, and usually only the more prominent ones in terms of size and placement. As an example, consider the image from Nisbett’s enlightening book, The Geography of Thought, showing a cow, a chicken, and some grass. Does the cow more naturally associate itself in your mind with the grass or the chicken? If you think like a Westerner, you’ll categorise, and assume that the two farm animals go together. If you think more like an Asian, you’ll naturally assume that the relationship between the cow and its food is more important.

This goes very deep, and points to the massive impact that culture has on learning. Western thought tends to view things in isolation: Western medicine is focused on fixing things that have broken, and you pay your doctor when he has ‘fixed’ you. Eastern medicine is focused on the holistic idea of health, and you pay your doctor while you are well, withholding payment only when you fall ill. It’s his job to keep you healthy, after all! It all depends on what we’re conditioned to pay attention to. Consider the (now hackneyed) video of the “attention test”, in which the viewer is asked to count how many times a basketball is passed between players. While focused on this task, most people are absolutely blind to other parts of the video that are in plain view. If you haven’t had a chance to see it yet, it’s worth trying. Teaching, in my opinion, is the subversive activity with which we can free ourselves from entrenched patterns of thought and perception. At least, it should be.

However, since the 1970s or earlier, there has been a trend towards educating Western children in the spirit of increasing and enforcing their individualism. No doubt this had sensible origins: probably it was pushback against the kind of Stalag-like schools of the 1940s and 50s, where having your hair touch your collar was grounds for suspension. I hope that nobody today really thinks that caning children in schools is a good idea, but there seems to have been (as is SO often the case in education) a too-broad generalisation of good research, resulting in bandwagoning and foolish ideas. In the 1950s, the suggestion that parents ought to have more of a say than teachers in matters of school discipline was laughable, a minority opinion that would have ostracised those who espoused it. Now that it is the norm, the very same opinions that would have made somebody PART of the acceptable majority 60 years ago are the ones that could get you excluded from a conversation today.

But the changeover was not graceful, nor particularly well informed. The Human Potential Movement, which many critics (such as Jean Twenge) point to as the origin of the “self-esteem” epidemic we are currently caught in, was not originally such a one-trick pony. Aside from self-esteem (which used to be a strictly clinical term, unknown to any but psychologists), it espoused ideas such as learning to be in the moment and to appreciate the here and now. It advocated that individuals see themselves, and act, as part of a community. Within those communities, it suggested that there be an effort to generate positive social change. And, tellingly, it insisted that we try to have compassion for others. Somehow (I suspect because of the way ideas are passed on by people who have not actually read the original documents in which they are expressed), “self-esteem” overshadowed all of those other laudable goals, to the extent that modern students’ capacity for empathy is at an all-time recorded low. They are cut off from understanding the feelings of others. Of course, this isolation reinforces our inability to judge risk effectively: when we have only our own emotional reactions to go by, and when community is no longer available as a sounding-board, we are stuck with our own fears and with the media, which of course capitalises on them. It is interesting to note that the forgotten precepts of the Human Potential Movement are all what we would term “Asian”, almost Buddhist, values – zen, community, compassion. It would seem that our Western perceptions were able to remember and reinforce (perhaps through confirmation bias) only the precepts that were amenable to our preconceived modes of thought.

The hierarchical structure of public schools also has an effect: those wishing to ‘advance’ in their careers as administrators will have a very long, uphill battle to fight if they do not subscribe to the prevailing wisdom of self-esteem, incomplete and misunderstood as it is. The field of education (rather ironically) is notorious in academic circles for its uncritical bandwagoning acceptance of various memes. No doubt this is related to the control of education by politics, which in my opinion is a calamity that does more to hamper the progress of education than perhaps any other single factor. Those of us teachers who have been in the business long enough to have seen several of these bandwagons come, go, get relabelled, and come again are less likely to be fooled. But the pressure from above to accept them still exists.

Finally, it must be conceded that media are commercially motivated entities. We ‘consume’ media, and though there is a certain amount of what we call choice in this, it really is quite limited. The ubiquitous “if it bleeds, it leads” model of news does not really have an alternative in our culture. There is no “good news channel” to let us know just how unlikely it is that we will be killed or injured. There is no newspaper which reports drops in crime rates or the lack of epidemics. When we go to the grocery store, we feel like we have choice as well: hundreds of different kinds of breakfast cereals, for example — most of which are owned by a handful of corporations, and few of which have significant differences in nutrition or even in taste. The choices students and teachers make in the day-to-day running of the school system are all made from within a very narrow band of options, all of which support the status quo. Something as simple as the choice of when we will relieve our bladders is made into a big deal, and anything that fundamentally questions the school system as it is currently run will draw unwanted ire. I think consumers, as well as students, know that their choices are really mostly meaningless. I think they feel it on a fundamental level, even if they can’t identify it. A sense of real agency in your own life and world is absolutely essential for any kind of feeling of well-being: you need to know that you can have a positive effect on your environment and on your life. This is the #1 reason younger people give me for not voting: they really feel helpless, though on the surface they appear to have choice. Ironically, of course, they truly hold the balance of power in this country: if they were to vote en masse in accordance with their conscience, by all accounts the political scene in Canada would be radically changed from what it is now. Former General (now Senator) Romeo Dallaire makes this very point in a five-minute video.

The reality of their situation does not match their perception. In addition, the very presence of so much choice is (rather counter-intuitively) making people more unhappy and angst-ridden: with so many options, picking the perfect one comes to seem possible. Regret, self-castigation, and uncertainty plague many decisions made by people in the West today. In the East, where individual choice is not considered the apogee of social achievement, levels of anxiety are lower, except where excessive parental control is involved.

My next blog entry will talk about the benefits of overcoming this risk-averse approach to education:  how failure is not only an acceptable, but a desired outcome when you are trying to actually learn.


Ten Things Our Grandparents Got Right #3: They allowed us to fail

Part Two: Life with no consequences

Let me rephrase that: We live lives of unparalleled freedom from disease, accident, injury, and danger. The kinds of immediate consequences our distant ancestors might have had to live with are mostly gone. We are less likely to get sick or to die (or to be eaten by a lion) than any previous group of humans in the history of the race. Murder, and war, and in fact all crime is down: way down. It’s hard to believe this when all you see on the news is depravity, but it’s true. Cancer is down. We’re living longer and healthier. Even the threat of car crashes, the #1 killer of young people in Canada, is declining. Car crashes became the #1 killer only because the previous champion, disease, is much less likely today, and they, too, are less and less likely as time goes on. We ought to be the most emotionally secure generation in history, and yet anxiety, particularly in children, is on the rise. Our fears have become abstract, and it’s difficult to learn concrete lessons from abstractions. We are, in fact, just abysmally, shockingly bad at understanding how risk works. But it’s not entirely our fault: our brains are working against us.

Dan Gardner, the author of the excellent book Risk: The Science and Politics of Fear, points out that psychological research into risk perception by Paul Slovic and others indicates that the brain uses two separate systems to assess danger. System One, which Gardner terms “Gut”, uses quick-and-dirty techniques that would have been reinforced concretely on the savannah thousands of years ago: somebody mentions lions, and you remember that a tribe member was killed by a lion sometime in living memory. So, avoid the tall grass. Makes sense. Ancient humans who followed this rule were more likely to survive and pass on their genes. Skeptics were admirable, but dead. System Two, “Head”, rationalises and tempers the reactions of the “Gut” system, but cannot make the way we feel about danger truly commensurate with what statistics say about our safety. Is there really likely to be a lion in the grass today? Has anyone seen a lion lately? Ramp the fear down a tad, but not to levels that reflect actual risk percentages.

The brain has not had the time, in evolutionary terms, to learn to deal with the kinds of abstract ‘dangers’ that we face from day to day, such as deadlines or UV rays. Our fear about the safety of our children falls mostly within the “Gut”’s purview. Paul Slovic made a list of 18 characteristics of activities or technologies that universally raise the perception of risk in people’s minds, regardless of the actual circumstances. Children are right up there with “Accident History” and “Catastrophic Potential”. The media (also on the list), unfortunately, are complicit in exaggerating risk, and parents are so terrified (for example) about their kids being abducted by strangers that it never occurs to them that the actual chance of that happening in Canada is statistically 1 in 5.8 million! That risk is small enough to be dismissed pretty much entirely. In probability studies it would be considered zero risk, or a risk de minimis: a danger so minute that it disappears statistically. But think of how much public policy and attitude is based on the idea that sexual attack and abduction of kids is common!

Or school shootings. To my knowledge, there have only ever been ten acts of gun violence in Canadian schools since 1902. The total death toll was 26, more than half of which came from a single incident at the École Polytechnique in Montréal. One came from a school in Alberta where a friend of mine was teaching, eight days after the Columbine case in the U.S. If you estimate the total number of students in Canadian schools since 1902 (hard to tell: there are 5.2 million kids in school today, NOT counting universities and colleges; multiply that by 110 years and skim a bunch off for the smaller population in previous generations, and you still get several hundred million), and figure those 26 unfortunate people into that number, the chances of dying in a school shooting in Canada are too small for my calculator to measure without an error message. But every year we now have to suffer through “Lockdown Drills”, officiated by the police, where we all have to pretend there’s a maniac in the halls. Time is wasted, kids are frightened, and money is spent for no good cause. Remember, all violent crime is on the DEcrease, very dramatically. Polls show that children’s safety at school is the single most common crime-related concern, and yet the school environment is statistically, indisputably, the safest place for kids – much safer than the home or the street. In an attempt to rein in emotional overreaction, the APA in 2006 issued a resolution calling for the modification of the “zero tolerance” attitude toward discipline, because it had been shown to increase bad behaviour as well as drop-out rates!
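For the numerically inclined, here is that back-of-envelope estimate written out as a few lines of Python. The discount factor is an assumption, the tally counts student-years of exposure rather than distinct students, and none of it is an official statistic; the only point is the order of magnitude.

```python
# Back-of-envelope estimate of the risk described above.
# All inputs are rough assumptions for illustration, not official statistics.

students_today = 5_200_000   # K-12 students in Canada today (figure cited above)
years = 110                  # roughly 1902 to the time of writing
earlier_discount = 0.6       # crude "skim a bunch off" for smaller past enrolments

student_years = students_today * years * earlier_discount   # hundreds of millions
deaths = 26                                                  # total cited above

rate = deaths / student_years
print(f"Student-years of exposure: about {student_years / 1e6:.0f} million")
print(f"Risk per student, per school year: about 1 in {1 / rate:,.0f}")
```

However you adjust the discount, the answer stays in the one-in-millions-per-year range, which is the point.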

But no principal or Education Minister would be able to advance a career by quoting the astronomically low probability of injury or death at school. They’d be accused of ignoring the ‘problem’, as if one existed. In my experience, principals spend half of their time being afraid of parents, and the other half being afraid of lawyers. The way the laws in Ontario are written, the buck stops squarely with them in the case of any major incident involving students or teachers under their purview.

This leads me to the subject of the fear of litigation. No matter how outlandishly rare or unlikely a scenario, when you have 7 billion people on the planet, chances are it’s going to happen somewhere. And chances are, when it does, you will be sued, with very little regard for the simple truth that sometimes accidents happen and it need not be anybody’s fault. And certainly, voicing the opinion that, even when it IS somebody’s fault, the benefits of participating in a risky activity can outweigh the occasional harm can get you branded as a child-hater. “Acceptable risk” is a phrase we don’t hear enough of in public discourse. It’s important to realise that there is no such thing as zero risk. There is no such thing as perfectly safe – there are only degrees of risk. And yet Daniel Krewski, an epidemiologist at the University of Ottawa, conducted surveys in which he found that a majority of Canadians believe not only that a risk-free world is possible, but that the government should provide it for them. In a universe where the total safety of all children at all times is assumed to be not only possible but necessary, any harm is the fault of a human error in judgement, and an unforgivable sin. Like the Puritans, our society has no mechanism at all for the expiation of sin – other than the sacrifice of a scapegoat. Arthur Miller, in Act One of his play The Crucible, put his finger on it precisely:

“Ours is a divided empire,” he says, “in which certain ideas and emotions and actions are of God, and their opposites are of Lucifer. It is as impossible for most men to conceive of a morality without sin as of an earth without ‘sky’. Since 1692 a great but superficial change has wiped out God’s beard and the Devil’s horns, but the world is still gripped between two diametrically opposed absolutes. The concept of unity, in which positive and negative are attributes of the same force, in which good and evil are relative, ever-changing, and always joined to the same phenomenon – such a concept is still reserved to the physical sciences and to the few who have grasped the history of ideas”. 

It reinforces our own sense of righteousness when we blame others for accidents that might have been inevitable, unpredictable, or unlikely. The lawsuits that result from such common errors in thinking only add to the general insanity. The constraints of anxiety, paperwork, and expectations on even high school field trips are so crippling that I am amazed any of my colleagues still go through with them. I am told that teachers, late in life, have a higher rate of bladder-related health concerns, because we are told that we are never to leave our classrooms, even briefly, to relieve ourselves, without finding somebody (who?) to watch our students – for fear that “something may happen” and we’d be on the legal hook. Most of us just hold it. It occurs to me that if our society were more focused on compassion and empathy, we would reduce the need, or the compulsion, for litigation, and I am sure that our collective anxiety would lessen enormously. I am a little disappointed that this is never the subject of public discussion… which brings me to the next point, in the next posting, on the subject of individualism and the media.


Ten Things Our Grandparents Got Right #3: They allowed us to fail

Part One: Origins

I once went to a Professional Development session led by a man whose purpose, it seemed, was to instil in us a profound sense of guilt. He was a clergyman who had gone into teaching late in life upon leaving the church, and had apparently leapt into his second vocation with all the gusto he had brought to his first. Teaching, to him, was evidently a clarion call to save as many souls as he could. His passion was evident as he told us, lip quivering behind his beard and eyes nearly in tears behind thick glasses, the following anecdote:

“I once had a student who received a failing grade at the end of term.  [I’ll edit out some of the more saccharine details concerning the socio-economic status of the student in question.]  When he came to see me about his options for summer school, he asked me, ‘Sir, why did you fail me?’  And then it hit me:  I…had FAILED…this student.” 

There was a dramatic pause as he waited for the play on words to sink in. I think the looks of shock on our faces must have confused the poor fellow.

Unfortunately, this is exactly the sort of P.D. session that has been found to work on teachers, time after time. We are, by and large, an unusually sensitive group of people, with a more-than-average propensity for caring, self-sacrifice, and a sense of duty to society. Most teachers get into the profession with altruistic motives. The best teachers I know are constantly questioning themselves, wondering if they could have done something more, something better, to help kids in need; re-working lesson plans, staying for extra help sessions, making impassioned pleas to parents for the support their children need in order to succeed. We are highly susceptible to guilt. The educational environment at present adds to this, too. Sometimes it seems as if there is a kind of foot-race of martyrdom amongst teachers and administrators, with everybody eager to demonstrate that they care more than their colleagues about the darling “children” (who, by the time they reach us, are well into the age bracket that includes drinking, smoking, driving, sex, and voting – though hopefully not all at once).

Some of this is genuine – to be fair, a lot of it is genuinely altruistic. We do care; we care a lot. Many of us have given up careers in other fields because we felt a calling. It’s real. It matters. But some of it is overdone – altruism to a fault, as they say. Some is self-serving, and worse, some is actually cynical. Between the glurge (“Do it for the poor, suffering children!”) and the narcissistic (“I’M going to save the poor children!”), there is the manipulative (“Why aren’t you doing more for the poor children?”). All of these attitudes are major encumbrances when you’re trying to nurture actual kids. The title of this blog is “The Nature of Nurturing” – I contend that there is a fundamental flaw in the common conception in the field of education as to what “nurturing children” actually means. To my mind, nurturing means giving them nourishment: something that infants use to help them grow and get stronger. Not something that enslaves them, but something that liberates them to make their own decisions with confidence and allows them to feel the satisfaction of having made good ones.

Is this an image of Nurturing?  Sure…only for infants who can’t walk!

What about this one? Where are his parents?  Letting him explore on his own!

What about this one? Can’t he hold his own pencil?

Or to learn from bad ones. Aye, there’s the rub. In the English-speaking world, including Canada (where I live and work), England, Australia, and the United States, there has been a disturbing trend towards what I would call an hysterical over-protection of children, one that speaks volumes about our own psychoses as societies and says very little at all about learning to manage, or even to understand, risk. A massive anxiety has arisen in our very affluent societies, and it seems to be linked to the combined influences of a general lack of immediate consequences to our actions; of the rise of individualism (and the concurrent loss of community) in the West; of media and commercial interests (too often one and the same); of the fear of litigation; and of an overwhelming amount of what is falsely termed ‘choice’, which is linked to all of the previous items on this short list. Over the next few postings, I’ll look at each of them briefly, and offer my perspective on why risk, and the occasional failure that comes with it, is not only tolerable but necessary for proper healthy development.
