Ten Things Our Grandparents Got Right #7: …And Plenty of Sleep

In Shakespeare’s tragedy Macbeth, the guilt-ridden title character hears a hallucinated voice telling him that he has lost the ability to sleep, and for the rest of the play he is, effectively, an insomniac. Realising what he has lost, Macbeth laments by listing sleep’s virtues:

Methought I heard a voice cry ‘Sleep no more!
Macbeth does murder sleep,’ the innocent sleep,
Sleep that knits up the ravell’d sleave of care,
The death of each day’s life, sore labour’s bath,
Balm of hurt minds, great nature’s second course,
Chief nourisher in life’s feast.
Macbeth (2.2.46-51)

He couldn’t have been more right. More and more studies point to our society’s desperate need for more and better sleep, and to the drastic, rather frightening consequences of not getting it.

How much sleep should kids be getting? The National Sleep Foundation recommends the following:

That’s right: teenagers need more sleep than adults, not less. But how many of you with teenaged children find that it’s you who are getting more sleep? Oh, sure, teenagers might sleep late on the weekends. But that’s part of the vicious cycle of adolescent sleep deprivation, which, according to all kinds of sources, is approaching epidemic levels: here’s a graphic showing how it works.

Taking this one item at a time, it begins with later bedtimes for teenagers. This has many causes, including an apparent circadian “reset” that happens during adolescence. As is usual with such findings, however, it should be taken with a grain of salt, lest we fall into the chicken-and-egg trap: are the changes in teenagers’ brains the cause of their changed habits, or the result? Knowing what we know about neuroplasticity, it’s a question worth asking. More on that later.

Another reason teenagers are staying up later is the same reason most of us are these days: the unnerving loss of the condition under which humans have traditionally slept – that is, night. We are losing true nighttime, as I realised during the great Northeast Blackout of August 2003, when I was stranded in New York City. I spent a couple of days sleeping on the floor of the airport, and I was privileged to see what hardly anyone in my lifetime has seen: the Milky Way in the skies above Manhattan. For the majority of us on the planet who now live in cities, light pollution has turned nighttime into a kind of dull glow – a “luminous fog”, as Ian Cheney puts it in his documentary on the loss of night, The City Dark.

The body, simply put, needs darkness. The ebb and flow of sunlight across the ancestral day regulates melatonin production in the brain. Disrupting this process with light – particularly blue light – has been linked to a number of surprising adverse health outcomes, including cancer, depression, obesity, and heart disease. Yes, cancer – and it seems light pollution might exacerbate air pollution, too, by killing air-cleansing agents that work only in darkness! The American Medical Association (AMA) has recognised the important role light pollution plays in harming the population’s health: in an action of its House of Delegates at the 2012 Annual Meeting, the AMA “Recognizes that exposure to excessive light at night, including extended use of various electronic media, can disrupt sleep or exacerbate sleep disorders, especially in children and adolescents.” (My emphasis)
The presence of TV, computer, and now cell phone screens in bedrooms is a major disruptor of normal sleep patterns as well. And, of course, there are social reasons for teens’ staying up later: they’re building and testing their autonomy away from parental influence.

Unfortunately, although teenagers might try to compensate by sleeping late on weekends, the body’s need for sleep functions something like a bank account: you can run a debt, and it’s cumulative. That means that if you accumulate ten hours’ worth of “sleep debt” one week, simply returning to normal sleep levels the next week isn’t going to cut it: you need to repay those ten lost hours on top of it. And the body is an unforgiving creditor, as we shall see.
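For the numerically inclined, the bank-account analogy can be made concrete in a few lines. This is a toy sketch only: the nine-hour nightly need is a rough stand-in for the teen recommendation, and the weekly numbers are invented for illustration, not taken from any study.

```python
# Toy model of cumulative "sleep debt": each night you owe the difference
# between what you need and what you got; extra sleep pays the debt down,
# but the balance never drops below zero.
NEED = 9.0  # hours per night a teenager roughly needs (illustrative figure)

def sleep_debt(nightly_hours):
    """Return total hours still owed after the given run of nights."""
    debt = 0.0
    for hours in nightly_hours:
        debt = max(0.0, debt + (NEED - hours))
    return debt

# A school week of seven-hour nights, then two ten-hour weekend lie-ins:
week = [7, 7, 7, 7, 7, 10, 10]
print(sleep_debt(week))  # 10 hours accrued, only 2 repaid: prints 8.0
```

The weekend lie-in shrinks the balance a little, but, as the analogy suggests, simply going back to normal the following week leaves the remaining debt untouched.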

What, then, are the consequences of sleeplessness? They’re a lot worse than just grumpiness in the mornings, which most parents of teens have experience with. In fact, they’re really, really bad. And the worst part is, we’re not taking them seriously, so it appears things will get worse before they improve. Here’s a graphic that outlines just some of the problems:

Let’s start with the physical consequences of sleep deprivation. As an ex-soldier, I’m familiar with quite a few of them personally, and I remember my acquaintance with them with a great deal of discomfort, even 15 years later. Sleep deprivation contributes to:

• Hypertension
• Heart disease
• Diabetes
• Obesity
• Reduced immunity
• Death

Yeah, death. Partly because of the reduced immunity (hamsters kept awake died within three days), and partly because of the over 100 000 car accidents a year caused by inattention or microsleeps behind the wheel. (Humans can’t stay awake indefinitely; involuntary moments of sleep called ‘microsleeps’ set in not long into prolonged sleep deprivation.) Death by car accident remains the number one killer of young people in Canada and the U.S., although our complete surrender to car culture blinds us to the sheer staggering numbers: in the U.S. in 2010, seven teenagers died every single day in car accidents. Add to that the number of kids whose health is affected by obesity, as well as the second-biggest killer of teens – suicide, which is of course linked to depression (see below) – and you’ve accounted for the majority of teen deaths in North America, period. Taken together or apart, these are a clear and present danger to our young people.

Bad as those are, the cognitive consequences are just as harmful. Sleep deprivation causes or contributes to:

• Symptoms often mistaken for ADHD, an increasingly diagnosed disorder in teens
• Reduction in ability to concentrate and pay attention
• Memory reduction and loss, as well as reduced verbal skills (as an English teacher, I see this all the time).  This also includes a heavy reliance on simple, clichéd phrases and a crippled capacity for creativity.  Uncommunicative, incoherent teens are a cliché themselves!
• Hallucinations (I vividly remember seeing Napoleon one night after many days in the field!)
• Impaired judgement, especially moral judgement:  it turns the world into a black-and-white affair
• Depression (also increasingly diagnosed in our teens)
• Reduced ability to cope with negative emotions (the famous ‘moody teen’ syndrome)
• Increased percentages of substance abuse
• Impaired ability to judge and manage risk (which contributes to all sorts of behaviour that adults have blamed teens for in the past)

In a society like ours, where teenagers and adults alike are chronically sleep-deprived, the consequences bear thinking about. From my vantage point as an educator, I can say two things:

1. I can’t educate sleep-deprived teens. Their brains are too severely compromised for real learning to take place.

2. This is bigger than it looks. Take a good look at the list of cognitive and physical consequences. Apart from hallucinations and death, how many so-called “attributes” of adolescence, as our society has defined it, are actually not intrinsic to teens at all, but symptoms of sleep deprivation? How many of the current epidemics of ADHD, poor academic performance, depression, moodiness, substance abuse, car accidents, obesity, and poor judgement, which we have traditionally looked at individually, might actually be the result of, or at least exacerbated by, a lack of decent sleep?

As I’ve mentioned in previous posts, teenagers in North America and the developed world are not typical of adolescents in the rest of the world or in the historical record. There is a real danger in treating our teens, who are extreme behavioural outliers worldwide, as representing some kind of biological or developmental “norm”: it leads us to look at behaviour that is, by definition, atypical, judge it ‘normal’, and then adjust our expectations to a false norm. I see no reason, in the historical record or in other cultures I’ve visited, to believe that the intensely anti-social, attention-challenged, moody behaviour and crippled judgement that most North Americans associate with the teenage years is biologically based at all. Looking at this evidence, I’m strongly leaning toward viewing all those symptoms as springing from (or at least connected to) a single cause – namely, unhealthy sleep patterns.

In a typically Western fashion, we have ascribed these behaviours to non-contextual categories and decided that they represent the ‘nature’ of adolescence, rather than seeing them in a broader social, historical, and holistic matrix of contingencies. In a brilliant paper published in 2010, UBC professors Joseph Henrich, Steven J. Heine, and Ara Norenzayan challenge the practice of making universal generalizations about psychological norms based on studies done almost exclusively on American undergraduate students. Henrich et al. point out that of all the human populations in the world, these WEIRD (Western, Educated, Industrialised, Rich, and Democratic) people are among the least likely to represent global norms. In fact, in areas as wide-ranging as spatial awareness, visual perception, moral reasoning, and inferential reasoning, WEIRD people are “frequent outliers”, “particularly unusual compared to the rest of the species”. They go on to say in the abstract that “members of WEIRD societies, including young children, [my emphasis] are among the least representative populations one could find for generalizing about humans.” The full paper, available as a PDF here, is well worth reading.

We’re witnessing a number of crises in the mental and physical health of our children, but are we being blinded to a simple solution to nearly all of them simultaneously, because we’ve become so used to the symptoms of the epidemics of obesity, depression, suicide, and car accidents that we’ve started thinking they’re normal?


Speculations on the Culture of Fear

I wondered aloud, in my last post, how we could have gotten to the point where, in direct contradiction of some very clear, very scary, not-at-all-obscure-or-complex evidence of the consequences of our actions, we have become so terrified of risk that we are actually killing our kids, after making them miserable. All in the name of love.

This is not an easy question to answer. Certainly, it won’t help to oversimplify, as Tim Gill points out in this little diagram, from his website ‘Rethinking Childhood’:

This is a tempting shorthand for what’s been going on in kids’ lives and in the minds of parents, culminating in what I believe is a real crisis in kids’ physical and mental health. But a more thoughtful approach looks for the roots of problems, and is not distracted or satisfied by proximal causes. Here’s Gill’s proposed ‘rethinking’ of the problem:

Of course, as he admits, there’s more to it even than that. Where do all these gadgets come from? Whence all the traffic? Why are parents working such long hours? And are these fears, in fact, well grounded? These questions need answers, and I’ll try to provide a few, in a minute.

But in the meantime, parental anxiety has been identified over and over again as the most proximal cause of the inactivity of our kids. Why has parental worry seemingly exploded since the days of our pragmatic, capable, depression-era parents or grandparents? A lot of the answers to this question actually intersect significantly with the answers to Tim Gill’s questions, above.

Margaret K. Nelson, author of ‘Parenting Out of Control’, looks at the phenomenon of parental anxiety and its consequences, both for anxious parents themselves and, of course, for their offspring. In an impressive two paragraphs near the beginning of the book, she summarises some of the prevailing theories of its genesis, then adds research of her own that expands on those theories. Briefly (with shorthand names I’ve given them), some of them are:

The “Culture of Fear” argument: Due to media exaggerations and obsessions with violence, terrorism, and sexual predation, parents are hyper-aware of potential dangers that lurk in what they perceive as an increasingly violent, risky world.

The “Only Child” argument: Because parents (partly due to increasing urbanisation) frequently have only one child, they never gain the perspective that comes with experience, and remain anxious “new parents” the whole time they raise their first – and only – child. This is so common that in families with multiple children, the eldest often ends up resenting the younger siblings, who, because their parents were more relaxed and experienced by the time they arrived, grow up with fewer anxiety-induced restrictions on their activities, and often receive privileges at earlier ages than the first-born did.

The “Little Emperor” argument: Related to the Only Child, this argument suggests that parents’ adulation of, and anxiety for, their offspring is exaggerated to unhealthy levels because of the uniqueness of a child without siblings. My name for the argument is taken from the “one child” policy of China, which has been popularly blamed for creating a generation of ‘little emperors’ – spoiled children who are treasured by their parents because the state forbids their having siblings.

The “Erosion of Adult Solidarity” argument: This argument suggests that, as our society becomes more and more individualistic, we have transferred the burden of rearing a child to the sphere of the family or, in many cases, to single individuals, whereas in more cohesive cultures around the world this job is seen as the collective responsibility of the whole community.

The “Risk Society” arguments (broken down into sub-categories):

The “Amnesiac” argument: Anthony Giddens and others suggest that the erosion of any strong cultural or historical link to the past creates an overemphasis on, and anxiety for, the future, including notions of safety.

The “Master of My Fate” argument: As danger is redefined, from “fate”, or “chance”, to “manageable risk”, the emphasis is placed squarely on personal responsibility. The idea that the natural world is complex and fundamentally unmanageable is replaced with legal notions of ‘due diligence’ that retroactively assign blame whenever an event takes place that is deemed, in retrospect, to have been avoidable.

The “No Social Net” argument: As governments increasingly retreat from what many see as their fundamental duty to provide for their citizens, more and more responsibility for kids’ health and safety is placed on individual parents.

To these, Nelson, based on her research, adds some nuance in the form of:

The “Social Class” argument: Having noticed significant differences in how intensive parenting manifests itself between the lower and “professional-middle” classes, Nelson suggests that ideas about their offspring’s future financial security motivate parents of different classes to ‘helicopter’ in different ways. What is constant, though, is the basic assumption of an uncertain economic future, which the Boomer generation did not share, and the desire of parents to see their kids replicate or exceed their own social class, something no longer seen as guaranteed.

But there’s more! Lenore Skenazy, in her book ‘Free Range Kids’, suggests a number of other reasons, namely:

The “Opportunistic Vendor” argument: Recognising the immense opportunity for lucrative business in pandering to the health- and security-obsessed, vendors build whole markets selling ‘solutions’ to so-called problems that would have been laughable even a generation ago (baby knee-pads, anyone? How about tracking devices for your teenagers? You get the idea).

The “Know-it-all Expert” argument: Related to the one above, this argument questions the rise of the “one size fits all” brand of so-called “Parenting Experts”, whose primary function seems to be to sell books and magazines telling parents what they are doing wrong and how it will permanently damage their children. They make their money by claiming that there is a ‘right’ way to raise a child, and that only they have the secret – which they will impart to you for a price!

The “Social Pressure” argument: Caught in a media firestorm when she allowed her son to ride the NYC subway on his own, and dubbed “America’s Worst Mom”, Skenazy certainly felt the pressure to conform to the new social norms. Luckily, she educated herself about the origins and validity of those norms, and stood her ground. Many parents succumb. My own sister was upbraided by a stranger in a car-park outside a bank, where she had briefly left her ten-year-old daughter in charge of her toddler-aged brother while she ran in to make a deposit. She wasn’t gone more than a few minutes, and though the day was warm, my niece was perfectly capable of opening a window at age ten! The stranger actually called the police, having apparently read one of the many media stories of tragedy involving infants or dogs left in hot cars, and being unable to tell the difference in context. Though my sister remains adamant that she did nothing wrong, the experience was unpleasant enough that she has never repeated it. The intense pressure that mothers face from Nosey Parkers and busybodies is real: nobody wants to be called a Bad Parent, especially when, as is increasingly the case, being branded one is likely to bring you under the cruel scrutiny of the law. Which brings me to the next point, viz:

The “Legal Pressure” argument: Laws are often reflexions of social norms, and when those social norms go bugshit crazy, the laws often follow. The ‘Free Range Kids’ blog is full of anecdotes about draconian, blinkered applications of stupid laws that have profound negative effects on the lives of parents trying to buck the trends and raise their kids as sanely as they know how. One Florida lawyer presents convincing arguments that many parents unjustly accused of negligence in the U.S. cannot even get fair trials anymore, because public dialogue on child safety has been so severely compromised that jury members, and even judges, cannot make rational decisions on the subject in this culture of fear.

The “Lousy Judge” argument: Our brains, as Skenazy and others like Dan Gardner point out, are phenomenally, evolutionarily predisposed to stupidity when it comes to risk assessment. Without education, our brains get the numbers wrong every time. And the absence of sensibly presented data in the media doesn’t help. See my previous post for a deeper look at this one.

The “Cultural Shut-Ins” argument: Our lack of interest in the past, combined with cultural insularity, gives us little knowledge of how other cultures (including our own, in the past) have treated issues of child-rearing, so the echo-chamber of our modern, Western, culture-specific worries grows louder and louder, with no parallel experiences to contrast or challenge them. North Americans’ fabled lack of worldliness and historical knowledge is a massive handicap here, aside from just making us insufferable to people of other nations.

To these, I myself might also add two psychological arguments that, while they might not directly cause anxiety, certainly help to explain why it might be augmented under certain circumstances:

The “Self-Efficacy” argument: Related to, but distinct from, self-esteem, self-efficacy is the increased sense of personal confidence and ability to deal with difficult things that comes with…well, doing difficult things. It’s a sense of competence that comes with skill, which in turn comes with experience. The safer we become, the fewer difficulties we encounter, which means that the self-efficacy ‘muscle’ becomes atrophied, and we lose perspective about what constitutes real danger, as well as our ability to cope with simple inconveniences.

The “Crooked Barometer” argument: Related to the self-efficacy argument, this psychological argument suggests that when a high level of risk is eliminated or otherwise subverted, as in our modern ultra-safe society, the brain “advances the queue” of smaller anxieties to fill the space left by the genuine threat, making small worries seem comparatively larger. It promotes, in other words, molehills to the rank of mountains, but only in the absence of real mountains, which would provide perspective.

Have I missed anything? 🙂

Of course, most of these factors are linked to each other, and reinforce each other, making it more and more difficult to have a coherent, calm conversation on the subject at all. But I’d like to try to construct a narrative out of these seemingly disparate proximal causes, in the hopes of stumbling onto something closer to the root of all of them. Here’s the (probably too graphically challenging) flowchart I came up with based on the factors listed above:

I’m kind of impressed by how central the rise of corporate capitalism is in all this, as well as the brand of urbanisation it encourages. The media, while extremely influential, is mostly just reacting to market forces when it fearmongers to the extent that it does, as well as reflecting and amplifying society’s elevated levels of fear; Dan Gardner refers to this as an “echo chamber” effect. I don’t want to “let the graphic speak for itself” when it’s so obviously complex, but nor do I want to belabour the point here, so perhaps I’ll try to articulate the narrative this graphic suggests to me in a future post.


Ten Things Our Grandparents Got Right #6: Fresh Air and Exercise

There’s an amusing anecdote in The English Gentleman, a humorous exposé of English upper-class life by Douglas Sutherland. In it, the author instructs us in the various ways one might master gentlemanly behaviour, from the dining hall to the hunting field:

He must also realise that once he is in the saddle he must be as rude as possible to anyone who crosses his path. One quasi-gentleman, when he was asked by the Master what the devil he thought he was doing out hunting, was naïve enough to reply that he only came out for the fresh air and exercise. ‘In that case you had better go home and bugger yourself with a pair of bellows,’ thundered the Master, riding off in pursuit of another victim for his scorn (91).

I mention this for two reasons. The first is that in 1978, when the book was published, “fresh air and exercise” was the natural answer to the question of what one was doing outside. The ‘quasi-gentleman’ answered without thinking, which allowed the ‘Master’ to take advantage of the idiom – one I (growing up in the 1970s) heard all the time from my mother as she closed the screen door behind me and sent me out into the world. I was not to come back until lunch time, after which I would be out again until dark. This, sadly, seems no longer to be the case for our young people.

The second is that we have, if you will pardon the expression, well and truly buggered ourselves on this front. Somehow, my generation, ignoring our own childhood experiences of endless summer days, independent adventure, creativity, and blissful activity outdoors for its own sake, have transformed the notion of ‘outside’ from “the natural and salubrious habitat of a child” to “the weird and unnatural habitat of paedophiles and/or early death”. In a single generation, it seems that the natural “roaming range” of a child at play has declined to one-ninth its former area! Have a look at this unsettling picture, from Britain’s Daily Mail:

This trend seems to be accelerating: the percentage of Canadian kids who play outside after school has dropped 14% over the last decade alone, and 46% of Canadian kids now get 3 hours or less of active play per week, including weekends.

The consequences of this sudden trend toward raising children in captivity have been enormous, and their effects on children’s mental and physical health are well documented. For complex social reasons, children are now subject to restrictions on their movements and activities that outnumber those placed on incarcerated felons. In one of the worst examples of Newspeak I have witnessed, we justify our shameful treatment of young people by claiming that it is in their best interest.

Many go so far as to blame children for being lazy: the typical response of colonisers to the colonised. Create conditions that are so unhealthy and oppressive that, in order to survive, the victims adapt and change their behaviour. Then blame them for that behaviour and use it to justify further control and oppression.

Kids are not naturally lazy. In a recent global study, playing outside with friends was the single most popular choice of activity for children around the world. They don’t want to just sit quietly, allaying their parents’ worries: depression rates in children are skyrocketing. They’re fatter, sicker, stiller, and sadder than any kids in history. In Canada, 92% of kids say they would rather play outside than watch TV. So why, why, why are they spending, on average, almost 8 hours a day in front of screens – as much time as you or I might spend at a full-time job?

The answer: parental anxiety. Parents’ fears for children’s safety have turned them into (in the words of one child welfare spokesperson in the U.K.) “battery chickens”. These parental fears are almost totally uncalled for. As experts have repeatedly pointed out, contrary to media-distorted perception, the world is not becoming more dangerous. Crime levels are at or below the idyllic levels of Baby-Boomer childhood days. Violent crime is especially low, as is death from disease or accident. We are the safest people ever to walk the planet, as Steven Pinker points out in this TED talk:

In fact, we’re so safe it’s becoming dangerous. Everyone has seen for themselves that “kids are getting fatter these days”. But that seems sort of benign, next to the horrific fears of pedophile abduction that popular culture forces on the imaginations of parents everywhere. So let’s look at the consequences of sedating our children.

PHYSICAL CONSEQUENCES

In short, inactivity is killing our children slowly. The New England Journal of Medicine reports that, for the first time in history, our children’s lifespans could be two to five years shorter than our own. This is despite our increasing safety, mentioned above, and despite the fact that it appears to be entirely avoidable: fit individuals outlive unfit individuals across the board, for all causes of death. Regular physical activity is associated with as much as a 30% reduction in all-cause mortality.

Some fun facts from the ParticipACTION site (remember them? I used to have a jacket sewn all over with medals of theirs…one of my proudest moments as a kid was earning a gold one year):

• The number of obese children has tripled in the last 3 decades.

• That means that 26% of our kids are overweight or obese. That’s 1 in 4. By the time they reach adulthood, that same percentage will be fully obese.

• Sport participation rates in Canadian youth aged 15-18 declined from 77% in 1992 to 59% in 2005. Adults continue the trend: Canadian adult participation in sport declined from 45% in 1992 to 28% in 2005.

• What is killing our kids? Car accidents, mostly. But if they make it to adulthood, it’s heart disease, possibly cancer. Inactivity contributes strongly to more than 25 chronic conditions, many of which are potentially fatal: coronary heart disease, stroke, hypertension, breast cancer, colon cancer, type 2 diabetes and osteoporosis.

• The link is strong, and causal: inactivity doesn’t just exacerbate illnesses – it causes them. Physical inactivity is estimated to cause 21-25% of breast cancers and colon cancers, 27% of diabetes and 30% of ischemic heart disease.

• Physical inactivity – let’s be clear – is deadly. It is one of the five leading global risk factors for mortality and is estimated to cause 2 million deaths per year. Now a paper in the prestigious medical journal the Lancet is calling for exercise to be listed as a vital sign, alongside pulse and respiratory rate.

• For the first time in history, obesity is responsible for more deaths than being underweight, worldwide.

• This is, by all normal definitions, an epidemic. And a big one. In lower-income countries, it is comparable to two of the biggest, scariest health scourges on Earth: HIV/AIDS and tuberculosis. And yet, we’re apparently doing nothing about it. This is despite the fact that, repeatedly, Health Care is reported to be the #1 topic of concern to Canadians.

Even if you define ‘concern’ as merely referring to the cost associated with it, we’re being obtuse. The estimated costs of obesity and physical inactivity are as high as $7.1 billion a year, as of 2008. Add in the costs associated with reduced productivity, and you’ve got what should be a massive financial incentive to get our kids outside.

MENTAL CONSEQUENCES

Children are experiencing depression and anxiety at earlier ages than ever before. Statistics Canada has found that 6.5% of youth and young adults between 15 and 24 had major depression last year. That’s more than 250,000 kids. I recently attended a mental health seminar at the Ontario school where I work. We were told that, out of a population of 1500, four students know a peer who has committed suicide in the last year. City-wide, in a population of around 22,500 high school students, 48.7% of male students and 47.9% of female students had experienced depression during their high school careers; 45% of the males had never admitted it to anyone, suggesting that the alarming rates of teen depression in the news are actually underreported.

I can’t help but feel that the answer to these problems is the same one my mum had, all those years: Fresh air and exercise. And there’s a growing body of research to back her up. Let’s break it down:

FRESH AIR

By ‘fresh air’ I mean being outside in nature. Parental fear of strangers and traffic has contributed to the decline of unstructured, unsupervised, outdoor play. Fully 71% of today’s mothers said they recalled playing outdoors every day as children, but only 26% of them allow their kids to play outdoors daily. Most Canadians (75%) got their primary opportunities to experience the outdoors through school programmes, many of which are now being cut. Even recess has become a thing of the past in many North American schools.

So few kids get out into nature these days that Richard Louv has coined the term “Nature Deficit Disorder” to describe the array of symptoms that could be allayed or eliminated with more contact with the natural world. These include:

• ADHD: even marginal exposure to nature alleviates many symptoms

• Vitamin D deficiency

• Myopia:  do more kids wear glasses these days, or is it me?

• Asthma

• Stress, anxiety, and depression

Check out this infographic, from the David Suzuki Foundation:

These benefits are over and above those of just plain ol’ exercise, and the reduction in obesity-related morbidity mentioned above. In fact, exercise in a natural setting seems to have more benefits than exercise alone: even the sight of green spaces through a hospital window has been linked to faster recovery from surgery! A paper in the Proceedings of the National Academy of Sciences posits a direct relationship between biodiversity and human mental health. Even five minutes of exposure to nature has a significant positive mental and emotional impact.

EXERCISE

The benefits are huge, and the drawbacks are nonexistent. Why do we not just do this? Remember, kids want exercise. All we have to do is get out of their way. Seriously: that might be it. We might be able to cure one of the biggest epidemic health threats to our children just by calming the hell down and stepping back. According to Dr W.H. Dietz,

Opportunities for spontaneous play may be the only requirement that young children need to increase their physical activity. Reducing the amount of time that children are allowed to watch television is one strategy that offers children opportunities for activity, and it is likely to alter requests for advertised foods as well.

The physical advantages of exercise should be obvious; what might not be so apparent are the psychological, neurological, and cognitive benefits it confers. I previously reported on an open letter from hundreds of mental health professionals around the world, calling for the return of unstructured, unsupervised play as a potential cure for many of the psychological woes suffered by our children these days.

But there’s more: lots more.

One of the ways parents interfere with children’s fresh air and exercise is an inappropriate, status-driven obsession with academic performance. As is so often the case with these things, the obsessive behaviour actually brings about the very outcome it is meant to avoid: regular exercise in fact increases attention, focus, memory, critical thinking, and overall cognitive ability.

Further to the treatment of ADD and ADHD by exposure to nature, discussed above, exercise has also been shown to have a positive effect.

It has been shown to increase brain function and, critically, plasticity, allowing for amazing advantages ranging from recovery after brain injury, to an increased ability to learn, to a reduced risk of dementia.

Aerobic exercise has even been linked to neurogenesis: it triggers the growth of new brain cells, something people used to think was impossible. This effect appears even in the brains of depressed people, where neurogenesis is normally reduced; the reduction of cortisol, a stress hormone, may be implicated.

It has also been shown to increase blood flow to the brain, which has an overall, generalised beneficial effect on executive cognitive function. Study subjects showed marked improvement in areas such as “tasks that require planning, working memory, multitasking, [and] resistance to distraction.” Mental exercises, by contrast, tend to be task-specific in the way they improve cognition.

Exercise prevents memory loss by reducing feelings of stress, anxiety, and depression. It also has a positive effect on adolescent sleep patterns and insomnia, which are at the root of all kinds of health and cognitive detriments.

After a study by a Harvard medical team, even our very own national emblem, the Mounties of the RCMP, have adopted a fitness programme for their officers – not to keep them fit enough to chase criminals, but specifically to improve their cognitive function. They have invested in fitness, in other words, to help their inspectors solve crimes.

So with all these common-sense, obvious, well-documented benefits of fresh air and exercise, how have we allowed our own groundless anxieties to rob our children of the world of nature, and possibly of their very future? I’ll explore some of that in my next post.


Filed under education

Student Protests: Part 3

Finally, I’d like to address another facet of the reaction to (against) the Québec student protests:  Childism.

It is a remarkable fact of life in modern North America that there exists, in nearly every part of society, a profound contempt for, fear of, and even hatred of children. This is a provocative statement, but I believe the remarkable nature of prejudice against children in North America has a lot to do with the fact that so few people find it remarkable at all, in the sense that nobody remarks on it. In fact, I think most people would reject the idea, and feel genuinely shocked at the suggestion. But let’s look at the question objectively, if we can. This is difficult, since (as I posted elsewhere) one of the factors that most powerfully interferes with cognitive objectivity is the mention of children. But we’ll try.

One of the indicators that we are dealing with a prejudice is the use of an identifiable group’s name as a common pejorative. In many of the comments in the English media concerning the student protests in Québec, the term ‘childish’ comes up. Imagine, for an instant, replacing the negative description of a protest for the rights of the disenfranchised with another group’s name. Imagine if the protests were being compared to, say, women: “Those protesters are just being womanly.” Or an ethnic group: “Don’t those protesters see how African-American they’re being?” Or a religion: “Why can’t they just stop being such Muslims about all this?” We’re shocked by those hypothetical statements (which only a generation ago wouldn’t have been hypothetical at all), but we take the ‘childish’ moniker in stride. After all, there are no children who object publicly to the term, so we can safely ignore them.

The assumption behind such name-calling is that children are selfish, Hobbesian creatures of pure ego: irrational and impulsive, incapable of reasoning or debate, or even of having real reasons for doing what they do. They are mindless. They act randomly, out of intense self-interest, sucking resources from society and giving nothing in return. Their value, if they have any, is entirely passive, not active. They provide us with joy because we look at them and think they’re cute; they’re there for our aesthetic appreciation, like possessions or pets. Such a creature ought surely to be sequestered from society, have its rights limited or annulled, and, in the best Victorian manner, speak only when spoken to, for the good of the social order.

Does this sound like your child? Of course not. Yours is the exception. Thanks to the cognitive biases known as the Self-Serving Bias, the Introspection Illusion, and the Attribution Bias, among others, we are quite willing to believe that our own (or our close peers’) motivations for particular beliefs or actions are rational, while those of our neighbours are emotional or irrational. This is true even when the action or belief is identical. A recent study found that people explained their own belief in God, for example, in purely logical, rational terms, while attributing others’ belief in the same God to irrational motivations like upbringing or tradition. In my own classroom, I run a twice-annual survey that asks students in their senior year of high school about their own motivations, as well as those of their peers. The numbers are too small to be scientific, but they pretty much exactly follow the trends mentioned in Michael Shermer’s article, in the link above. My students consistently claim (on anonymous surveys) that their peers are more biased than they are – and the fact that they are told they are taking a survey on universal cognitive biases does not seem to influence their choices, or make them consider the possibility of their own prejudices at all!

Think for a moment about driving on the highway.  Three situations present themselves:

A) You’re driving a little below the speed limit, enjoying your day, and someone in a faster car hugs your butt for a click or two, then zips past you at higher speed. This person is a jerk, you reason; what’s he trying to prove? I’m already going 90; is his manhood threatened or something? Why is everything such an emergency all the time for some people?

B) You’re driving a little above the speed limit, since you have to get somewhere in a hurry, and you come up behind someone going slower – maybe even a little under the limit. Come on, you think; let’s go! Can’t you at least go 100? Why does everyone have to be such a sightseer?

C) You’re driving the speed limit, and there’s a car directly beside you, in the next lane, pacing you the whole time. Geez, you think, what is this creep’s problem? What happened to personal space? Does he have to take the speed limit so literally?

The interesting thing about these scenarios is that in every case, the other person is assumed to be irrational, whereas you have good reasons for being a little fast or a little slow on the highway. Those reasons are not typically extended to other drivers; they do what they do because they are irrational, whereas you do the exact same thing for justifiable reasons. The other interesting thing is that there is no winning situation for the other driver: he can go faster than you, slower than you, or the same speed as you, and in every one of those cases, he’s a jerk. Not so easy, being someone other than you! 🙂

We all do this. If you’re blushing right now, recognising your own behaviour, you’re just like the rest of us. The shocking part isn’t so much that we do this, but that we’re all so blind to it until it’s pointed out to us. That’s how prejudices work. Part of the self-serving bias tells us that, since we’re essentially good people, the things we do are essentially good, too. It is extremely difficult, without psychological damage, to conceive of ourselves as people who are not fundamentally good. This makes our prejudices hard to catch, hard to own up to, and hard to change. Ever have the kind of habit that people have to point out to you, or else you don’t even know you’re doing it? Chewing your nails, for example? Much of our own behaviour, even if it runs directly contrary to our conscious wishes and values, remains invisible to us. And so it is with attitudes.

If you were to ask someone, “If you were likely to take to the streets and protest against some government policy that directly affected you, what might be some of the reasons for doing so?”, they would likely be able to come up with several possibilities without much effort, all very noble and reasonable.  But when presented with masses of students protesting something that they have not taken the time to understand, they are willing to assume that it is being done for irrational, selfish reasons, or even for reasons that are ‘essential’ to the group in question.  “That’s just what they do,” they’ll tell you, referring to protesting students.  “It’s in their nature.  They hardly even understand the issues; they just like to protest.  They’re just naturally contrary, I guess…maybe it’s hormones.  It’s childish, really.”

These same arguments were used, one need hardly remind anyone, against Abolitionists and Suffragettes in times past.

BONUS FUN THOUGHT EXPERIMENT #1:  Next time you are engaged in a disagreement with an adult woman, suggest to her that her inability to understand that your stance is the correct one is the result of hormones, perhaps because of her period.  Then ask her why she’s so irrationally upset all of a sudden.

Economic ‘reality’ is frequently cited as a reason not to listen to the ideas or concerns of the under-classes; and it bears thinking about how much our economic ‘realities’ are predicated on the exploitation of under-classes, if every time they insist on fair or equal treatment it is seen as such a threat to our own material comfort.  This stares us in the face every day, but we can’t seem to see ourselves in that light.  The United States grew rich on the backs of African slaves, and their descendants were (and still are) discriminated against, from Jim Crow to the modern prison system.  But the majority of people still see this as somehow part of the “natural order” or the “economic reality”, rather than the result of history and of choices made by those in power, from slaveowners to current governments.

BONUS FUN THOUGHT EXPERIMENT #2:  Ask someone if free tuition nationwide would cripple the economy, or if honouring Native land claims would (substitute any enfranchisement of any minority group, really).  If they say ‘yes’, then point out to them that we’ve just discovered that therefore, the economy not only tolerates, but actually relies on the disenfranchisement of minorities.  Watch them squirm.

The comparison of children’s rights to slavery is not hyperbole, nor a random choice on my part. Universal human rights are either a universal concept, or just another way to exploit people: if only certain people get rights, it’s not ‘rights’ but ‘privilege’, and a profound and ugly hypocrisy, using noble language for selfish purposes. Interestingly, on the subject of rights, the United Nations has a useful document detailing the universal rights of children everywhere: the Convention on the Rights of the Child. It is telling that, alone on the planet, only Somalia (which has not had an effective government since the 1980s) and the United States have failed to ratify it. Before we in Canada get too smug, though, it bears mentioning that we have been cited by the U.N. for our lack of enforcement of the document we are signatories to; true to Canadian form, it seems.

The use of the word ‘childish’ as a pejorative really gets to the root of how we see children in Canada and much of the world. Elisabeth Young-Bruehl, in her posthumous work, Childism: Confronting Prejudice Against Children, coins a term and exposes something that most people in our society would never admit exists, let alone admit to participating in. A scholar of prejudice studies, Young-Bruehl explains that all prejudices share certain traits. At their core, they represent a kind of psychological defence mechanism similar to projection, known in some academic circles as ‘othering’. Prejudice arises against a “target group…one whose members share characteristics and conditions that those prejudiced against them seize on and distort for their own purposes” (p. 19). In the case of children, these characteristics include dependency, incapacity, irrationality, and selfishness. Rather than dealing with those traits in our own lives, we project them onto children, and modify our attitudes towards them in order to strengthen the notion that they are ‘different’ from us. This can take many forms, from simple condescension to overprotection to outright abuse. In fact, Young-Bruehl makes the point that childism might be at the root of many other forms of prejudice. I am inclined to agree, seeing how often the tactic of infantilisation is used against minority opinions. Women were kept in line in previous generations through a kind of semi-benign imposed childishness: they were denied true agency in their own society under the self-fulfilling prophecy that they were incapable of mature, rational participation. The same ‘childish’ argument was used historically against Africans, both individually and as a class. Africa as a whole was seen as a kind of kid brother to Europe, needing colonial control to keep it out of trouble. And individual Africans were seen as impulsive, irrational, and potentially dangerous on some sort of Oedipal level.

The comments I have read on news websites advocating the imposition of draconian force against the ‘spoiled brat’ students appal me, but I am reminded that in popular culture, the only class of person it is still somewhat permissible to talk about beating into compliance is children. “Spare the rod and spoil the child”, the saying goes, and I would urge those of you who assume this is true to substitute another class of human being in that proverb: “Spare the rod and spoil the woman”, for example. Or what about disabled adults, based on their supposed developmental similarity to children? Obviously ugly. But with children, even if we’re not actively spanking our own, we mostly agree tacitly with the right of parents to strike theirs in the privacy of their own home. 72% of Canadians believe that spanking should remain a legal option for Canadian parents; this includes 57% of parents who say they never spank their children. More than 75% of Canadian adults report having been spanked as children. The right to strike children is enshrined in law. Any objective view of our attitudes and preoccupations with controlling our own children would have to conclude that it borders on the obsessive. Robert Epstein reports that the restrictions on adolescent behaviour in the U.S. match or exceed even those placed on convicted felons and members of the Marine Corps! Saying that we’re doing it “for their own good” doesn’t excuse it, either, especially when the evidence does not support that statement. With the confinement and disempowerment of children at an all-time high, all in the name of their own good, it’s hardly a wonder that physical and mental health are at serious risk among the youth population. This is clear evidence to me that we are not only discriminating against our children, but that our discrimination does fundamental and real harm to them.
The fact that we claim we’re doing it for good reasons does not pass the sniff test, sadly, any more than did Apartheid or Jim Crow.

The people participating in these protests are not children. They are not infants, and they are no more irrational than you or I. Their reasons for protesting are specific to their own situation, obviously, but that does not mean they are wrong. People who make this ‘argument’ (I can hardly call it that) are simply making an ad hominem attack. Unable to listen to the students’ reasons for doing what they are doing, or to make the effort to understand, they attack something personal and unchangeable about them: their age. They assume that because the protesters are ‘different’, i.e., younger, they possess characteristics that make it impossible for them to be right, or to be taken seriously. And in doing so, they reveal the truth about how we see children in general in our society.


Filed under education, Quebec protests

Student Protests: Part 2

I’d like to examine some of the assumptions that are made when one looks at the student protests in Québec through an inappropriate cultural lens.  I’ll try to take them one at a time.

1.  Student “Selfishness”

A common meme in English media and culture is that the students are simply being selfish, and unwilling to pay their ‘fair share’ for public education. In English Canada, as I mentioned in my last post, education past secondary school has become a rather corporate experience, paid for by individual consumers at great (and increasing) personal cost. It is largely seen as a means to an economic end, and thousands of students in Ontario and elsewhere go into debt to the university system out of a feeling of necessity, thinking that a degree is a laissez-passer to a job. The education is not usually an end in itself, and the crippling debt one incurs in this exercise in credentialism is merely seen as the price one pays for the degree itself. An extremely capitalist attitude, in other words, and one fraught with misconception, as James Côté and others have pointed out.

Under such a cultural understanding, the education of an individual student is just that: an opportunity for the personal economic advancement of a single person. As such, it is thought, it should be paid for by that individual, and not out of the common purse. Many commentators have put forward the opinion that “I paid through the nose for my education; why should the students in Québec get a free ride at my expense?” This completely ignores the utterly different notion of the value of education in Québec. In that province, education for the masses, and full participation in industrialised society and economies, were not always a given. They were fought for. Taxes in Québec (personal taxes, that is; corporate taxes in the province are astonishingly low) are among the highest in North America as a result of the decision that education (and childcare, for example) benefit society as a whole, and not just individuals. As with health care, the belief is that everyone should be able to access it regardless of income, and that to be denied it for any reason is a violation of basic rights. We (mostly) subscribe to this argument when it comes to health care in English Canada, but for some reason not with regard to education past high school. The fallacious and extremely conservative claim that education is ‘for’ financial advancement certainly adds to this blind spot; under that model, after all, competition, not cooperation, should dominate. More on that some other time.

So let’s look at the claim that students are being ‘selfish’.

Actually, by any reasonable definition of the term, students are not being selfish. The current student population will not be seriously affected by the hikes; the students who will be affected are not yet in high school. Ergo, the students are taking a principled stand for the future of people who are not them. They’re doing this for their future society, not for themselves; in fact, they risked their own academic year, and the money they spent on it. Hmmm… sounds like the OPPOSITE of selfish to me! Indeed, though the Charest government offered them the opportunity to be selfish, they did not take it. The government offered to delay the implementation of the tuition hikes until after the current students would no longer be affected (while simultaneously upping the fee hikes to something like 85% for that future generation of students). This offer was rejected on principle.

So, despite repeated demonstrations and statements by students that their protests are not selfishly motivated, that remains the dominant meme in the English media. Then there is the claim, resulting from this fundamental misunderstanding of the principles involved, that “these students pay less than I did for university; therefore they have no right to speak.” Leaving aside for the moment my last point, that their cultural and historical context is utterly different from other Canadians’, and leaving aside the fundamental error of fact behind such a statement, let me address the logic of that ‘argument’.

A major flaw is the fact that it represents a race to the bottom.  If your own circumstances are bad, but you can find someone somewhere whose situation is worse, it is not a strong argument to say either a) that your circumstances are therefore comparatively fine, and you have no right to try to better those circumstances, or b) that everybody’s circumstances should be as bad as that other person’s.  Sour grapes, however, are a powerful tool of division.

Let’s say that Person A pays $5 for an apple. Person B goes elsewhere and pays $10 for the same fruit. Person B finds out about Person A’s good luck, and instead of thinking, “Man, I got scammed… I’ll take it up with that unethical apple seller,” he thinks, “That stupid Person A! I’ll make his life miserable. How dare he get a better deal than me?” And then he goes to Person A’s apple seller and forces him to raise his prices to $10 as well. Result: everybody loses, except the apple merchants. How can we do this to each other? Is the Canadian ideal to just drag everybody down to the lowest, worst level possible?

The illogic of that position should be obvious, and the only question that remains is, “Why would anyone think like that?” The question cui bono? (“who benefits?”) is a useful one. While we bicker amongst ourselves about $5, the CEOs of Apples, Inc. make $5M bonuses. Divide and conquer. This, by the way, is a staple of colonial education; if the colonised are fighting each other, they can hardly spare the attention it would take to fight their real oppressors. As Ngũgĩ wa Thiong’o, the Kenyan Nobel-nominated author and political writer, says in Decolonising the Mind:

[The colonisation of Kenya]  was effected through the sword and the bullet. But the night of the sword and the bullet was followed by the morning of the chalk and the blackboard. The physical violence of the battlefield was followed by the psychological violence of the classroom.  

[…] Thus one of the most humiliating experiences was to be caught speaking Gikuyu in the vicinity of the school. The culprit was given corporal punishment – three to five strokes of the cane on bare buttocks – or was made to carry a metal plate around the neck with inscriptions such as I AM STUPID or I AM A DONKEY. Sometimes the culprits were fined money they could hardly afford. And how did the teachers catch the culprits? A button was initially given to one pupil who was supposed to hand it over to whoever was caught speaking his mother tongue. Whoever had the button at the end of the day would sing who had given it to him and the ensuing process would bring out all the culprits of the day. Thus children were turned into witch-hunters and in the process were taught the lucrative value of being a traitor to one’s immediate community.

[…] The attitude to English was the exact opposite: any achievement in spoken or written English was highly rewarded. [In the colonial education system, which advanced by qualifying exams,] nobody could pass the exam who failed the English language paper, no matter how brilliantly he had done in the other subjects. […] English was the official vehicle and the magic formula to colonial elitism.

Who benefits from this division?  Those who divide, obviously.  Those who wish to undo and negate the advancements made during the Quiet Revolution.  Luckily, just because I already got scammed and paid for my $10 apple, that doesn’t mean my kids have to suffer — by the time they get to the apple cart, it’ll be $30 an apple!  I see it (as the students in Québec see it) as my duty not to let that happen, even if I am not myself going to benefit proximally from low tuition costs.

While we’re on the subject, I might mention the vast social benefits that come from a more educated population.  Contrary to the capitalist, consumer-model, where the only beneficiaries of education are the students themselves, on an economic level, everyone benefits from high levels of good education.

Here are just a few of the big points from a study on the fiscal investment returns of education:

• Parents’ education has strong effects on children; thus the benefits of higher education accrue over extended periods.
• Higher parental education is associated with greater family investments in children, in the form of parental time and expenditures on children.
• Children of more educated parents generally perform better in school and in the labour market, and have better health. A substantial amount of research concludes that education has a causal impact on health.
• Higher parental education is also associated with lower criminal propensities in children, and with less child abuse and neglect. Lochner and Moretti (2004) calculate that raising the high school graduation rate by 1% would reduce the costs of crime by approximately $1.4 billion per year in the U.S.

These estimates suggest that the social return to education is similar to the private return associated with higher lifetime earnings, which is also in the range of 7–10 percent. Evidence suggests that the social returns to education are substantial and justify significant public subsidization of this activity. We’d be saving money in areas like health care and the justice system, in other words: sounds like a good argument against the ‘selfish’ moniker to me.


Filed under Uncategorized

Student Protests in Québec: Signs of Youth Empowerment?

I’ll get back to my series of posts about our grandparents’ methods of education soon. But in the meantime, the student protests in Québec, and the English Canadian media’s reaction to them, have my attention. I have rarely seen such virulent, ignorant, and prejudiced attitudes towards youth as in recent days. The English media’s total inability, or unwillingness, to understand the situation in Québec is astonishing, and it plays into several established memes of prejudice that I find unconscionable. With your permission, I’d like to react to some of the recent events.

I’ll start with the background, in case you’re reading this from outside of Canada. In the past few months, hundreds of thousands of students in the province of Québec have taken to the streets to protest Premier Jean Charest’s government’s proposal to hike tuition costs by 75% over five years. Exacerbating the issue, the government (claiming that civil unrest warranted extraordinary powers for itself) passed a law on May 18th that limited the students’ fundamental rights of gathering, protest, and association. The English Canadian media has portrayed all this in such a way as to encourage those outside the province to regard the protests as childish, selfish, violent, and unreasonable, and the government’s fascist response as entirely warranted. I do not find that the issues have been adequately presented, and I am only saddened, not surprised, at the angry and bitter reaction from English Canada, which cites the comparatively lower cost of tuition in Québec as evidence of the students’ irrational and selfish mindset. Let me try to explain.

Québec’s checkered history in education

Québec, for those of you who are not from here, has a long history of oppression in Canada. Since the defeat of Montcalm’s forces at the Battle of the Plains of Abraham near Québec City in 1759, at the end of the Seven Years’ War, the survival of French Canadian culture under British rule has been a difficult question. British Imperialism was at its height, and historically speaking, Britain’s treatment of conquered minorities in the colonies was harsh, with powerful incentives and policies of cultural and linguistic assimilation being the norm worldwide. Colonial education has had a very complex and mostly negative effect on these minority groups, and the pattern of education in Québec matches that of other postcolonial nations. In such places, education is often used as a weapon of assimilation, while at the same time it is made difficult for members of the minority to benefit from the process. In Kenya, for example, the colonial English school system, set up to ‘civilize’ the Africans, produced only a paltry number of university graduates, well into our own times. The crisis here at home in Aboriginal education is well documented, including but not limited to the Residential Schools.

During the period of decolonization, attempts at reform to colonial education were made throughout the former British Empire.  In Québec, the period of the 1960s brought enormous social change in the form of the Quiet Revolution.   The formerly Catholic-church-regulated elite education was challenged, and a more egalitarian model was put forward.  Before the Revolution, nearly half of all Québec youth were dropping out of school by age 15.  Education levels lagged far behind the rest of privileged, English Canada.

This historical and cultural context is almost entirely ignored in the Canadian press. Former Parti Québécois premier Jacques Parizeau recently pointed out the connection, but this went largely unmentioned outside of Québec itself, probably because most of English Canada does not understand or remember the significance of the Quiet Revolution. Instead, the situation is filtered almost entirely through the cultural lens of the English majority outside la belle province. Students, through this lens, are seen as entitled, spoiled brats who do not understand the value of a dollar, and whose irrational protests are merely an excuse to riot and party in the streets. After all, since the hard-won changes of the Quiet Revolution, tuition fees in the province have been historically lower than in the rest of Canada, where we have let rates creep up over the years on the idea that education is a commodity, a privilege, something to be bought – not, as we perceive health care to be, a fundamental right funded by taxes as a public investment.

That attitude is not something a minority group can afford. What is the best indicator of whether a child will attend university? Whether his parents went before him. But that ball has to start rolling somewhere. High tuition fees, and the crippling debt that comes with a university education, deter anyone but the privileged, and those not already facing an uphill battle in society. So, during the Quiet Revolution, it was decided that education would be treated as something fundamentally necessary for the advancement of Québec society – not, as elsewhere, as a “nice to have”, but as a right.  The corporatization of education is not something I fundamentally agree with even in Ontario, where I work; resistance to that mindset is both refreshing and hope-inducing.

My real point here is that the socio-historical situation in Québec is fundamentally different from that in the rest of the country.  You will not be able to understand the reasons behind the protests, or their massive popularity, if you view them only through your own cultural background.  Unfortunately, from what I’ve seen, the English media, and those who read it, are making little or no attempt to understand, but only to denigrate, belittle, and condemn.  This makes me sad.

In my next post, I’d like to address some of the misunderstandings that stem from using the English cultural lens to try to understand  the protests.

Are Lockdowns Really Necessary? And Why Can’t We Discuss This?

A little while ago, I posted on the subject of risk assessment in education, and how educators, like politicians, reporters, and all other humans, are just terrible at it.  Among the issues I mentioned in the post was the subject of school shootings.  Here is the relevant paragraph:

To my knowledge, there have only ever been ten acts of gun violence in Canadian schools since 1902.  The total death toll was 26, more than half of which came from a single incident at the École Polytechnique in Montréal.  One came from a school in Alberta where a friend of mine was teaching, eight days after the Columbine case in the U.S.  If you estimate the total number of students in Canadian schools since 1902 (hard to tell:  there are 5.2 million kids in school today, NOT counting universities and colleges; multiply that by 110 years and skim a bunch off for the smaller population in previous generations….you still get several hundreds of millions), and figure those 26 unfortunate people into that number, the chances of dying in a school shooting in Canada are too small for my calculator to measure without an error message.  But every year, we now have to suffer through “Lockdown Drills”, officiated by the police, where we all have to pretend there’s a maniac in the halls.  Time is wasted, kids are frightened, and money is spent for no good cause.  Remember, all violent crime is on the DEcrease, very dramatically.  Polls show that children’s safety at school is the single most common crime-related concern, and yet the school environment is statistically, indisputably, the safest place for kids – much safer than the home or the street.

I argued this at the school where I work, and got a lot of flak from colleagues who either a) took issue with my questioning lockdown drills, which they regarded uncritically as a clear benefit to our society, or b) took issue with my “wasting time” by thinking critically about policies that affect us all.  Out of this somewhat one-sided discussion came a number of interesting points that I thought warranted a separate post.  I also emailed Lenore Skenazy, the author of Free Range Kids, and she put the question to the readers of her blog, many of whom repeated the questions and assumptions made by my colleagues.  To my mind, these miss the point, since they are framed by the assumption that lockdown drills are the only real response to the threat of gun violence at schools.  I take issue with that assumption, and I’ll try to explain why here.

1.  Likelihood of Danger

 First of all, I’m not convinced that the threat of a school shooting is severe enough to warrant this kind of attention. I totally understand the perceived reason for lockdowns.  I do.  I really get the fear that comes with kids, and guns, and the potential for disaster and loss.  God forbid that anything should happen, as we all say.

 That said, here are some numbers that as far as I can tell are correct:

1.  There have been ten incidents of gun violence in Canadian schools over the last 100 years.

2.  The total casualty count is 26.  More than half of those were at Polytechnique, in 1989 – a date that comes before the dramatic drop in violent crime that began in the 1990s.

3.  The stranger-as-gunman situation is not the norm.

4.  There are about 5.6 million Canadian students below the postsecondary level right now.

5.  That gives us a pool of tens (hundreds?) of millions of student-years, a century of recorded time, and 26 casualties, which statistically yields a risk so tiny that it is treated as zero.  “De minimis” is the term.

6.  Compare this (for example) to the chance of dying in a car crash: roughly 1 in 6000 per year.  Driving to school is thousands of times more dangerous – and a real risk – than the vanishingly small chance of a shooting in school.

I can say with confidence – with more confidence than I could say it about shark attacks – that a school shooting will not happen here.  There is almost nothing in my life that I can say for certain will never happen, but this is about as close as it comes.  I cannot have the same confidence that my students will survive their car ride home.  We say that we are preparing for something that might happen: but the list of things that might happen is endless, and we can’t work that way.  We have to work with what realistically has a probability of happening.  A lockdown drill prepares for something that statistically will not happen, yet the drill itself is something we really do – and the things we actually do have real impacts, far more than the things that will never happen.
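For the curious, the arithmetic behind the list above can be sketched in a few lines of Python. The average enrolment figure is my own rough assumption (below today’s 5.6 million, to allow for smaller past populations), and I am treating the ~1-in-6000 car-crash figure as a per-year risk:

```python
# Back-of-envelope version of the risk comparison above.
# Assumptions (mine, not official statistics): an average of roughly
# 3.5 million students per year since 1902, 26 deaths in total, and
# the 1-in-6000 car-crash figure treated as an annual risk.
YEARS = 110
AVG_STUDENTS_PER_YEAR = 3_500_000
DEATHS = 26

student_years = YEARS * AVG_STUDENTS_PER_YEAR      # ~385 million
p_shooting = DEATHS / student_years                # per student, per year
p_car = 1 / 6000                                   # per person, per year

print(f"School-shooting risk: about 1 in {1 / p_shooting:,.0f} per student-year")
print(f"The car is roughly {p_car / p_shooting:,.0f} times more dangerous")
```

With these (deliberately conservative) inputs, the shooting risk comes out somewhere around one in fifteen million per student-year, and the car ride a couple of thousand times likelier to kill – which is why the exact enrolment assumption hardly matters.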

Even if we accept that school shootings are rare, people sometimes argue that lockdowns are useful in response to other incidents of violence in schools, such as knives being brandished.  One of my colleagues mentioned his experience of such incidents in support of this theory.  But again, these incidents are extremely rare and getting rarer.  ALL violent incidents are dramatically down, inside schools and outside.  I think the availability heuristic might be at work here: just because we can call an incident to mind does not mean that it is actually likely to happen.

People also mention the potential usefulness of a lockdown in the case of a bomb threat.  Has there ever been an incident involving an actual bomb at a school?  I can’t remember hearing of one.  There are plenty of threats, though; in fact, the school I teach at suffered a rash of them recently, once some students figured out that our reaction to such a threat (a school-wide lockdown or evacuation) was so extreme and disruptive.  If you want to stir up shit, get attention, and disrupt classes, what could be better?  It’s the go-to strategy of the sociopaths among our student population.  We’ve had far more fake bomb threats, based on the understanding that we will react dramatically, than we have ever had real ones.

Strangers in the building?  Recently a stranger came into our school.  He entered, went to the bathroom, and left.  We went into lockdown, and though our principal calmly soothed fears over the P.A. by telling students, once it was over, that the man had done no harm, I asked myself why on earth we would assume that a member of the society we all live in automatically has dastardly intentions.  He probably just had to pee.  Why do we assume the worst, all the time?  This says far more about us, in my opinion, than about “strangers”.

As the saying often attributed to Mark Twain goes, “The trouble with the world is not that people know too little, but that they know so many things that ain’t so.”

 2.  Cost / Benefit Analyses

People have said that the costs of NOT having drills might be very high, whereas the costs of doing them are nil, aside from some lost time.  The problem with this typical cost-benefit analysis of a lockdown drill (aside from its not factoring in the real monetary cost of having the police at the school for several hours while more than a hundred staff members sit in the dark, not teaching) is, again, that we don’t know whether it’s true.  Do these drills in fact reduce the cost in terms of human life?  What is their cost in terms of quality of life?  Where are the studies on this?  I understand that we’re mandated to do these drills, but I would really like to question the resources allocated to them.

In addition, could we think about the perceived potential benefits of performing lockdowns, compared to the real effects of actually doing them?  We might think about the possible social repercussions of normalising paranoia (that’s what it is; it’s not a realistic risk), and the anxiety that we produce.  I also wonder how many other, more pertinent risks we are ignoring.  Why is First Aid not a priority, for example?  In the explanations I’ve read for the mandated twice-a-year lockdowns, fear of litigation is the most prominent reason given.  What if something happened, and we hadn’t been seen to “do something”?  My fear is that our response is the one from Yes Minister, in syllogistic format:

 a)  Something must be done.

b)  This is something.

c)  Therefore, this must be done.

 Could time and money be spent on stopping the bullying that some people claim produces school shootings?  Or on building community? Are we reacting emotionally (or politically) to irrational fears —  in other words to symptoms?  How can we stop doing this and get to the root of the issues which (rarely) produce problems?  I don’t have the answers, but it seems to me like we might not be asking very good questions.

The social costs, on the other hand, seem to be real.  “Lockdown” is a term that had its origins in prisons.  Now it’s common parlance in schools, where we are SAFER than ever.  Just look at these figures:

a)  In the U.S., which has ten times the number of students we do (and better statistics, hence my use of data from south of the border), the rate of “serious violent crime” in 2004 was 4 per 1000 students.  That’s down to less than 1/3 of what it was in 1994.

b)  In the U.S., in 1997–1998, at the height of the statistically anomalous spike in violence during the 1990s, the average student had a 0.000065 % chance of being murdered at school.  That’s about 1 in 1,529,412.  And the risk has shrunk considerably since then.

c)  Studies that I have read indicate that the kind of lockdown drill we do, where kids sit or lie on the floor, is the least effective and most anxiety-raising kind – short, that is, of the godawful drills that happen in the States, where people actually roleplay shooters and bloodied victims.

d)  Remember that the risk of your child being a victim of a school shooting is effectively zero.  Yet in 1997, a poll showed that 71% of Americans said it was “likely or very likely” that a school shooting would happen in their community.  One month after Columbine, 52% of parents feared for their kids’ safety at school; five months later, this was unchanged.  Why are we allowing public policy to be driven by fears so badly out of line with the data?

e)  Significantly, media “feedback loops” continue.  Here at home, do you remember the “anniversary” episodes of both Polytechnique and (for a whole WEEK!) 9/11 on the local news?  Even the normally moderately sane CBC was guilty of this.

f)  Politicians also ought to make it clear that schools are safe.  Instead, they don’t take the political risk of appearing not to take the “problem” seriously.  Once again, schools are safe.

g)  In the U.S., vast amounts of money are spent on metal detectors, police presence, and other invasive security measures, draining funds away from books and school events.  This spending also creates an oppressive atmosphere, which is unnecessary, since fewer than 6% of students are reported to carry weapons of any kind (even pen knives) to school, and fewer still would ever use them violently.

h)  The adoption of “zero tolerance” policies towards violence was actually found to INCREASE bad behaviour and dropout rates.  The APA has called for them to be dropped.

i)  Studies also show that schools operate best when they are connected to the community in strong ways.  Treating all strangers as homicidal maniacs does not seem to strengthen community.

j)  1 in 5 parents report “frequently” worrying that their child will come to harm at school.  Another 1 in 5 worry “occasionally”.

k)  In the U.K., parents are so anxious when their kids leave the house that a 2004 poll found that 2/3 of parents experience anxiety WHENEVER their kids are outside the home, and 1/3 of kids NEVER GO OUT ALONE.  The result is that almost half of British kids stare at screens for more than 3 hours a day; child-welfare experts have likened it to raising “battery chickens”.  As far as I can tell, there has been only one school shooting in the U.K., in Scotland in 1996.  It took a total of three minutes from start to finish, and I doubt a lockdown would have made any difference there.

l)  In 2007, a group of 270 child psychologists from across the Commonwealth and U.S. wrote an open letter in a British newspaper, declaring that parental anxiety over “stranger danger” may be behind “an explosion in children’s diagnosable mental health problems”.  They advocated a return to unstructured unsupervised play as part of a remedy.

m)  The usual “better safe than sorry” also fails to take into account the prolonged angst that comes with the feeling of living in a dangerous environment.  This, we do know, causes helplessness, listlessness, and depression, all of which negatively affect learning.  That should come into the equation somewhere if we’re serious about education.
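As a side note on item (b): converting between “1 in N” odds and percentages is a one-liner, and worth having on hand when reading lists like the one above. A quick sketch (the helper names are mine):

```python
# Converting "1 in N" odds into a percentage, as in item (b) above.
def odds_to_percent(n: float) -> float:
    """Probability, in percent, of a 1-in-n event."""
    return 100.0 / n

def percent_to_odds(pct: float) -> float:
    """The N in '1 in N' for a probability given as a percent."""
    return 100.0 / pct

p = odds_to_percent(1_529_412)
print(f"1 in 1,529,412 is a {p:.7f} % chance")
print(f"which round-trips to 1 in {percent_to_odds(p):,.0f}")
```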

 3.  Emergency Preparedness

People say that it’s best to be prepared.  Okay.  That’s a good enough sound bite that the Boy Scouts use it as a motto.  But prepared for what?  How?  Analysis of the few historical incidents relevant to this discussion seems to indicate that lockdown procedures would not have helped during Columbine or similar situations.  As Dan Gardner points out, we are really, really bad as a species at predicting major events.  The Black Swan theory of historical events holds that nearly every significant historical game-changer was unpredicted, and probably unpredictable; that’s part of why they’re so potent.  Not knowing where bizarre, unpredictable events will come from is just part of life, so we really ought to prepare for things that are statistically likely to happen.  Jumping to the worst-case scenario is just feeding the beast.  You should see how much money gets allocated to “security experts” these days.  Their job is to think of the worst, most horrible things that could possibly happen, given egregious circumstances; that’s not balance, and they make money by doing it.  These are the people who helped to kill community by teaching kids to scream “stranger danger!” and run away from people in their own neighbourhoods, when all the good data tell us that children are far more likely to be abused at home.  Nobody really predicted outré events like 9/11 or Columbine; nothing that happened during those bizarre events matched anything anyone had imagined.  The amount of money the TSA spends on groping airline passengers is grotesque, and I’m not convinced it increases security.  Cui bono?, as they say.

We also don’t know whether THIS PARTICULAR reaction to a threat (whether that threat plausibly exists or not) is the most appropriate one; it is not the only conceivable response.  We don’t know that a lockdown drill increases safety or minimises anxiety, as is claimed; in fact, the only studies I’ve read concerning lockdowns have come from the other side of the question, with psychologists asking whether they increase anxiety, particularly in students from high-risk, high-stress backgrounds such as war zones.  Of course we’d all rather be safe than sorry.  But does this particular thing make us safe?  And were we unsafe to begin with?  From what?  Even if we are unsafe, which I don’t yet see evidence for, I want to know, as a teacher and a member of society, that we’re not just sticking bananas in our ears to keep the tigers away.  Remember the duck-and-cover drills kids had to do during the Cold War?  We laugh at those now, and I am sure that we will laugh at our own foolishness in the future.

Not only that, but the idea that this kind of lockdown drill is somehow proactive is, in my opinion, silly too.  Our model remains a post-facto reaction to something that is already out of control, and it involves the violent intervention of paramilitary actors in the form of police SWAT teams.  If there’s one thing we’ve learned about the perception of violence, it’s that it tends to ramp itself up.  If you want to be truly proactive, seek out the roots of violence and address those before they add to the statistics – not that the statistics are even worth worrying about.  The message ought to be that school is the safest place your kids can be: statistically many times safer than the home or the street.  We’re acting as if we know this lockdown stuff works, and as if it’s the only option.  In fact, there are no good studies yet to tell us whether that is true, and to assert that, even if some action does turn out to be required, it must be this action and no other, is illogical.

 4.  Protection of Children

Statistically speaking, the most likely way for a kid to die in North America is in a car accident.  Over a year of riding in cars, a kid runs something like a 1 in 6000 chance of dying.  That’s real.  If we actually wanted to increase safety, we would focus on the daily carnage that is the North American road.  Why are we so blasé about driving?  It is actually quite likely to kill our students; I have several times had to break the news to classrooms full of students whose friends had just died in car crashes.  This is hypocrisy of the highest order.  Do we care about kids’ safety, or do we just care enough to want to look like we’re doing something, without dramatically inconveniencing ourselves in the process?  We can legislate lockdown drills for things that statistically will not happen, and “let the schools deal with the problem”, while we go about our business of driving around in our dangerous cars, killing kids.  In fact, our stupendous ineptitude at risk assessment has created a situation in which more kids are hit by cars driven by parents ferrying their own kids to school out of fear for their safety than by anybody else.  The irony drips.

So let’s be clear here again:  I’m not anti-safety, and certainly not anti-children (thanks, black-and-white thinkers!).  I would like to increase safety by figuring out what that means and how to do it, and I would like to do it without contributing to any corrosion of the society I live in.  If we really want to make a huge difference in student safety, we might think about public transportation, for example, which would get kids out of cars going to and from school.  That would save lives for sure.  And how about resources for anti-bullying campaigns that promote acceptance and even affection between all members of the student body?  That would have saved a life in this city recently, where a young man committed suicide after being bullied for being gay.  Sadly, more kids kill themselves than kill each other; suicide is the second most likely way a young person will die, after car crashes.  There is a danger of complacency when we think of safety in terms of lockdowns and fail to focus on these deeper matters.

Now, the good news is that the OLD reason for kids dying, i.e. disease, is mostly way down, including cancer:  cancer rates are down, cancer in kids is down, and mortality for kids with cancer is down.  This should be a good-news story.  According to Steven Pinker’s newest book, we are living in the least violent, most peaceful, safest, healthiest, longest-lived, most leisured society on Earth since the beginning of humanity.  There are fewer wars, they last less long, and they take fewer lives than ever.  We should be the least anxiety-ridden people ever to walk the planet.  So check it out:  we ARE safe.  We’re the safest human beings who have ever lived.  Anybody who tells you different may be selling something.  Despite our relative Über-safety, though, anxiety levels – particularly among teens – are WAY up.  We have to accept some of the societal blame for this, and for the consequences of teen anxiety and depression; suicide, as I said, is the second most likely cause of death among young people.  We’re safe, but we make up fears to fill the gap, and pass those fears along to our kids.  And we don’t even do that well!  Here is a short list of the kinds of things we could realistically be afraid of, and spend time and resources on, based on statistical danger (again, most of this data is U.S.-specific):

The ‘flu still kills 36 000 people annually in the U.S. (the normal kind, not the swine or bird ‘flus, which frightened far more people than they killed).  Globally, the seasonal ‘flu kills about half a million people every year; the swine ‘flu killed under 20 000, putting its global death toll at less than the normal yearly rate of U.S. seasonal ‘flu deaths.

68 people are wounded by pens and pencils every year.

According to the U.S. Consumer Product Safety Commission, there were 37 known vending machine fatalities between 1978 and 1995, for an average of 2.18 deaths per year.

3 000 people are injured by chairs at work or school.

2 944 people are injured by desks.

Photocopy machines injure 497.

1 241 people are injured by computers.

Clocks injured 74 people in 2001.

212 people were sent to hospital after encounters with telephones.

73 people are killed every year by lightning.

120 people are injured by toilets DAILY!!  (Read Dave Barry’s column or blog for statistics on exploding toilets.)

You have a 1 in 150 000 chance of choking to death every time you eat (not insignificant!  But the biological benefits of having your larynx in this awkward position, and therefore giving you the power of speech, outweigh the risk, even at those odds.  Nature, at least, seems to understand risk assessment!)

Even if you just sit quietly and do nothing, your chances of dying randomly at any moment are about 1 in 450 000, given the entire population of the U.S., which of course includes the elderly and ill.
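One way to make a list like the one above comparable at a glance is to turn each raw count into per-person annual odds. A rough sketch: the 300-million U.S. population is my own round number, and the counts are the ones quoted above.

```python
# Turning raw annual counts into comparable per-person odds.
# Assumption (mine): a U.S. population of roughly 300 million.
US_POP = 300_000_000

annual_counts = {                 # events per year, from the list above
    "flu death": 36_000,
    "toilet injury": 120 * 365,   # quoted above as 120 per DAY
    "lightning death": 73,
    "pen/pencil wound": 68,
}

for event, count in sorted(annual_counts.items(), key=lambda kv: -kv[1]):
    print(f"{event:18}: about 1 in {US_POP / count:,.0f} per year")
```

Run it and the toilets come out thousands of times more “dangerous” than lightning – which is exactly the kind of ranking our intuitions get backwards.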

Those are some things we might worry about if we were more rational about risk.  Instead, we worry about terrorism, child abduction, and school shootings, which are about as unlikely to kill our kids as sharks are.  Okay, I know that when kids come into the picture, our ability to assess risk realistically goes WAY down.  That’s not anybody’s fault; it seems to be a common cognitive bias.  But come on!  Are we adults?  Can we not get over this?

 5.  Lockdowns and Fire Drills

People often compare lockdown drills to fire drills, but I’m not sure how useful the comparison is; the two risks seem to be many orders of magnitude apart.  Still, it’s a fair question: have many schools burnt down in the last century?  It has happened, with tragic results, and I think many modern fire codes, including mandatory drills, came out of such incidents.  Fires in general are quite common, so I doubt the two belong in the same league.  Let’s check the stats:

Good news: deaths by fire have been declining for several decades, though fire is still the third most common way people die at home.  On average in the United States in 2010, someone died in a fire every 169 minutes, and someone was injured in one every 30 minutes.  Only 15% of these fires occurred in non-residential buildings.  So no, the risks are not on the same scale at all.  And I’m afraid that the lockdown drill, although it is modelled on the fire drill, does not work from the same basic assumptions.  In a fire drill, you know what the situation is, and there are time-tested methods of dealing with it efficiently through behaviourist training.  Leaving a burning building doesn’t require training, but leaving quickly and in an orderly manner, while overcoming the common instinct to grab meaningful possessions, means programming counter-intuitive behaviours into people.  That’s what drills are for.  When I was in the Army, we did drills to ingrain habits that countered powerful but dangerous intuitions: when we smelled gas, we were trained to put on our own gas masks before warning our platoon mates – something like putting on your own oxygen mask in a plane emergency before helping a child.  It’s not intuitive, but it saves lives.  In a lockdown, we do not even know what kind of situation the response is meant to address.  With no parameters at all, how are we to know that sitting quietly and waiting for the police is the best strategy?

As for the argument that fire drills and lockdowns reduce panic: there is little evidence that people actually panic when faced with unexpected events.  Hollywood has people screaming and running from everything from terrorists to Godzilla, but in real life this does not seem to happen.  People sometimes do rather stupid things in emergencies, but there is not much evidence for panic of the kind many people describe.  In the one case of a suspected gun at school that I have experienced, there was no panic, and I stupidly entered the building, thinking I could help somehow (it turned out to be a kid with a toy gun).  I was teaching in London during the Underground bombings in 2005, and for nine horrible hours I thought we had lost students.  I was dreading calling their parents; it was awful.  It turned out that things were so outwardly normal in the city that the kids (who had the day off and were out shopping) had no idea anything was wrong, and therefore didn’t check in.

A better comparison than the fire drill might be the “duck and cover” drills of the Cold War, when the baby boomers who now form public policy had to hide under their desks for fear of The Big One, courtesy of the Communists, who may or may not have been a bigger threat than the hypothetical gunmen we’re talking about.  The perceived risk of nuclear attack was always higher than the actual risk, even during the Cold War: keep in mind that there have only ever been two nuclear attacks on anybody, anywhere, neither of them by a Communist regime.  Even the Cuban Missile Crisis, we are now learning, was a long way from the near-annihilation portrayed in the papers.  When I was in the Army, our field manual showed us the response to a nuclear blast, which was to lie down on the ground and point our helmets at the mushroom cloud.  The whole thing is absurd, and is (mostly) remembered by sane people as absurd.  My feeling is that these drills will be too, once we’ve either calmed down or moved on to the next paranoid delusion to grip our fragile minds.  Remember, we’re talking about things that either, statistically, DO NOT HAPPEN, or whose odds are millions to one against, which is more or less the same thing.

 6.  A Lack of Emergency Preparedness Sank the Titanic, Didn’t It?

A criminal kind of insouciance led the Titanic’s owners not to anticipate disaster and not to provide enough lifeboats for all the passengers, with tragic consequences.  They claimed that having lifeboats on board, and holding emergency drills, would cause undue panic.  Doesn’t that prove the need for legitimate drills?  And if not this, what are the legitimate reasons a lockdown might take place?

The phrase “legitimate lockdown”, unfortunately, begs the question: the legitimacy of lockdowns is exactly what is in question here.  Again, what assailants are we talking about?  Who are they?  We are talking hypotheticals, and “what if” is rarely a useful question when assessing risk.  Seriously, who are we afraid of?  Once we identify them, we can figure out whether they’re worth worrying about.

I also don’t see that the analogy to the Titanic is warranted.  Arrogance, not adherence to facts, made the owners under-supply the ship with lifeboats.  I’m not advocating that we do nothing in the interests of security; I’m saying we need to look carefully at what is reasonable, and address risks that 1. actually happen, and 2. we can do something about.  The Titanic is actually a good example of NOT taking reasonable precautions against risk.  It was not at all unlikely that lifeboats would be needed, and if they were needed at all, everybody would need one.  Even in modern times, “Two large ships sink every week on average,” says Wolfgang Rosenthal of the GKSS Research Centre in Geesthacht, Germany.  That’s about 100 every year, and I imagine it was even worse leading up to 1912.  The line about causing undue panic sounds like somebody’s excuse for bad planning.

In fact, one might argue that it was complacency created by newfangled safety measures (the system of bulkheads) that sank the Titanic.  Security measures were in place; an extremely unlikely turn of events rendered them ineffective; and the backup measures that might have saved lives had been neglected out of a feeling that safety had already been addressed.  A complex interrelationship of unlikely events, poor decisions, and human failings sank the ship, and those are things that are extremely difficult to plan for.  That said, the reason we all know about the Titanic in the first place is the phenomenal unlikelihood of the circumstances that led to the disaster, along with the press reaction to it.  It was in the news because it was rare.

 7.  Don’t Spout Statistics:  These Are People’s Lives!

Precisely.  So let’s start thinking honestly and rationally about what puts those lives in danger, and then deal with that effectively.  Once again, let’s get some perspective.  We are, by all accounts, the safest people ever to walk the planet.  But this seems to make us adjust our criteria for risk downward, filling the anxiety gap with ever more trivial worries.  We are the most risk-averse society I have ever heard or read of.  Although zero risk is impossible, a study by a professor at Ottawa U. finds that most Canadians believe it is achievable.  Not only that, they expect the government and institutions to provide it for them (!).  Considering how important risk is to normal cognitive and social development, I find this very troubling as an educator.

There are good examples of how a realistic awareness of risk might have prevented tragedy, in addition to the statistic I quoted above, where the majority of traffic injuries to children are caused by anxious parents driving their precious bundles to school.  So, to everyone who thinks we ought to have a plan to deal with emergencies: you’re right.  I am all for safety.  The question is a matter of finding out what actually makes us unsafe, and then dealing with those things in a way that actually improves safety, while not compromising quality of life more than is necessary.

 Take the TSA, for example.  I don’t think anyone has successfully shown that the horrorshow that is U.S. customs and security adds much actual safety.  It detracts far more from quality of life, dignity, privacy, and common decency than it adds in safety.  And the risk it supposedly addresses, while terrible and frightening, is astronomically remote.  Terrorism is down, too, if anyone’s wondering (with the exception of within the state of Israel).  For taking a plane to be even close to as dangerous as driving (which we all do, and NEVER seem to question, despite the recent seeming glut of people being mowed down by cars in the city in which I live and work), terrorists would have to hijack and crash a plane a week for months, AND you would have to get on a plane daily.

 In fact (getting to the point), after 9/11, many people cancelled flights out of fear of crashing planes, and got into cars to take their trips instead.  Someone crunched the numbers and worked out how many people died unnecessarily in car crashes as a direct result of that decision: it turns out it was close to 1600, or about half the total life cost of 9/11, including the terrorists.  That’s six times the number of people on the planes that crashed.  That’s 1600 people who tried to make the right decision, but are dead because they didn’t take the actual facts into account.
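For readers who like to check the arithmetic, the comparison works out in a couple of lines. A rough sanity check only: the death-toll and passenger figures below are my own assumptions drawn from public records (they are not stated in the post itself).

```python
# Back-of-envelope check of the post-9/11 road-death comparison.
# Assumed figures (from public records, not from the original post):
#   ~2,996 deaths in the 9/11 attacks, including the 19 hijackers
#   265 people aboard the four hijacked planes, including hijackers
#   ~1,600 estimated excess road deaths in the year that followed

total_911_deaths = 2996
people_on_planes = 265
excess_road_deaths = 1600

# "about half the total life cost of 9/11"
fraction_of_total = excess_road_deaths / total_911_deaths
print(f"Excess road deaths as a fraction of the 9/11 toll: {fraction_of_total:.2f}")

# "six times the number of people on the planes that crashed"
multiple_of_planes = excess_road_deaths / people_on_planes
print(f"Excess road deaths vs. people aboard the planes: {multiple_of_planes:.1f}x")
```

Under those assumptions the fraction comes out to about 0.53, and the multiple to about 6.0, matching both of the claims above.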

 It just seems that instead of reacting to risk, we could respond to it, and try to make sure that what we do to address it doesn’t either miss the boat or make things worse.  Safe is good, but we have to define our terms.  Driving feels safer than flying, because we think we’re in control, but it is one of the single most dangerous activities we can partake in, unless we’re deep-sea divers or active-service paratroopers.  I’m all for CPR and first aid.  The chances of needing those skills are actually quite high: in the U.S., heart disease killed 700,142 people in 2001.  If heart attacks SEEM rare, that’s part of our perceptual blind spot.  Things that seem rare or safe are often quite dangerous, and things that seem (emotionally?) to be dangerous are often not worth the angst.

 8.  Risk and the Media

 Much of our risk aversion comes from the “if it bleeds, it leads” mentality of media coverage.  A lot more of it comes from lawyers.  Statistically, I would bet that your chances of being sued over some improbable event greatly outweigh the chances of the original event happening in the first place.  Weirdly enough, though, I have also read articles suggesting that the number of frivolous lawsuits actually brought to court in North America is much smaller than most people assume – there seems to be some evidence that insurance companies actively encourage a sense of the overwhelming prevalence of frivolous lawsuits in popular culture so that they can justify higher rates.  On top of that, a major beneficiary of the Cult of Fear is the horde of manufacturers of Safety Products, who prey on irrational paranoia.  Free Range Kids details some of the more egregious examples of a manufactured crisis with expensive manufactured cures: baby kneepads, for instance, for crawling tots.  As if thousands of generations of infants had evolved to crawl “unsafely”, just waiting for the right product to correct nature’s deficiencies.  Sigh.  So, in answer to my earlier question of Cui Bono: “too many dubious sorts of people”.

 One of the major factors affecting our minds’ perception of risk is what Daniel Gardner, who wrote a book on the subject and spoke eloquently at a lecture I attended a couple of years ago, calls a “feedback loop” generated by media.  The original noise is picked up and amplified in a kind of echo chamber, and this escalates the brain’s response to threat in ways that would be impossible without the media’s involvement.  Reporters are people too, and their risk assessment tools are just as terrible as the rest of ours.  Their choices, though, have wide-reaching social effects.

 In Canada, where I live, there was a news story recently that, though local in nature, became a national one: a doctor at a clinic in Ottawa had improperly sterilised her colonoscopy equipment, and around 5 000 ex-patients were being contacted by letter, informing them of a remote risk of infection by hepatitis or HIV.  The odds of HIV infection were more than a billion to one against, which so far exceeds the “de minimis” rule that it rather shocked me that it was even mentioned.  The media went nuts, probably because of the shock value attached to AIDS-related material.  The authorities had hesitated to contact the media at all about the matter, perhaps knowing what kind of a zoo it would become.  When the media got hold of that fact, of course, it was played to the hilt.  It was made to look like a conspiracy; at the very least, the mostly risk-uneducated public felt that they were being patronised.
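To see just how far below any “de minimis” threshold that risk sits, consider the expected number of infections across the entire recall. A minimal sketch: the per-patient probability is taken from the billion-to-one odds reported in the coverage, and the rest is arithmetic.

```python
# Expected number of HIV infections across the whole recall cohort,
# assuming the reported odds of roughly one in a billion against, per patient.
patients_contacted = 5000
p_infection = 1e-9  # "more than a billion to one against"

expected_cases = patients_contacted * p_infection
print(f"Expected infections across all {patients_contacted} patients: {expected_cases:.1e}")
# About 5.0e-06 — five millionths of a single case, for the whole cohort.
```

In other words, even if every one of the 5,000 letters were mailed a thousand times over, the expected number of HIV infections would still round to zero.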

 The next newsworthy item to come out of this story was that clinics around the country were worried about cancellations of important colonoscopic procedures by people who had heard the news item and lost confidence in the procedure, thereby creating a real risk of undiagnosed conditions that could eventually prove fatal.  The situation was somewhat reminiscent of the anti-vaccine movement, which puts thousands of people at risk based on ignorance, fear, and bad science.

 I’ll add this: information is not automatically a good thing, though I definitely want as much of it as possible in order to make decisions.  How we use that information to make decisions is just as important.  I partly agree with those who say that the withholding of information can seem patronising, but since the medium is the message, how that information is disseminated and presented is enormously influential.  The Feedback Loop that Gardner describes is an unfortunate, but real, byproduct of the way media produces stories.  And until schools’ curricula start to focus a lot more (as we have hopefully begun to do) on things like cognitive blind spots, logical fallacies, analysis of information, and correcting lazy thinking, and until politicians’ use of language is held to a higher standard, how people will react to risk is going to remain a question mark.


Ten Things Our Grandparents Got Right #5: They Didn’t Treat Teenagers Like Infants

Picking up from my last post’s ellipsis, I feel I need to address the infantilisation and outright ageism displayed by adults toward teenagers. This rather repugnant reincarnation of genetic determinism (for which there is no good evidence, and which Stephen Jay Gould spent much of his career combating) is particularly dunderheaded when you take into account the plasticity of the brain, just now beginning to be understood. “Don’t talk for more than ten minutes on any subject”, we were told in teachers’ college, “because the adolescent brain has an attention span that tiny, and there is nothing anyone can do about it”. The contradictory complaint that attention spans are shrinking, often voiced by the same people, never seemed to present a serious challenge to the accepted wisdom. Surely if attention spans can shrink, they can also grow.

Under the deterministic model, adolescents’ potential is viciously undercut, and a condescending attitude of pandering to existing biases, tastes, knowledge, interests, and capabilities is adopted, while real change or growth is almost completely negated. “Teenagers are lazy and surly because of physiology, or perhaps hormones”, people say, though there is absolutely no evidence of this being true either in the historical record or in other cultures existing today. And the list goes on: teenagers are incapable of making rational decisions because of brain chemistry, not because they are systematically denied the opportunity to practise making good decisions on a daily basis. Teenagers are not punctual because of circadian rhythms unique to them, and not because of poor sleep and nutrition habits that are actively encouraged by our society. Here’s an article that actually suggests that teens’ supposedly biologically based inability to process risk effectively might be the result of sleep deprivation! Knowing the effects of sleep deprivation on the human psyche (I am an ex-soldier), it surprises me that nobody made that link earlier. I have heard more than once the lament that “if we really wanted teenagers to pay attention, we wouldn’t hold classes before ten o’clock”. Recently, this has been challenged, the only question remaining being how such a parochial view could have survived this long. Anyone who travels outside of North America or who has read history (think Alexander the Great, Joan of Arc, Augustus Caesar, Mary Shelley, Louis Braille, et al.) will shake his head at such statements of the inevitability of “the teenage brain” and its limitations. The list of historically significant teenagers is as long as my arm, at least until about the middle of the twentieth century, when they suddenly became incapable of surviving the most basic of situations, such as wearing hats that light up.
This photo is taken from the wonderful Free Range Kids blog, which also has so many first-hand anecdotes of infantilising in America that it sometimes makes me afraid to read it.

Fourteen-year-olds would burst into flames.

For starters, the concept of the teenager as a separate class of individual, or a distinct stage in life, is a very recent coinage – nobody used the term before 1941. Now the invention of the ‘tween’ is pushing it even further: the word is attested only since 1988. Theories vary as to exactly what purpose the invention of such a repugnant, incapable figure was supposed to serve, but John Taylor Gatto, Neil Postman, and Dr Robert Epstein have some suggestions.

Postman, in The Disappearance of Childhood, argues eloquently that childhood in general is a socially constructed phenomenon. He points out that the idea of childhood, a time of life in which one is supposed to be governed by a sense of shame and protected from such things as knowledge of adult sexuality, was a product of the end of the Middle Ages and the rise of the printed word. A boy of seven years old was, for all practical purposes besides “making love and war” (p. 15), capable of every meaningful act in Mediaeval society. He could speak and he could labour, and in a predominantly oral culture, these were all that were needed for maturity and inclusion in the social structure. There was no need or possibility in a Mediaeval culture for the keeping of secrets; privacy was hardly a concept at all, and close quarters and the lack of any need of reading skill made knowledge a general commodity.

But when the written word became the new means to record, keep, and guard the culture’s knowledge base, institutions like educational systems were needed to induct the child into the world of adults. This effectively stretched the time of childhood from seven years to the end of schooling. As Postman points out, before the 16th century, there were “no books on child-rearing, and exceedingly few about women in their role as mothers […] There was no such thing as children’s literature […] , no books on pediatrics. […] Paintings consistently portrayed children as miniature adults […] The language of adults and children was also the same. There are […] no references anywhere to children’s jargon prior to the 17th century, after which they are numerous.” (18). Children did not go to school, because there was nothing to teach them. But now the definition of childhood changed, from one based on linguistic incompetence to one based on reading incompetence. Instead of just becoming an adult by ageing, children had to earn adulthood through education – and the European states invented schooling to accomplish this new process. Childhood, as Postman notes, became “a necessity” (36).

Later, with industrialisation, threats to this newfound idea of childhood emerged. The new urban demand for factory and mine workers supported the “penal” aspects of schooling to break the will of the child and accustom him to the routine labour of factory work. In response, child labour laws were introduced, enshrining the concept of the sacrosanct nature of childhood. Though Postman sees the growth of elementary education after 1840 as evidence of the triumph of the notion of childhood over industrial capitalist concerns, J.T. Gatto sees it somewhat differently.

Gatto, an award-winning teacher who speaks now against institutionalised education, argues that the modern American education system never outgrew its penal origins, and in fact goes further, saying that the system is set up more or less deliberately to bring about the class of uncritical, bored, dissatisfied consumers that is important for the corporate model of capitalism to flourish. Children were being actively groomed by industrial influencers of education systems to become not citizens or human beings, but “human resources”, to be moulded to fit something called a “workplace”, “though for most of American history American children were reared to expect to create their own workplaces.”

The subdivision of childhood into adolescence, and now, pre-adolescence (the “tween” phenomenon) is something that Robert Epstein has written on. Epstein, in his book The Case Against Adolescence, takes up the argument where Postman and Gatto leave off, during the industrialisation of America. He sees the creation of the adolescent as a kind of benevolent but destructive side effect of the social reforms that were reacting against the admitted evils of the Industrial Revolution with regard to children’s rights. The creation of institutions such as child labour laws, compulsory education, the juvenile justice system, and the age-specific restriction of “adult” activities such as driving, drinking alcohol, and smoking had, according to Epstein, the effect of isolating the child’s world from the adults’ almost totally. Teenagers are confined to a mostly (by definition) developmentally incomplete peer group, and their dependency is extended by more than a decade before they are required to enter the adult world after school – this despite the fact that their sexual maturity and mental readiness for such a transition are evident from a much earlier age. In a study, Epstein found that teenagers have ten times as many restrictions placed upon their behaviour as normal adults, and twice as many as felons and soldiers! The rise in the incidence of such restrictions exactly parallels industrialisation, and jumps significantly after World War II.

Epstein picks up an argument from Postman, and suggests that the studies that purport to “show” the biological cause of the supposedly innate surliness and incapacity of teenagers are flawed, in that they show only correlation, not causation. In fact, given the plastic nature of the brain, I myself would expect to find that such correlations are backwards, meaning that the social restrictions on teen behaviour are to blame for the state of their brains, not the other way around. The argument that brain scans “prove” the innate uselessness of teenagers in such areas as risk assessment or impulse control sounds to me about as useful as “scanning” the musculature of a teenager who has never lifted weights, and declaring him “unfit” and biologically incapable of ever being an athlete.

In fact, the list of famous “characteristics” of teens proves to be mostly made up. Margaret Mead points out in her studies of adolescents in Samoa that the traditional “storm and stress” of North American teenage development is nowhere to be found in that culture, or any other preliterate culture. There is no term for adolescence in the majority of such societies. Even the list of undesirable teen behaviour in our own society, summarised by Philip Graham as “identity confusion, extreme moodiness and high rates of suicide, violent discord with parents, aggressive behaviour, intellectual and emotional immaturity, risk taking, and sexual promiscuity arising from the raised secretion of sex hormones” has been shown to be common to less than 20% of the age group in question. Hardly a useful list of descriptors, then.

And as to other biased assumptions about teenage behaviour, such as the idea that they are addicted to, and are misusing, technology? It turns out we ought to have a good look in the mirror here too: a study found that adults abuse technology at a higher rate than their kids.

Why are these ideas so pervasive and so tenacious? The original study of adolescence was done in 1904, by G. Stanley Hall. He observed the turmoil on American streets caused by industrialisation and massive waves of immigration, unsupported by proper social structures. Concluding from this that all adolescents necessarily exhibited the nasty characteristics mentioned above, he drew on the now-long-debunked theory of biological “recapitulation”, in which the development of the individual mirrors the development of the species. In that model, adolescence “recapitulated” a savage, pre-civilised phase of the development of Homo sapiens, and it would be expected that such a period would bring turmoil with it. He borrowed the German Romantic idea of Sturm und Drang and applied it universally to all teens, claiming biology as the cause. Though the field of biology has long since abandoned such theories, the general public has not kept pace.

Of course, I have also found through my years of teaching that what is expected of a person is generally what one will get. As Eliza Doolittle says in Shaw’s Pygmalion,

“You see, really and truly, apart from the things anyone can pick up (the dressing and the proper way of speaking, and so on), the difference between a lady and a flower girl is not how she behaves, but how she’s treated. I shall always be a flower girl to Professor Higgins, because he always treats me as a flower girl, and always will; but I know I can be a lady to you, because you always treat me as a lady, and always will.”

Sadly, we live in a culture where the treatment of young adults is infantilising (do the test here!), demeaning, controlling, and stultifying. Perhaps it’s not entirely adults’ fault, though; as Postman points out, adults are the result of the same process of education that we subject our children to. Whereas once literacy was the dividing line between childhood and adulthood (an idea enshrined in the notion of the free state during the American Revolution), industrialisation also brought with it technologies that made actual familiarity with the written word obsolete. The telegraph, radio, television, and the Internet have taken over where literacy left off, producing generations of adults who have had unfettered access to information, but no sequential, age-appropriate introduction to discerning its meaning. The very definition of childhood as an idea, rather than just the biological stage of individual development it is now conceived to be, depended on a slow induction into greater knowledge through increasingly complex mastery of literacy. Now, who actually reads anything by people like Barack Obama, Stephen Harper, George Bush, or Ronald Reagan? Would they be rewarded if they did? Though our Canadian society has succeeded in producing generations of functionally literate people, we are increasingly reverting to a Mediaeval-style oral culture, in which even people who can read generally do not, and most of those who do cannot do so very well. The line between childhood and adulthood is blurred, and brain scans show adolescent development continuing well into the mid-twenties of North American subjects — “coincidentally”, about the same time as many post-secondary students are leaving school. I would dearly love to see brain scans of people other than the Westernised college students who are the typical subjects of such studies. My intuition is that they would be vastly different at comparable ages.

The assumption that the tastes and interests of a teenager are equally fixed, never to grow, was made clear to me in a textbook on English grammar much in use several years ago, in which every sentence, in order to be palatable to what publishers assumed teenagers’ interests to be, had to have something to do with skateboarding. To me, this attitude is no better ethically speaking, and has just about as much science behind it, as the old idea of the genetic inferiority of slaves. The problem with bandwagoning is that it’s difficult to get off the wagon, or out of its way. I once had a principal (a fellow with a science background, who ought to have known better) who hawked these unpleasant wares at every staff meeting and P.D. day, much to my annoyance. Years later, after his retirement, he admitted to me that he had known full well all along that it was bunk, but claimed that he found it a useful tool for management. He told me that “we have to work with something” – a foolish imperative that always makes me think of the show Yes Minister, where it was put in syllogistic form:

1. Something must be done.
2. This is something.
3. Therefore, this must be done. 

Teaching is often a surreal experience.

We were presented in Teachers’ College with the interesting model of Howard Gardner’s Multiple Intelligences, and told that we must adjust our teaching techniques to all of them, regardless of their relevance or applicability, because “students can only learn in certain ways”. Every lesson had to touch on as many of the Intelligences as possible, and administrators’ evaluations of teachers would be based on a handy checklist and cursory observation. Imagine trying to incorporate kinaesthetic learning into a lesson on punctuation or grammar! This led to all kinds of silliness, like hopping up and down to simulate semicolons, from which the better teachers miraculously managed to salvage some memorable learning experiences. Since then, Gardner’s theory has come under closer scrutiny, and has been largely debunked, at least in the absolutist terms under which it was adopted in schools. Here’s a quick video outlining the basic flaws in the theory:

Far from being deterministic learning “styles”, they appear to be mere preferences, and there is no good evidence that pounding a round lesson into one of its square holes does anything to help learning at all. Instead, a good teacher will understand which kinds of tools are applicable and effective, given the nature of the ideas or skills being taught. In other words, according to Professor Daniel Willingham, “While there’s little evidence that matching one’s teaching style to one’s students’ learning styles helps them learn, there’s much stronger evidence that matching one’s teaching style to one’s content is wise.”

Why this obviously silly meme has stuck around for so long, and had such an impact on systems of education is a bit of a mystery, but I have the following observations, which might shed some light: The first half of the equation comes from good intentions, I think: most teachers or educators feel a calling and a social responsibility to their profession. We’re often caring to a fault, and this is an example of the ‘fault’: our predisposition to believe that our job involves finding the “hidden learner” in every student blinds us to the lack of evidence for this particular incarnation of that impulse. A kind of Confirmation Bias, if you will. The idea of Multiple Intelligences (which is a description of ability, not of style), bent slightly to suit our notion of being teachers who care deeply about individual students’ learning, is powerfully appealing. We want to believe in it, because it reinforces pre-existing beliefs that we have brought to our profession, but regardless of how admirable those beliefs might be from an ethical standpoint, if they do not fit the actual facts, they ought to be altered or abandoned. Recently, a study by Daniel B. Klein of George Mason University uncovered what he thought was a type of intellectual bias in Liberal-minded respondents to a survey. When it was pointed out to him that the survey he had provided might be biased, he re-wrote it, and found bias in those of Conservative bent. Then he wrote with some humility and intellectual frankness about his own Confirmation Bias – two attributes that my profession could certainly benefit from.

The second part of the reason this meme is so prevalent in schools, in my opinion, is not that it is correct, nor that it is touted by teacher ed. texts (Daniel Willingham has looked at the course syllabi of teacher ed. courses and found no evidence of it being ‘officially’ sanctioned), but the management models used to evaluate teaching ability. When a principal is charged with evaluating the prowess of the teachers in his or her school, and has to report those findings upward to his or her own “managers”, the same silliness happens as when we evaluate our students: we want to fall back on measurables. It’s a lot easier to carry a clipboard into a teacher evaluation and tick off “yes” or “no” to a question like “Does the teacher address the students’ learning styles individually?” than to make complex judgements about a very fluid and complicated problem like evaluating “good teaching”. So it’s partly a question of efficiency, just as being forced by the requirements of reporting student learning (a vastly complex and mostly abstract concept) in terms of percentile grades results in us asking stupid questions on tests that focus only on measurable, concrete facts, rather than on the rather more important aspects of higher-level thinking. I once asked a question on a test that required students to place in order several events from a novel we were studying in class: something that assessed both their memory of the details of their reading and their understanding of the cause-and-effect relationships between the events. I was forced to abandon the perfectly valid question because it is essentially ungradeable – as soon as one event is out of order, a domino effect takes place and makes it impossible to give a numerical evaluation of how close to being ‘right’ the student was.
If anecdotal comments, or even a conversation, were the method of relaying to a student the quality of their understanding, I wouldn’t have lost a potentially valuable assessment tool. A managerial model of reporting quantifiables upward on a chain of command, ultimately to a political bureaucracy, just does not work when dealing with something as complex as human learning.

Partly, though, it’s more insidious than just the self-perpetuating efficiency of a system. Sadly, the two halves of the equation often come together in unsavoury ways: when the principal asks “Is the teacher hitting enough of the learning styles in his lessons?”, the implied subtext is often “Is the teacher caring enough toward his students?” This puts a lot of pressure on the meme to become accepted, or at least unquestioned, in teacher circles, at least when administration is present. It’s an unspoken type of ad hominem: between the lines is the question, “Do you really care about children?” I think this is how a lot of silly educational buzzwords preserve themselves, actually: they’re tied to teacher performance reviews. A lot of it is just lip service, as is suggested by the number of teachers who will question the meme in private conversation, but it still has an effect.

I am calling here for a greater intellectual and moral courage on the part of teachers to stand up against policy that is not evidence-based.  Here in Canada, under a government that is apparently actively anti-evidence, this is a tall order.  But we’ve got to start.


Things Our Grandparents Got Right #4: They Didn’t Try to Educate Us for the “Future”

Part Two

 In the last post, I outlined the basic futility of trying to educate our children (“train” them, I suppose would be a better word) for a specific set of skills that would be useful under specific economic circumstances in the future.  I entered the job market, in my mid-twenties, at the very tail end of the 20th century.  My elementary school education, during the 1970s and 80s, could not possibly have prepared me for a job market within the context of a recession that nobody had predicted, and in which the major emphasis was on jobs in fields that had not yet been invented when I was going to school.  On top of that, several years later, the I.T. bubble burst, and all the jobs that were supposedly available to those with a very specific skill set suddenly disappeared.  Nobody really predicted that one, either.  In fact, there is good reason to believe that nobody will ever predict economic futures.

Employers, for their part, have been making it plain for years that it’s less important what specific software skills prospective employees come to them with than what skills in areas like problem solving, creativity, social adaptation, and communication they bring.  Training can always be done (and in my opinion, should be done, at the expense of employers, not the public) in situ for whatever tasks employees will be asked to perform.  The ability to learn quickly and efficiently from that training, by being punctual, polite, open-minded, critical, creative, and proactive is what makes prospective employers drool.  I’m not somebody who believes that the purpose of education is to provide employers with workers, but if you are, then it should matter to you that by all accounts, employers aren’t happy with the quality of worker they’re being given.    It seems that most of them would trade ten technically skilled applicants for a single well-spoken, well-socialised, clear-thinking applicant who can adapt and learn quickly.

 The problem with the future, as I’ve said, is that nobody knows what it will look like.  Its inevitability, though, makes us fill the yawning blankness in front of us with all kinds of hopes and fears – all of which come from our own past experiences, projected upon the future in a kind of collective psychological paroxysm of denial.  The future becomes a canvas upon which all of our present anxieties work themselves out in public.  There are some problems that attend the belief that we actually can educate kids for the future, though, and some of them aren’t as obvious as they should be.

First, there’s the danger of disregarding good ideas based on their novelty in favour of something that is comfortable, but has no good evidence to support its use. The unconscionable resistance of schools to the increasingly large body of evidence suggesting that grading not only does not assist the process of learning, but is actively detrimental to it, has gone on far too long. This is an enormous subject that really deserves a whole post to itself, which I will be glad to provide sometime later. It is certainly possible to view the past with rose-coloured glasses, and ignore real harms done by practices which have the force of habit, but not of reason. Often, the desirability of the practice in question is questioned even by its proponents, but urged anyway on the assumption that if it was bad enough for one generation, it ought to be bad enough for the next. Sometimes this is accompanied by what Alfie Kohn has called the “BGUTI” clause, or “Better Get Used To It”, wherein the future is assumed to be filled with horrible arbitrary uses of power, to which we must train our children to submit. This does not seem to me to be a noble ambition for our children.

Second, there is the danger of using this “Golden Age” of education disingenuously, as a way to discourage real progress.  Educational reformers, especially those who are advocating changes based on conserving parts of systems of education that have been proven to work well, are accused of “living in the past” and stifling innovation through their delusion.  Again, Alfie Kohn provides us with examples of the kind of “educational reform” sweeping through his nation, the United States, detailing how they are often merely disguised conservative movements, based in ideology rather than facts, and too often designed to line the pockets of those who put them forward.

Third, there is the danger of defining the ‘future’ in terms that are too narrow by far.  Too many educators see the “big picture” of the future of high school students as the end of their four-year stint with us, and the awarding of the diploma.  After all, “studies have shown” that kids without a high school diploma are more likely to be economically and socially disadvantaged later on, right?  This is often taken to be the legitimate outcome of being deprived of the benefits of the type of education we offer, and not the result of rampant credentialism.  I always try to educate with the long-term goal of producing a thoughtful and mature human being who will continue to think and learn as long as their brains hold out.  And since there seems to be good evidence that Alzheimer’s Disease can be mitigated by strong habits of thought, I’m happy to consider the long term to be roughly “the rest of their natural lives”.  And maybe longer, if they teach their kids healthy habits of mind.

Fourth, there’s the danger of throwing the baby out with the bathwater.  All of the posts about our grandparents’ “outdated” methods and ideas address this issue.  Certainly, they did a lot of backward, even harmful things in the name of education (many of which I abhor, and will address in later posts), but that does not mean they hadn’t also found practices that actually worked.  Their nearly obsessive interest in penmanship, for example, though perhaps emphasised to the detriment of other aspects of learning, did have benefits that we miss, now that it’s gone from the curriculum.  Everybody has been through some sort of schooling, and everyone has had bad experiences, bad memories, and bad teaching at one point or another, all of which people insist on telling me about in detail the instant they learn that I am a teacher.  Learning has always been hard work, and ever since Shakespeare wrote about the “whining school-boy, with his satchel / And shining morning face, creeping like snail / Unwillingly to school” (As You Like It, II.vii.145-47), we’ve had to bear the brunt of everyone’s residual educational and social angst from high school.  The past, no matter how awkward, stressful, or frustrating, was not all bad, and it is worth preserving the better parts of what our ancestors came up with over many centuries of research and development.  This definition of conservatism in education I am all for.  But how, one asks, can we determine which parts to preserve and which to discard?  I would answer that anything demonstrated to be harmful or detrimental to the process of learning ought to be done away with as quickly as possible.  Anything that can be shown to reduce or kill hope outright, or poison students’ innate curiosity and desire to learn, ought to go.  Anything that develops humane perspective, curiosity, and habits of mind that allow learning to be indulged in as a pleasurable (though not effortless) activity for the rest of one’s life ought to be encouraged at all costs.  Encourage flexibility, and discourage rigidity of thought and ideology; otherwise, that great unknown future will wallop our kids when it finally shows up in a form that nobody anticipated.

Fifth, there’s the concomitant danger of bandwagoning: jumping onto every new idea or educational movement uncritically, and for the sake of novelty itself.  Talk to any teacher who’s been teaching more than a few years, and they’ll tell you some stories about this one.  Our profession is awash in buzz-words, and though the words themselves sometimes show up in different forms, the range of ideas they represent is surprisingly limited.  Often they come back in roughly ten-year cycles, re-branded and as fresh as a bad penny (to mix a metaphor).  From the late 1990s until a few years ago, one of the buzz-words you’d hear everywhere, presented as a strange hybrid of Policy, Gospel, and “Best Practice” (the latest euphemism for “toe the line”) by administrators everywhere, was the astonishingly silly phrase “Brain-Based Learning” (is there an alternative organ that could be substituted?  It’s only a matter of time before “spleen-based learning” is all the rage).  Here’s a quick video detailing the level of skepticism with which we need to approach this concept:

All of which brings me to the last point:

Sixth, and finally, there’s the danger of treating the future (or your limited understanding of it) as inevitable, based on physiology.  This is an important enough topic that it deserves its own entry.  To be continued . . .


Things Our Grandparents Got Right #4: They didn’t try to educate us for the ‘future’.

  Part One

This is kind of counter-intuitive.  The very process of educating children seems to rest on the idea of preparing them to meet their future.  The whole concept presupposes that the end of the process will create an educated member of society, many years down the road.  That part is fine:  of course we want to have a purpose in education, and it seems reasonable that it has something to do with kids becoming adults over time, which kind of implies the involvement of the future.  The problem comes when we start to think we know what that future will look like.

Ever wonder why so many science-fiction movies set in the future are either utopian (rare) or dystopian (far more common)?  And have you noticed that all the fashions and hairstyles in these movies are just reflections (usually shinier, or slightly more ridiculous) of styles in vogue at the time the movie was produced?  And when the movie is set in a year we’ve already lived through, how utterly unlike the reality of that time it is?  Further, have you noticed that these films are usually good indicators of the varieties of social angst current when they were made?  How many “Alien Invasion” movies from the 1950s mirror Cold-War fears of foreign infiltration and invasion?

Who knew that those dresses would still be in style 400 years later?

It shouldn’t really be a shock to us that we can’t read the future.  What’s a lot more shocking to me is how often we act as if we can, and how infrequently we learn from being proved wrong.  Dan Gardner, in his book Future Babble, exposes the degree to which relying on experts, counter-intuitively, actually renders us less able to predict and adapt to the future.

Gardner makes reference to studies that have been done over the years to try to verify the accuracy of expert predictions about the future.  This is, of course, a separate question from the amount of knowledge about a certain subject (gained from studying the past) any given expert possesses. The question is, “Does having a lot of knowledge about a particular subject increase your chance of being right when making predictions about the future of the area of study?”  Some of these studies have been conducted by the media (admittedly not very scientifically).  Here’s Gardner: 

“In 1984, The Economist asked sixteen people to make ten-year forecasts of economic growth rates, inflation rates, exchange rates, oil prices, and other staples of economic prognostication.  Four of the test subjects were former finance ministers, four were chairmen of multinational companies, four were economics students at Oxford University, and four were, to use the English vernacular, London dustmen.  A decade later, The Economist reviewed the forecasts and discovered they were, on average, awful.  But some were more awful than others:  The dustmen tied the corporate chairmen for first place, while the finance ministers came last.” (p.21)

Other more recent examples have also come from the press.  If anyone remembers the famous accuracy of Paul the Octopus, the cephalopod who correctly predicted the outcome of all seven of Germany’s matches at the 2010 FIFA World Cup, plus the final, they might be amused to hear of other animal ‘predictions’ that put our purported abilities to shame:  Chippy the chimpanzee embarrassed famous American pundits by choosing flashcards indicating political outcomes at a higher rate of accuracy than the experts, two months running.  In the field of meteorology, Wiarton Willie, the groundhog who predicts the onset of springtime every February second in Ontario, claims on his personal website to be accurate 90% of the time (though a larger study puts groundhog predictions in general over the last 40 years at about 39% accurate).  National weather bureaus claim about 60% accuracy on long-range forecasts, though many think this is too high.  Certain ancient traditions of haruspicy are still being practised:  a pig farmer in North Dakota who examined the spleens of his pigs to predict the weather boasted of an 85% success rate.

None of these, of course, point to any magical powers possessed by animals.  (A better candidate for a claim of that sort is perhaps to be found in the case of the Tsunami of December 2004, in which more than 150,000 people were killed, but relatively few animals, who anecdotally seemed to know that something was about to happen and fled).  At best, they indicate that when a series of choices is made more or less randomly, the accuracy rate is higher than when experts make them.  This is embarrassing enough, but to find out that one’s chances of being right actually decrease when one’s confidence and expertise increase is downright humbling.

Philip Tetlock, a psychologist at the University of California, conducted the largest experiment on the subject over a number of years, after the spectacular failure of anyone to predict the fall of the Berlin Wall in 1989 and the subsequent collapse of the Soviet empire.  He studied 284 experts in politics, economics, and journalism, and compiled 27,450 predictions about the future.  Conclusion:  the experts would have been beaten by a “dart-throwing chimpanzee”.  Some, however, were a lot worse than others:  the worst experts would have vastly improved their accuracy by guessing randomly.  Tetlock discovered that these experts’ backgrounds and education didn’t explain their inaccuracy; their mode of thought did.  They were particularly uncomfortable with complexity and uncertainty.  They worked from an ideology and were extremely confident that it was correct.  Tetlock called these experts “hedgehogs”, after the fragment of the poem by Archilochus:  “The fox knows many things, but the hedgehog knows one big thing”.  The foxes, on the other hand (the experts who had no preconceived ideology, but worked from data, synthesising multiple sources and self-critically correcting for error as they went), did much better, and did manage to beat a coin flip.  Much has been made recently of studies that appear to show differences in the thinking of conservatives and liberals that mirror these broad categories:  conservatives tend toward hedgehoginess, and liberals to vulpine leanings.

Interestingly, hedgehogs who are more ideologically extreme are even more likely to be wrong, and their accuracy actually declines when they know a lot about their subject, as well as when they predict something over a long period of time.  As Gardner puts it, the lesson is that “if you hear a hedgehog make a long-term prediction, it is almost certainly wrong.” (27)  And, of course, the problem is that we get most of our predictions from hedgehogs.  They are on TV and in the news all the time:  they are confident, educated, knowledgeable experts who are willing to say bold, loud, easy-to-understand things about the future.  No media source wants to put foxes on TV:  foxes tend to say things like “It depends,” or to discuss things at length, giving a nuanced opinion.  And, in the end, though they do much better than hedgehogs, foxes are no prophets:  the world is fundamentally complex and unpredictable.  You can beat even a fox at predicting the future by simply predicting that “nothing will change”.  The things that are predicted almost always turn out wrong (remember Y2K?  The paperless office?  The list is huge), and the things that end up happening, such as the collapse of Eastern-Bloc Communism, the Arab Spring, the housing crisis of 2008, and 9/11, leave pundits scrambling to rationalise all the reasons they hadn’t seen anything coming.
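The “nothing will change” heuristic is really just base-rate arithmetic: if the status quo holds most of the time, always betting on it beats both a coin flip and a confident contrarian.  A toy simulation sketches the point (the 80/20 odds and the three strategies are my own invented illustration, not drawn from Gardner or Tetlock):

```python
import random

random.seed(42)

# Toy model: each year the status quo holds with probability 0.8,
# and a surprise upheaval happens with probability 0.2.
# (These numbers are invented for illustration only.)
P_CHANGE = 0.2
YEARS = 10_000

def accuracy(strategy):
    """Fraction of years in which the strategy's prediction was correct."""
    hits = 0
    for _ in range(YEARS):
        actual_change = random.random() < P_CHANGE
        if strategy() == actual_change:
            hits += 1
    return hits / YEARS

def coin_flip():     # the dart-throwing chimpanzee
    return random.random() < 0.5

def bold_pundit():   # the hedgehog: always predicts upheaval
    return True

def status_quo():    # the humble base rate: always predicts "no change"
    return False

print(f"coin flip:   {accuracy(coin_flip):.0%}")    # ~50%
print(f"bold pundit: {accuracy(bold_pundit):.0%}")  # ~20%
print(f"status quo:  {accuracy(status_quo):.0%}")   # ~80%
```

The exact percentages wobble from run to run, but the ordering doesn’t: the bold pundit, like the extreme hedgehog, is punished precisely in proportion to his confidence.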

So the hubris of predicting things like what the “economy of the future” will be is really just an arrogance born of fear:  we want to educate our children to face what is now, has always been, and will always be, an uncertain future.  All kinds of educational imperatives have been attempted in the name of just that.  The fact remains that we simply don’t know, and are not able to know, what will drive the economic engine of our children’s future.  If we belong to that section of society that believes that the purpose of education is largely economic, then we are pretty much out of luck.  It simply can’t play that role.

In Ontario, where there is little formal attention in the curriculum given to job-specific skill sets, this is less of a problem than elsewhere.  But we can still get sucked into the “education for the future” meme in other ways.  We often talk about education like it is “for” something, in a kind of pragmatic way.  I can’t disagree; I think so too.  I just think that I don’t know what it’s for.  I’ve had ex-students come visit me ten or more years after I taught them.  They always share their memories of the classes they had with me, and it’s a rare moment when their memories match mine.  They’ll sometimes tell me that something I said in class changed their lives – my response is often unspoken, but goes something like this:   I said that?  Huh.  I don’t remember that.  Sounds profound, though.  I’m glad it helped.  Many times the things they remember weren’t part of any official curriculum.  Just some off-the-cuff remark that stuck with them and meant something eventually.  Sometimes it isn’t even anything you say:  sometimes just the long-term effect of your character on a kid will turn things around for him.  I’m always surprised by what they say meant something to them.  It’s rarely something content-related.  That’s where a little humility goes a long way:  I don’t know what is meaningful to them, or what will become so in the future.  I don’t know what part of my experience and worldview will resonate with them.  At the time, it sometimes seems like none of it is making any impact, but they tell me different, years later.  So I teach what I think is interesting, and hope for the best.

Sometimes we answer the question of what education is for in too limited a manner.  Aristotle approaches the question like this:  Why do we do anything?  Can we follow the trail of motivation back to a source – something we do for its own sake, and not as a step to something else?  We’re goal-oriented in the West; without goals, it seems, we’re often lost.  We go to school, we think, because we want to get into university or college.  Why?  So that we can earn a certificate or degree.  Why?  So that we can use it to get a job.  Why?  To earn money.  Why?  To buy things with.  Why?  (And here’s where the trail usually ends in a capitalist society.)  Because we think they will make us happy.  But why do we want to be happy?  For no reason.  Happiness is its own end.  We think, though, in the goal-oriented rat race of the West, that happiness is an ‘end’ in a kind of final sense:  we think that retirement is the time of life when all this will eventually pay off.  And so many of us end up waiting until we’re 65 to be happy.  In fact, by that point, many of us are so used to setting goals and postponing happiness that we don’t know what to do with ourselves after we leave our professions.  That’s obviously no way to live your life either.

So the future doesn’t seem to be the way to go when we think about education.  In the next post, I’ll go into why the alternatives, i.e., living in the “golden age” past of education, or else turning education into nothing more than a reinforcement of existing biases, aren’t viable options either.
