Saturday, 24 January 2015

Literacy License


If you ever needed any convincing that Whole Language is alive, well, and thriving in Australian pre-service teacher education, then you need look no further. This week, an Australian literacy academic, Mr Ryan Spencer, left no doubt at all in readers' minds that this failed and harmful ideology continues to hold a firm grip in university lecture theatres. Oh sure, education academics will, in the main, assure us that this is not the case, but opinion pieces such as this one, boldly informing parents that they should "ditch the home readers" that their child's teacher has provided, tell otherwise.

Let me affirm some points of important common ground between Whole Language advocates and its detractors: learning to read is important; children need rich immersion in and exposure to spoken and written language in order to have the necessary linguistic "toolkit" to progress from the oral to the written form; and learning to read should be enjoyable. Note I used the word "enjoyable" here, not "fun". I don't know whether anyone has specifically researched this question, but is there any actual empirical evidence that children need to be having "fun" in order to learn to read? Or do they simply need to be engaged and achieving? I ask this because it's a claim that is often made quite uncritically in education circles and I am not aware of any research that operationalises "fun" over established cognitive constructs such as attention, concentration, learning, and memory. When did "fun" attain a higher educational standing than engagement and achievement?


To quote one commentator on The Conversation discussion forum about this article, "there is evidence and there is evidence". This pretty much sums up the problem, and resonates with frustrations expressed by Dr Jennifer Buckingham in a recent opinion piece in The Australian newspaper:

There has been a bewildering reluctance in education faculties to acknowledge the scientific basis for effective teaching methods. Education academics often view the idea of explicitly teaching children how to read the written word (rather than reading children lots of books and hoping for the best) as somewhere between an anachronism and a “neoliberal”, anti-public education plot.

So I find Mr Spencer's advice, delivered with the imprimatur of the title "Clinical Teaching Specialist" at a university, worrying at many levels:
  1. It dismisses, in one sweep, the value of "levelled readers" that allow children to decode text at, or slightly below/above their current level of ability;
  2. It undermines attempts by beleaguered primary school teachers to actually apply evidence in the classroom about the importance of early decoding/phonics skills. Can we possibly make things any harder for teachers with respect to the conflicting information they receive?
  3. It assumes that children find these texts "boring", with no evidence presented to this effect. What about the child who doesn't find such a text boring and enjoys the sense of growing mastery over an alphabetic code that previously seemed opaque, but is now emerging, like a ghost out of the fog, to give the beginning reader an algorithm with which to tackle new, unfamiliar words? Guessing only takes you so far, and as any struggling Grade 4 reader will tell you, it's a poor substitute for decoding skills once the pictures have gone.
  4. Statements such as "...when children are provided with the opportunity to select their own reading material, they achieve greater levels of success" are so sweeping and so general that they do not belong in academic writing at all. The citation provided to support this generalisation is in fact an opinion-based narrative review, not an empirical study, nor even a systematic review.

Not all "levelled readers" are the same, and of course they can reflect different learning-to-read philosophies. As long as children are being exposed to evidence-based early reading instruction in the classroom (and this is not a given), I doubt that it matters a great deal what kinds of books parents and children enjoy together for pleasure, if circumstances are such that this opportunity can be created.

As Melbourne Speech Pathologist Alison Clarke has commented, however, time spent actually practising reading on the part of the child is different:

(Mr Spencer) conflates books-to-read-to-your-child, which should be chosen on the basis of interest, and books-for-your-child-to-read, which can and should be carefully selected on the basis of the complexity of their text, because we have a very complicated and opaque spelling system in English, and it is too hard for little children to learn all at once. 

Parents are natural teachers, and those who are fortunate to be able to read (something that we should not assume is universal) will in many cases default to sounding out unfamiliar words in order to assist their young learner - much closer to reading science than encouraging guessing*. Parents are to be encouraged in this natural teaching, but not at the expense of shared book time affording relaxed and warm interactions.

So my advice to parents is not to ditch home readers, but to think about their place as a simple practice-aid, to discuss their selection with the child's teacher, and to make them just a part of the evening reading ritual, not the "whole story". 

*I wonder how many parents naturally default to a three-cueing system?

(C) Pamela Snow 2015

Monday, 12 January 2015

The tricky dilemma facing Education Deans and getting rockets off the launching-pad




Media reports last week of the findings of the New South Wales Board of Studies Teaching and Educational Standards (BOSTES) audit of university teacher education curricula pertaining to the teaching of reading have again stirred the possum on the thorny issue of what Australian teachers do and don't learn in their pre-service education about optimal ways of teaching children to read. 

To say that this is an exasperating “debate” is to seriously understate the frustration of academics, policy-makers, clinicians, teachers, parents, and the media. Imagine a scenario in which the medical profession in the 1940s had been unable to agree on the role of penicillin in treating infection. Instead of decades of improved health care and subsequent cumulative scientific advances, we would be stuck in circular arguments about the merits of poultices and other folkloric treatments, the outcome of which was too frequently death, even in young, otherwise healthy patients. Medicine, of course, has to deal with the inconvenient obviousness of its failures. As I have stated before on this blog, a significant difference between education and medicine is the fact that education policy-makers and practitioners are rather “quarantined” from the impact of their failures, and can even re-ascribe these to characteristics of the learner (e.g. with respect to capacity, motivation, effort, family background, etc.). An exception to this is the publication of PIRLS data, which then needs some explaining.

My question to Education Deans is this – if the findings of the NSW BOSTES audit are incorrect and/or are based on a flawed methodology, then how do we explain (a) teachers’ poor knowledge of the English language code (see, for example, Fielding-Barnsley, 2010 and Moats, 2014), and (b) the marked under-achievement of Australian children on international measures of reading progress such as PIRLS? There’s a disconnect somewhere.

This question, it seems to me, creates the kind of bind for which no amount of media training can adequately assist with the formulation of an obfuscating response. Or so I thought, until I read this piece by Stewart Riddle (University of Southern Queensland), in which we are now assured that teachers’ inability to spell does not interfere with their ability to teach reading and spelling. That’s akin to saying that the inability to use a protractor shouldn’t interfere with an architect’s ability to design a house. Knowing that Dr Riddle, as a teacher, used a dictionary on the occasions when he didn’t know how to spell a word is no reassurance either – what about all those times he thought he knew how to spell a word, and neither he, nor his students were any the wiser in the face of his errors? That’s like a doctor saying that there’s no need to be concerned about incorrect drug doses, because when s/he thinks the dose might be wrong, s/he checks it. On the other occasions of course, the outcome might be fatal for the poor unsuspecting patient.

Until a couple of years ago, I was course co-ordinator on a postgraduate diploma for practising teachers. The teachers who enrolled in this program were highly motivated and committed to improving the everyday lives of at-risk and troubled students. I had a great deal of respect for some of the challenging scenarios their work threw up to them. However, when it was time to assess their written work, I had to suck air in through my teeth and brace myself for constant frustration and disappointment. On average, I would say about 15% of these teachers had written skills (spelling and grammar) of a standard the community would expect of tertiary-qualified professionals. Their work stood out and was a joy to read, as I could engage with their ideas, without being distracted by sometimes less than junior secondary standard writing. About 60% had mid-range skills, characterised by homophone-based spelling errors (e.g. their/there; bear/bare; compliment/complement, etc) and basic grammatical errors such as poor subject-verb agreement, poor use of commas, and next to no understanding of when to use/not use apostrophes. This work was below the standard expected of university graduates and interfered with the transmission of ideas. The remaining one quarter or so had very poor written skills, such that the reader was pre-occupied with anticipating the next error or omission and their ideas were lost. I remember writing on one such assignment “Please make sure you proof-read and spell-check your work carefully before you submit it”, to which I received an email reply, as follows: “Sorry about the sloppy writing. My husband was away on the weekend and he usually does my proof-reading and editing”. I wonder how that same teacher would have dealt with such an excuse from one of her students.

A number of commentators (Dr Riddle included) have referred in recent times to Louisa Moats’ oft-quoted line that “Teaching reading* IS rocket science” (*in fact Dr Riddle refers in his recent piece on The Conversation to “literacy”, which of course is not the same as reading – a misapprehension that might be part of the problem). I agree fully that there is a complex science to the application of evidence in early-years classrooms; however, it is a science to which student teachers are receiving only patchy and partial exposure, and it is children who bear the life-long cost of this.

Remember too, that when launching a rocket, being inches out on the launching pad means you’ll be miles out in space.




 (C) Pamela Snow 2015
 

Thursday, 18 December 2014

Sorting the wheat from the (research) chaff. A rough guide.

A few teachers have commented to me this year that they would have no idea how to determine whether a new piece of research is “any good” or represents a change of practice that they should adopt. While it’s probable that many health science practitioners would also say they struggle with the task of critically appraising new research, I think this is a particular challenge for teachers who, historically, have not been taught about research methods and data analysis in their pre-service training. This leaves teachers vulnerable to “the next new thing” that policy-makers decide to introduce, and makes it hard for them to argue their corner with any confidence.

No blog post can adequately stand in for two, three, four or more years of research methods training, but I thought it might be helpful here to sign-post a few key points to de-mystify some of the research landscape for teachers.

Here are my top 10 questions to keep in mind when reading about new research:

1.      Where is the study published?
a.     The optimum answer to this question is “In a peer reviewed journal”. By “peer review” we mean that the researchers sent their manuscript to an academic journal editor, the editor considered its general suitability for the journal, and then nominated a couple of academic “peers” to conduct a detailed review. This process is often conducted on a double-blind basis – i.e. the reviewer does not know the author’s identity and vice versa. However, some journals use an open-review process. The distinction is not important for our purposes here. What matters is the level of scrutiny the paper receives, in terms of the theoretical logic behind its rationale, its method, data collection, analysis and interpretation.

 As any academic will attest, this can be a bruising process and we often need to don a metaphorical rhino hide before opening the email with a subject line “MS 2014XYZ Decision” or similar. Reviewers rarely spend a lot of time on the study’s strengths, highlighting instead its flaws and limitations. This is not a game for the faint-hearted.

The upside though is that most published papers have undergone considerable revision by the time they go to print, and the researchers may have had to patiently and painstakingly address a myriad of queries and challenges to their argument.

b.     But not all journals are created equal. Academics are in the know about esteem hierarchies and metrics such as impact factors. Universities are in the know about these as well, and bring considerable pressure to bear on academics to publish in high-impact journals. One problem with this is that such journals may only be read by other academics and never by practitioners “on the ground” – so while it’s gratifying to have your research cited by other researchers, it may not be translating into meaningful, real-life change.

c.       Media reports, blogs and websites often provide accessible, easy to read summaries of research, but should not be the primary source of research. If they are the primary source, you should remember that the rigorous peer-review process outlined above has almost certainly not taken place.

2.      Who are the authors?
a.     Is there a well-qualified academic on the research team? Particular knowledge of research methodology, data collection, analysis and interpretation is needed in order to conduct rigorous research. Look for evidence that this exists in the research team (e.g. a university-affiliated team leader).

b.     Are there potential or actual conflicts of interest for any of the team? Examples of this might be someone employed by a particular publishing house being part of a team that is evaluating an intervention in which that publishing house has a commercial stake. You should also look at the funding source(s) and ask yourself whether there might be vested interests in the data telling a particular story.

c.      What else has this team published? What do we know about their ideological stance / bias? (everyone has one!)

3.      What was the context of the study?
a.    What country was the study conducted in? You might read a fabulous report of a rigorous piece of research conducted in Uzbekistan, and be quite confident that it is tight and well-controlled. But if there are significant differences between the Uzbek educational context and your own, you might want to think carefully before adopting any recommended changes.

b.    What is the policy framework in which the study was conducted? Are there particular teaching approaches that are explicitly or implicitly associated with this setting?

c.     What are the demographic characteristics of the sample? Here we need to think about socio-economic status (SES) factors, ethnicity, culture, religious influences, age and gender characteristics and any other wider influences on the context that might be relevant. The authors might tell you that the study was conducted in “10 schools with similar socio-economic characteristics”, but this doesn’t help you very much if you don’t know what those characteristics were – i.e. were the schools in a disadvantaged area, or were they middle or high-SES? This has important implications for the extent to which findings can be generalised beyond the study – no matter how rigorous the study itself may have been.

4.      How clearly was the research question stated?
a.    Some studies are highly specific with respect to their purpose and this can be easy to see even from the title. Unfortunately, though, some research studies are a bit like fishing expeditions – the researchers pack their gear and head out into the wild to see what they can find. While it is absolutely appropriate for qualitative studies to take a broader sweep around “exploring and understanding” a phenomenon, you should always have a clear sense of what the researchers are examining and why.

5.      How adequate was the sample and the description of the intervention?
a.     Here we’re interested in issues like the sample size included (e.g. the number of teachers, students, or schools). However, there is no simple, absolute answer to the question “How many is enough?” Sample size should be based on some kind of “power analysis” – a statistical consideration of the nature of the questions asked and the number of participants needed to test a hypothesis. If, for example, you wanted to know about differences in vocabulary size between four-year-olds and eight-year-olds, we would expect age to have a “big effect”, and we would need a relatively smaller sample than if we were studying the differences in vocabulary between four-year-olds and four-and-a-half-year-olds – here there will be more developmental blurring between the two groups, and so to find an age effect (assuming one actually exists), a larger sample would be needed.

b.    Is there any potential bias/distortion due to sampling processes? An obvious issue in schools-based research is the (usual) requirement for parent/guardian consent. However it may be that parents from non-English speaking backgrounds cannot adequately understand the Information Sheet and Consent Form, and so decide (quite reasonably!) to not complete them and return them to the school. This will then introduce a systematic bias into the sample, and means findings can really only be generalised to other groups of similar composition.

c.     In studies that involve any kind of pre-post comparison (e.g. collection of baseline data at Time 1, an intervention phase, and collection of follow-up data at Time 2), it’s important to think about retention of participants over time, and most importantly to look at the characteristics of participants who were lost to follow-up. Often, these are from minority groups or have some other defining characteristic (e.g., frequent suspensions due to behaviour problems) that might in itself influence the Time 2 scores.

d.    If it is an intervention study, how were participants allocated to study arms (research vs. control)? Ideally this should occur via a process of randomisation, so that potentially confounding variables (e.g. ethnicity, IQ) are equally distributed across study arms and so are “cancelled out” in the analysis. In some medical research, it is possible to conduct “double blind” trials, in which neither the participants nor the researcher interacting with them is aware of who is in which group. This is harder to do in schools, for obvious reasons, but in general, you should look for evidence that the researchers did not influence the allocation of individuals or schools to one study arm or the other.


e.   Also with intervention studies, ask yourself about the basis of the intervention. Does it have a theoretical rationale that draws on previous research, or is it just someone's idea about what might work? There's unfortunately been way too much of the latter in education. You should also ask whether the intervention was delivered as intended (so-called fidelity) and whether anything else might have happened during the intervention that could independently account for an apparent improvement in student performance.

6.      How suitable are the measures for the questions asked?
a.     If I told you that I was going to measure children’s IQs, and then proceeded to take out a tape measure and record their head circumferences, I think you would rightly howl me down for using an inappropriate measure of IQ. Fortunately, extremely poor choices such as this are not common, though it is common for researchers to select assessment tools that others consider to lack validity (accuracy) or reliability (consistency and trustworthiness). We also need to consider how current the measures are and whether they are widely known and well-regarded.

b.    Who conducted the assessments / measurements?  Just as we don’t want doctors interviewing their own patients about the acceptability of a new treatment, we don’t want teachers assessing their own students. Humans are prone to all sorts of conscious and unconscious bias, whether as the observer (see Rosenthal Effect) or as the observed (see Hawthorne Effect).

7.      How clearly are the results presented?
a.     Are all of the results presented, or just some of them?

b.    Often it’s necessary to have a good grasp of statistics to wade through this section of a paper, so don’t be put off if you don’t feel you bring the necessary background knowledge to the table. If necessary, consult with someone who is more confident with this territory, but persevere with other sections of the paper. Many academics would probably privately admit that they don't give this part of the paper the focus they should – which is a shame as it’s often the most difficult to write!

8.      When results are discussed, are a range of possibilities canvassed to account for the findings, or do the authors just stick with their original hypothesis?

a.     Unfortunately there is a well-known bias in what gets published, and findings that don’t sit well with researcher bias and/or the prevailing zeitgeist often just don’t see the light of day. Happily though, that is beginning to change, and academics and journal editors alike are a little more open to publishing findings that might be unexpected. Here we want to see a range of possibilities being canvassed, and the importance of future replication studies being noted. The language used should be appropriately cautious and circumspect, e.g., "These findings suggest....", or "Our results are consistent with the notion that .....".

9.      Are limitations acknowledged and addressed?
a.    All research has limitations, and most researchers are acutely aware of this when they submit a paper to a journal (if they weren't beforehand, the review process normally fixes that!). So you should expect that some limitations and their potential importance are considered (e.g.  small or biased sample, limited follow-up time).

10. Are implications for theory, practice, policy and/or further research stated?
a.      The purpose of research is to effect change – in at least one of theory, practice, and policy. So the authors should present some ideas about the implications of their work (without over-reaching of course) and should make constructive suggestions as to how other researchers can advance the field even further.
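The power-analysis logic in point 5a can be sketched with a little arithmetic. Below is a minimal, illustrative calculation using the standard normal-approximation formula for comparing two group means; the effect sizes and thresholds are conventional textbook values, not figures from any particular study.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per group for a two-group
    comparison of means: n = 2 * ((z_alpha/2 + z_beta) / d) ** 2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # desired statistical power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A large expected effect (e.g. vocabulary of 4- vs 8-year-olds)
# needs far fewer participants than a small one (4 vs 4.5 years).
print(sample_size_per_group(0.8))  # large effect: 25 per group
print(sample_size_per_group(0.2))  # small effect: 393 per group
```

Because the required sample grows with the inverse square of the effect size, halving the expected effect roughly quadruples the number of participants needed – which is exactly why detecting the subtle difference between four- and four-and-a-half-year-olds demands a much larger study than comparing four- and eight-year-olds.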


This is by no means an exhaustive guide to critical appraisal, nor is it intended to be one. It is, however, intended to guide the novice and instil some confidence that even without detailed statistical knowledge, you can still be an astute consumer of new research.

Remember too, that we rarely change tack on the basis of one study. Instead we rely on consistent trends in well-conducted research, to guide policy and practice – so look out for systematic reviews or meta-analyses, both of which pool findings on a particular question and synthesise the current state of the evidence.

I would recommend that all teachers bookmark the Macquarie University MUSEC Briefings page, as this open-access site provides reliable, independent assessments of a range of approaches that may or may not be well-supported by research evidence.


As in all things though, it is wise to remember that when something seems too good to be true ..... it probably is. 

"Trust me, I'm a researcher" is never enough.



(c) Pamela Snow 2014

Thursday, 11 December 2014

Dear Santa

It's been another busy year! Have I been "good"? Well, that's always open to interpretation, as this T-shirt reminds us -




However I can tell you that I've enjoyed being more active on Twitter this year than I was in the three years after I first created a Twitter handle, when I had absolutely no idea why, or how, I was meant to use it. Somehow I think I just assumed that creating a social media presence would mean that I was...... well..... "present", somewhere....... socially. However it wasn't like that, Santa. In the early days, I didn't Tweet, no-one followed me, and I made very limited inroads into working out who / which organisations I should be following.

Oh sure, Twitter was very helpful, by sending me intermittent emails, telling me that so-and-so "had tweets for me", and suggesting Twitter handles that I should follow, using clever algorithms derived from my ghost-like activity on the platform.

But you see, Santa, I just. didn't. get it. Twitter that is - why would I tweet? (Who was listening?). What would I tweet about? (I am really only interested in the inner-most thoughts and daily routines of my nearest and dearest and had seen some truly mind-boggling over-sharing via Facebook). And who would want to interact with me via this medium? (I know who I know, but I don't know who I don't know). Total confusion and bewilderment.

It turns out Santa, that what I needed was some Direct Instruction. I know, I know, that's not popular anymore, and you of all people know how fashions and fads come and go. However I was fortunate to find Professor Dorothy Bishop's helpful blogpost A gentle introduction to Twitter for the apprehensive academic which I sat down and digested in full, and then I understood enough of the why, what and how of Twitter to really get into it. 

A year on, Santa, I find myself feeling much more connected with an international community that shares my interests - though not always of course my perspectives, which is just fine, and is often quite informative in its own way. 


I don't mind people disagreeing with me, but Santa I do mind when people reject good science in favour of propping up an ideological position that entrenches disadvantage for children who are not even at the developmental starting line when they commence school. 


So Santa - I'm wondering if you could pop a copy of the latest edition of the Australian Journal of Learning Disabilities into the sacks of academics all around the world (but especially in Australia) who teach our next generation of primary school teachers? The paper entitled What teachers don't know and why they aren't learning it: addressing the need for content and pedagogy in teacher education by Louisa Moats is this year's must-have for all teacher educators.

Santa I think this could be a wonderful gift to the children of the world, and who knows, your job might be made easier too, because you'll receive such well-composed letters in future years! Though some of them are already pretty cute, as you'd have to agree.


Tuesday, 9 December 2014

The Speech Language Pathologist-Teacher Tango


Image source: http://www.freedigitalphotos.net/ 

Because of my interest in the language-literacy nexus and school success, I was very interested to read the following in a recent post on the Speech Language Literacy Lab blog: 

Service delivery models have changed substantially over the last few years. While small pull-out groups are appropriate in many situations, keeping students in their classrooms is more of a priority than it has been in the past. As SLPs, we are often told by administrators that we should be "pushing-in" to general education, and providing services in the classroom. 

This post called to mind some thought-provoking conversations I had with various UK colleagues during my recent visit there. Professor Courtenay Norbury (Royal Holloway, London) was lamenting the fact that Speech Language Therapists in the UK are increasingly providing a secondary consultation service to teachers and classrooms, spending very little time working 1:1 with those children whose poor language skills pose serious and imminent threats to school success, both academically and socially. Courtenay and I discussed the fact that classrooms are educational, not therapeutic environments, and teachers (or indeed teaching assistants) cannot be expected to provide the kinds of specialised, individually-tailored intervention that children with significant language difficulties need, in order for them to engage with the curriculum and form social connections with peers. If SLTs (SLPs in Australia and the USA) are operating more at Tier 1 in a Response to Intervention (RTI) framework, with occasional Tier 2 work and virtually no Tier 3 interventions, this has serious implications for (a) the extent to which such children have their educational trajectories altered, and (b) the development and maintenance of professional SLP skills in ameliorating complex expressive and receptive language disorders in young children. Courtenay observed that a consultation-only model is a sure-fire route to redundancy of the specialist knowledge and skills that sit within the SLP profession.

Some commentators have referred to SLPs providing their 1:1 services to children in "homeopathic doses" -  a charge that most clinicians would recoil from, yet it is hard to argue the other corner if a sufficient intervention frequency and intensity cannot be demonstrated. It will also be difficult to establish the efficacy of SLP interventions if they are conducted in ways that promote classic Type II errors, if not statistically, then certainly in the minds of administrators and policy makers. Remember the dinosaurs?

The other conversation that stands out for me in relation to SLPs and teachers working together on the issue of early literacy instruction is the one I had with Professor Bill Wells at the University of Sheffield. When I commented during a presentation that teachers have, in recent years, received uneven pre-service preparation on the linguistic basis of the transition to literacy, Bill rightly asked me whether I thought SLPs learn enough about how reading should be taught during their pre-service education. While I can't call on any empirical data to answer this question, my guess is that an audit of SLP curricula would reveal similar gaps and unevenness to that which has been reported in teaching curricula concerning linguistic precursors to reading.

So - if we are wanting SLPs and teachers to meet in the middle, then it really will take two to tango.


Faculties of Education need to ramp up their curricula with respect to linguistic precursors to literacy (vocabulary, phonemic awareness, narratives, syntactic complexity, and so on), and Speech Language Pathology curricula are going to need to cover historical, epistemological, and pedagogical approaches to reading instruction. If this isn't done, too much time will be lost in trying to deal with turf issues and find a common language between professions. If it is done, however, the children who really need these services might have better chances of receiving the Tier 2 and 3 supports that genuinely impact on their language, academic, and social struggles.

I am absolutely all for SLPs and teachers working collaboratively at Tier 1, sharing knowledge, both of theory and of individual children. At Tiers 2 and 3, however, SLPs need to be able to offer targeted services to children whose language needs will never be met in the context of the mainstream classroom. Advocacy for this needs to come both from the education sector and from SLP/SLT peak bodies. A failure to advocate will "dumb down" the skill-base of the SLP profession and will entrench developmental disadvantage for those children whose language skills are not adequate to meet the rapidly changing academic and social demands of the classroom.




(C) Pamela Snow 2014