"....is founded on neuroscience research and over 30 years of experience demonstrating that it is possible for students to strengthen the weak cognitive capacities underlying their learning dysfunctions through a program of specific cognitive exercises"?
What parent or teacher would not want children to be able to strengthen "weak cognitive capacities", particularly if this occurs via a program of "specific cognitive exercises"? This sounds a bit like the process undertaken in pathology labs, where microbes in a particular bacterial infection are isolated under a microscope, so that antibiotic sensitivities can be ascertained, and targeted drug therapy can be prescribed.
If only that was how learning worked!
The Arrowsmith program makes liberal use of the kinds of words that are designed to hook parents and teachers and get them believing that this is a rigorous, scientifically-based intervention that offers something unique, over and above what can be achieved in a well-organised, evidence-based classroom curriculum.
Have a look at the Arrowsmith website, and put a dollar on the table every time one of the following terms appears:
- cognitive
- neuro
- brain-based
- neuronal
- synaptic
- neural sciences
- cognitive-curricular research
- brain imaging
- targeted cognitive exercises
- neuroplasticity
You'll certainly be a lot poorer at the end of this exercise! But perhaps you're not yet convinced of the need to activate your inner sceptic? Well, have a listen to the video-clip interview with Dr Lara Boyd, discussing neuroimaging studies of children undergoing the Arrowsmith program and of controls (see homepage link above). Dr Boyd describes what sounds like a rigorous scientific study into the "changed brains" of children who have undergone Arrowsmith training, compared to those who have not (the control group).
What's the problem here?
The problem is that the good folk behind the Arrowsmith program have reduced an incredibly complex (and, to a large extent, poorly understood) phenomenon (human cognition and its representation in the cerebral cortex) to a highly over-simplified narrative that mums and dads can "understand". It's a narrative that has strong face appeal, and it encourages those without specialist knowledge or training to make a link between supposedly "known" activity in the brain and certain learning exercises.

If you were the parent of a child with learning difficulties, what would best motivate you to enrol your child in an expensive 3-4 year intervention (yes, that's right, 3-4 years)? Would it be the expectation that s/he would be performing at expected levels across the curriculum, or the reassurance (irrespective of academic outcomes) that s/he now has a thicker cerebral cortex and/or more myelinated neural pathways? I know what I'd be wanting as a parent.
To the best of my knowledge, the "evidence" in support of Arrowsmith comprises in-house research reports, conference poster presentations, and satisfied-client testimonials. Studies "in process" (as at 2014) include Effects of the Arrowsmith Program on Academic Performance: A Pilot Study - University of Calgary. One has to wonder why a program that has existed for some 30 years is only now at the point of collecting pilot data, yet has been charging parents thousands of dollars over three decades in the absence of robust, independent empirical data.
It is also notable that the first-listed publication at the link above is described as a "case study". Case studies are useful tools in elucidating the impact of a condition and also the response of a small number of individuals ("cases") to an intervention. Case studies are often an attractive marketing tool, as they are typically devoid of pesky statistics and other "dense" concepts that reduce their accessibility to lay readers.
In this case, clicking on the case study link reveals a PhD dissertation, which looked at 5 students, and concluded:
"Four of the five students experienced large and significant increases in cognitive, academic, emotional, and/or interpersonal functioning following their participation in the LDAS Arrowsmith program. One of the five students had much smaller gains in cognitive and academic functioning and experienced difficulties with emotional and interpersonal functioning following participation in the program".
So, notwithstanding the very small sample size (not in itself inappropriate in case-study methodology), we have a scenario in which 20% of the sample not only failed to derive a benefit, but may have been adversely affected.
It should be stressed that case studies, though accepted in the scientific and academic communities as a form of evidence, are regarded as "weak" alongside other readily available, more robust methodologies that enable us to accept or reject an intervention with far greater confidence, e.g., cross-sectional studies, randomised controlled trials, and systematic reviews of well-controlled studies (see this link for a quick summary of the evidence hierarchy). Case studies are a useful starting point, and need to be followed by larger, more rigorous, and independent evaluations. Unfortunately, it does not appear that this has occurred in the case of the Arrowsmith program.
What needs to happen?
In order for academics in education, developmental (neuro)psychology, speech-language pathology, etc. to be able to give unbiased, accurate advice to parents, schools, and policy-makers, peer-reviewed research is needed that controls for the fact that children in programs such as Arrowsmith receive a great deal of intensive, 1:1 time. That in itself should result in improved knowledge and skills. Well-conducted studies that control for variables such as socio-economic status, comorbidities, and prior instructional environment are needed. It is important that gains made are tracked over time, to see whether they are maintained or are merely an initial halo effect. Bear in mind that when a child is significantly behind academically, interventions need to accelerate their progress relative to their typically-developing peers; otherwise they will never catch up, let alone maintain their gains in the face of an ever-more demanding academic curriculum. We also need studies in which those carrying out outcome assessments are "blind" to which study arm a child was in; not ensuring this opens the findings up to a range of overt and covert biases.

In the absence of such empirical data that people such as myself can draw on, it is disappointing and worrying that two Victorian education sectors have succumbed to (understandable but regrettable) consumer pressure to countenance the introduction of the Arrowsmith Program in a small number of schools (at this stage, in only one of the sectors as far as I am aware).
Consumers should expect providers to go to the market with already-tested, replicated, high-level evidence before they ask people to sign up to an expensive intervention. We don't expect cancer patients to organise their own randomised controlled trials of new treatments, so why should it be left to schools (which don't typically have the appropriate expertise on staff) to stumble around and try to work out whether an education intervention is an appropriate investment, above and beyond what they are already doing (or could be doing)?
So, how will it be determined whether the adoption of Arrowsmith mentioned above (and others that are popping up here and there) has been successful? What pre-determined criteria will be applied? Will there be any evaluations by independent and appropriately qualified researchers, or is this yet another education-intervention cul-de-sac?
Postscript, September 4, 2015: Interested readers should also check this blog post about the Arrowsmith Program by Professor Dorothy Bishop (University of Oxford).
(c) Pamela Snow 2015