di: Hattie's comparison of effect sizes (Bonnie Grossen)

Bonnie Grossen bgrossen at uoregon.edu
Tue Oct 29 09:58:22 PDT 2019


You’re right, Chris, many of those influences are in DI. What got me going was that my Australian colleagues were arguing about metacognitive strategies: they rank as high as DI, so it was said that the DI folks should be teaching them.

However, the research on metacognition is mostly correlational, based on interviews of good and poor readers. Good readers did some things that people labeled metacognition more than poor readers did. Yet when you try to teach it, comparison studies show no effect on comprehension from teaching metacognition. In fact, in one study the control group scored higher. Only one study I ran across found a positive effect, and it compared reading instruction using metacognitive strategies to no instruction at all, hardly a fair comparison. And when you do want to teach it, what is it really?  This is what one metacognition advocate said: “There is no one way to teach students Metacognition. Every teacher will have to find their own style and one that serves the needs of each class individually.” (Sorry, I don’t have time to look up where that quote came from.)
You can’t identify something good readers do that poor readers don’t do, and say that if you teach that something you’ll have a good reader. You can’t take an effect size of the difference between those good readers and poor readers and compare it to the difference in the results of two treatments. 
  
Actually, there is a book being used in Australia called “Teaching Comprehension Strategies: A Metacognitive Approach,” which is not bad. There is nothing metacognitive about it, though. The strategies are: understanding words using context, finding information by using a key word, identifying the main idea using the title, sequencing, finding information by searching for a key word from the question, and predicting based on a pattern of behavior in a character. All these things are taught in DI. They are very concrete strategies, not metacognitive at all. Putting the word in the title of the book, though, sure made it sexy.




From: Christopher Duss
Sent: Tuesday, October 29, 2019 9:33 AM
To: di at lists.uoregon.edu
Subject: Re: di: Hattie's comparison of effect sizes (Bonnie Grossen)

My first thought on reading Hattie’s 2018 list was: is the result really that bad? Direct Instruction (and was this for DI or lower-case direct instruction?) ended up around 40 out of 250 - that sounds decent to me. And a lot of the higher-ranking influences are core components of DI: phonics instruction, scaffolding, comprehensive instructional programs for teachers, etc. Many of the influences aren’t standalone approaches - for example, the highest-ranking one, “collective teacher efficacy,” which from what I can see is a term invented by Hattie. By the definition, “collective belief of teachers in their ability to positively affect students,” this could easily apply to DI programs. What I take from this research is that, of the standalone teaching methods (student-centered, whole language, project-based, DI, etc.), DI scores one of the highest or the highest. I’m not sure whether full programs have been developed around “Piagetian programs,” the “Jigsaw method,” etc. Combine that with some of the other top influences that are core parts of DI and you have a powerful teaching method even by this data.

I prefer to see this information as: DI is very good, but can we incorporate other influences to make it better? How about an added emphasis on collective teacher efficacy in teacher instructional materials and training? How about video lesson reviews (#13)? I’m working on something along those lines for the latter that I’ll explain in a separate message.

Chris

> On Oct 23, 2019, at 02:10, di-request at lists.uoregon.edu wrote:
> 
> Date: Mon, 21 Oct 2019 16:54:53 -0700
> From: Bonnie Grossen <bgrossen at uoregon.edu>
> To: "shiraz1 at iprimus.com.au" <shiraz1 at iprimus.com.au>,
> "DI at lists.uoregon.edu" <DI at lists.uoregon.edu>
> Subject: di: Hattie's comparison of effect sizes
> 
> 
> Kerry,
> I'm late on this, but my attention just came to Hattie's Visible Learning, where he compares 252 "influences" in education and ranks them by Effect Size. Here's the link:
> https://visible-learning.org/hattie-ranking-influences-effect-sizes-learning-achievement/
> 
> My DI friends in Australia are chagrined that DI came out relatively low in the stack. I did a little searching and researching and came up with this analysis. I would like to know what you think of it, and if you have anything to add, or see any problems with my thinking.
> 
> I was able to read the first two chapters of Hattie's book, "Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement" (2008 or 2011 or 2017, not sure). He makes it clear that he is not using Effect Size to mean the size of the difference between a treatment and a control group. In fact, very few comparison studies are included in his analyses: "The wars as to what counts as evidence for causation are raging as never before. Some have argued that the only legitimate support for causal claims can come from randomized control trials (RCTs: trials in which subjects are allocated to an experimental or a control group according to a strictly random procedure). There are few such studies among the many outlined in this book."
> I would agree that causal conclusions, e.g., DI causes higher achievement, can only be made from studies comparing the effects of two treatments: the one being studied and a reasonable alternative instructional model. 
> Hattie describes three types of Effect Sizes that he has calculated in this massive meta-analysis. He then compares the ESs without regard for type, which I find very misleading rather than informative. Here are the three:
> The first two:
> "Statistically, an effect size can be calculated in two major ways:
> Effect size = [Mean treatment - Mean control] / SD
> Or
> Effect size = [Mean end of treatment - Mean beginning of treatment] / SD"
> So he uses a traditional ES calculation, comparing the difference between a treatment and a control group, AND he calculates effect size as the difference between pre- and posttest scores, with no comparison group at all.
> An effect size for growth without a comparison group is an entirely different metric. Effect sizes for the difference between comparison groups are likely to be much smaller than an ES calculated on growth from pretest to posttest: both groups could grow enormously and still differ by only a little. In addition, did he control for time? Certainly, instruction over a year is going to show more growth than a two-week intervention. (Many DI studies are only two weeks long because that is all it took to get a significant difference.) Using an ES for growth to claim that something contributed to that growth is completely unacceptable in the scientific method. Growth happens with time. You can't measure the additive value a teaching strategy might have with just a pre- and posttest. It's basic logic.
> And you certainly can't mix Effect Sizes for group comparisons with Effect Sizes for simple growth over time (from pre to post) if you want to show the relative power of one teaching model over another.
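> To make the point concrete, here is a toy sketch in Python with made-up numbers (not from any of the studies discussed): when both groups grow a lot over a year, the pre/post "growth" ES looks enormous while the treatment-versus-control ES at posttest is modest, so the two numbers simply are not on the same scale.
> 
> # Toy illustration with invented numbers; sd is the pooled standard deviation on the test.
> sd = 10.0
> treatment_pre, treatment_post = 50.0, 70.0   # treatment group means
> control_pre, control_post = 50.0, 65.0       # control group means
> 
> growth_es_treatment = (treatment_post - treatment_pre) / sd   # 2.0 -- looks huge
> growth_es_control = (control_post - control_pre) / sd         # 1.5 -- also huge, with no treatment
> comparison_es = (treatment_post - control_post) / sd          # 0.5 -- the treatment-vs-control difference
> print(growth_es_treatment, growth_es_control, comparison_es)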
> Here's the real digression from scientific thinking: Hattie includes a third way to calculate an ES, from correlation studies.
> 
> I don't see the math involved in calculating the ES from a correlation. The example of the different heights of women and men is comparable to the common design of studies of metacognition: the better readers reported using metacognitive strategies more than the poor readers did. This is ridiculous.
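> For what it's worth, a standard way to convert a correlation r into a Cohen's-d-style effect size (I can't confirm from the chapters I could read that this is exactly what Hattie does) is d = 2r / sqrt(1 - r^2). A quick Python sketch shows how a modest correlation turns into an impressive-looking "effect size":
> 
> import math
> 
> def r_to_d(r):
>     # Standard r-to-d conversion: d = 2r / sqrt(1 - r^2)
>     return 2 * r / math.sqrt(1 - r ** 2)
> 
> # e.g. a 0.4 correlation between self-reported strategy use and reading scores
> print(round(r_to_d(0.4), 2))   # 0.87
> 
> A d of 0.87 derived from a correlation of 0.4 then sits next to genuine treatment-versus-control effect sizes in the same ranking, which is exactly the mixing of metrics being objected to here.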
> Even though Hattie agrees that causal claims cannot be made from correlation studies, he still includes mostly correlational studies in his meta-analysis:
> "Throughout this book, many correlates will be presented, as most meta-analyses seek such correlates of enhanced student achievement. A major aim is to weave a story from these data that has some convincing power and some coherence, although there is no claim to make these 'beyond reasonable doubt'."
> If Hattie wants to use three different types of Effect Sizes, he at least has to put them into three different categories, three different lists. It is completely inappropriate and very unscientific to compare all these ESs with each other, calling them all "Cohen's d".
> Hattie received a lot of recognition for his earlier analyses of Effect Size, comparing the results of many comparison studies. I suspect he came under pressure from folks like Allington, Marzano, Goodman, those guys that are highly invested in educational nonsense. I know that when I managed to write something that people read and respected, I got a lot of hate mail. Hattie has to be a big boy and not succumb to their constant antagonism. He has to do what's right.
> So tell me what you think, Kerry?  I only got the first two chapters of his book online. Chapter 3 might provide something important that I am missing. But I don't see three buckets of ES analysis to match his three definitions of Effect Size.
> Did I miss something? If you agree with me, I'm going to write to him and tell him what I think. I need confirmation because I can't believe he could be so stupid. Up until now, I thought he was such a clear thinker and admired him.
> Thanks for all the informative postings you do for us on this website.
> Warm regards,
> Bonnie
