Been wondering: is there a reason it's apparently so rare to present power calculations for a meta-analysis? E.g. 'with the included studies there is a probability X that we would detect a true effect of size Y or larger'.

There are lots of examples of COVID papers that collate a bunch of underpowered studies, then conclude 'no significant effect' (or worse, 'no effect') after a meta-analysis, despite a point estimate suggesting the intervention may be useful at scale…
It seems odd (to me at least) to care about a priori effect size and power before a study, but not at all before a systematic review/meta-analysis of studies.

There's literature out there on this topic, e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5590730/, but it doesn't seem to be part of routine reporting.
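To make the idea concrete, here's a minimal sketch of the kind of calculation I mean, roughly following the normal-approximation approach in the power-analysis literature linked above. It assumes a fixed-effect model and standardized mean differences; the function name, the example effect size (d = 0.3), and the study sizes are my own illustrative choices, not from any particular paper.

```python
from statistics import NormalDist

def meta_power(delta, group_sizes, alpha=0.05):
    """Approximate power of a fixed-effect meta-analysis to detect a true
    standardized mean difference `delta`, given per-study (n1, n2) arm sizes.
    Uses the usual large-sample normal approximation (two-sided test)."""
    # Per-study sampling variance of the standardized mean difference
    variances = [(n1 + n2) / (n1 * n2) + delta**2 / (2 * (n1 + n2))
                 for n1, n2 in group_sizes]
    # Fixed-effect pooled variance: inverse of the summed precisions
    pooled_var = 1 / sum(1 / v for v in variances)
    z = NormalDist()
    c_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    lam = delta / pooled_var**0.5        # noncentrality parameter
    return 1 - z.cdf(c_alpha - lam) + z.cdf(-c_alpha - lam)

# Five small trials of ~20 per arm: each one individually underpowered,
# but what's the pooled power to detect d = 0.3?
print(round(meta_power(0.3, [(20, 20)] * 5), 2))
```

Each individual trial here has well under 50% power, and even the pooled analysis is short of the conventional 80%, so 'no significant effect' in such a meta-analysis tells you very little.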
 @0fabd563 Occasionally the purpose of a meta analysis is to hide any signal amongst the noise.