A Robust Effect Size Index
Simon Vandekar, Ran Tao, and Jeffrey Blume, Vanderbilt University
Effect size indices are useful tools in study design and reporting because they are unitless measures of association strength that do not depend on sample size. Existing effect size indices are developed for particular parametric models or population parameters. Here, we propose a robust effect size index based on M-estimators. This approach yields an index that is very generalizable because it is unitless across a wide range of models. We demonstrate that the new index is a function of Cohen's d, R², and the standardized log odds ratio when each of the parametric models is correctly specified. We show that existing effect size estimators are biased when the parametric models are incorrect (e.g., under unknown heteroskedasticity). We provide simple formulas to compute power and sample size and use simulations to assess the bias and standard error of the effect size estimator in finite samples. Because the new index is invariant across models, it has the potential to make communication and comprehension of effect size uniform across the behavioral sciences.

Key words: M-estimator, Cohen's d, R square, standardized log odds, semiparametric.
1. Introduction

Effect sizes are unitless indices quantifying the association strength between dependent and independent variables. These indices are critical in study design, when estimates of power are desired but the exact scale of a new measurement is unknown (Cohen, 1988), and in meta-analysis, where results are compiled across studies with measurements taken on different scales or outcomes modeled differently (Chinn, 2000; Morris and DeShon, 2002). With increasing skepticism of significance testing approaches (Trafimow and Earp, 2017; Wasserstein and Lazar, 2016; Harshman et al., 2016; Wasserstein et al., 2019), effect size indices are valuable in study reporting (Fritz et al., 2012) because they are minimally affected by sample size. Effect sizes are also important in large open-source datasets because inference procedures are not designed to estimate error rates when a single dataset is used to address many different questions across tens to hundreds of studies. While effect sizes can have bias similar to that of p-values when choosing among multiple hypotheses, obtaining effect size estimates for parameters specified a priori may be more useful for guiding future studies than hypothesis testing because, in large datasets, p-values can be small for clinically meaningless effect sizes.

There is extensive literature in the behavioral and psychological sciences describing effect size indices and conversion formulas between different indices (see, e.g., Cohen, 1988; Borenstein et al., 2009; Hedges and Olkin, 1985; Ferguson, 2009; Rosenthal, 1994; Long and Freese, 2006). Cohen (1988) defined at least eight effect size indices for different models and different types of dependent and independent variables, and provided formulas to convert between the indices. For example, Cohen's d is defined for mean d
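As a concrete illustration of the kind of index discussed above, the following is a minimal sketch (not taken from the paper) of Cohen's d for two independent samples, computed as the difference in sample means divided by the pooled standard deviation. The function name `cohens_d` is our own; the formula is the standard two-sample definition.

```python
import math
from statistics import mean, variance


def cohens_d(x, y):
    """Two-sample Cohen's d: (mean(x) - mean(y)) / pooled standard deviation.

    Uses the pooled sample variance with n1 + n2 - 2 degrees of freedom.
    """
    n1, n2 = len(x), len(y)
    # variance() from the statistics module is the unbiased sample variance
    pooled_var = ((n1 - 1) * variance(x) + (n2 - 1) * variance(y)) / (n1 + n2 - 2)
    return (mean(x) - mean(y)) / math.sqrt(pooled_var)


# Example: two samples with equal spread but shifted means.
d = cohens_d([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])
print(round(d, 3))  # a shift of -2 in units of the pooled SD (~1.581) gives d ≈ -1.265
```

Because d is expressed in pooled-standard-deviation units, it is unitless and comparable across outcomes measured on different scales, which is exactly the property that motivates the more general index proposed in this paper.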