Rejecting situational specificity (SS) in meta-analysis requires assuming that residual variance in observed correlations is due to uncorrected artifacts (e.g., calculation errors). To test that assumption, 741 aggregations from 24 meta-analytic articles representing seven industrial and organizational (I-O) psychology domains (e.g., cognitive ability, job interviews) were coded for moderator subgroup specificity. In support of SS, increasing subgroup specificity yielded lower mean residual variance per domain, averaging a 73.1% drop. Precision in mean rho (i.e., low SD(rho)) adequate to permit generalizability was typically reached at SS levels high enough to challenge generalizability inferences (hence, the “myth of generalizability”). Further, and somewhat paradoxically, the decrease in K (the number of studies per aggregation) that accompanies increasing precision undermines certainty in mean r and Var(r) as meta-analytic starting points. Consistent with these concerns, only 4.6% of the 741 aggregations met defensibly rigorous generalizability standards. Four key questions guiding generalizability inferences are identified to advance meta-analysis as a knowledge source.
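For readers unfamiliar with the notation, the quantities named above (Var(r), residual variance, and SD(rho)) are assumed here to follow the standard Hunter-Schmidt psychometric decomposition; the sketch below restates them under that assumption, with the compound artifact attenuation factor A introduced purely for illustration and not taken from the abstract itself.

% Minimal sketch, assuming the Hunter-Schmidt framework:
% residual variance = observed variance minus variance expected from corrected artifacts,
% and SD(rho) places the residual on the corrected (true-score) metric.
\[
\sigma^{2}_{res} \;=\; \sigma^{2}_{r} - \sigma^{2}_{e},
\qquad
SD_{\rho} \;=\; \frac{\sqrt{\sigma^{2}_{res}}}{A},
\]
where \(\sigma^{2}_{r}\) is the observed variance of correlations across the K studies in an aggregation, \(\sigma^{2}_{e}\) is the variance attributable to corrected artifacts (e.g., sampling error, unreliability, range restriction), and \(A\) is the mean compound attenuation factor. Under this reading, rejecting SS amounts to attributing a nonzero \(\sigma^{2}_{res}\) to uncorrected artifacts rather than to real moderators.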