The methodological ideal of experimentalists, E, is easily stated: derive a testable hypothesis, H, from a well-specified theory, T, and implement a test of H with an experimental design. Implicit in that design are auxiliary hypotheses, A, which surface in the review and discussion of completed research reports (payoffs are ‘adequate,’ subjects (Ss) are ‘relevant,’ instructions and context are ‘clear,’ etc.). We want to be able to conclude, when statistical test outcomes support not-H, that T is ‘falsified.’ But this is not what we do; rather, we ask whether there is a flaw in the test, that is, whether not-A is supported, and we do more experiments. This is good practice, and much better than the statistical rhetoric of falsificationism. Undesigned social processes allow E to accumulate the technical and instrumental knowledge that drives the reduction of experimental error; these processes constitute a more coherent methodology than falsificationism.
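
The logic of this testing situation can be sketched in standard Duhem–Quine form (the schematic notation is added here for clarity, under the reading that H is derived jointly from T and A):

\[
(T \land A) \Rightarrow H, \qquad \text{hence} \qquad \neg H \Rightarrow (\neg T \lor \neg A).
\]

An outcome supporting not-H falsifies only the conjunction of T with A; logic alone cannot locate the failure in T rather than in some auxiliary hypothesis, which is why E's working response is to probe A with further experiments rather than to declare T falsified.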