Estimation in exploratory factor analysis often yields estimates on the boundary of the parameter space. Such occurrences, known as Heywood cases, are characterized by non-positive estimates of unique variances and can cause numerical instability, convergence failures, and misleading inferences. We derive sufficient conditions on the model and on a penalty to the log-likelihood that guarantee the existence of maximum penalized likelihood estimates in the interior of the parameter space, and that the corresponding estimators possess the asymptotic properties expected of the maximum likelihood estimator, namely consistency and asymptotic normality. These properties follow when the penalization is soft enough, in a way that adapts to how information about the model parameters accumulates. We formally show, for the first time, that the penalties of Akaike (1987, Psychometrika, 52, 317–332) and Hirose et al. (2011, Journal of Data Science, 9, 243–259) to the log-likelihood of the normal linear factor model satisfy the conditions for existence and, hence, resolve Heywood cases. Used in their original form, though, these penalties can lead to questionable finite-sample performance in estimation, inference, and model selection. Our maximum softly-penalized likelihood (MSPL) framework ensures that the resulting estimation and inference procedures are asymptotically optimal. Through comprehensive simulation studies and real-data analyses, we illustrate the desirable finite-sample properties of the MSPL estimators.
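As a rough sketch of the soft-penalization idea, in illustrative notation of our own (the symbols $\ell_n$, $P$, and $a_n$ below are assumptions, not taken from the paper's body), the penalty can be down-weighted by a sequence that diverges as information accumulates:
\[
  % hedged sketch: penalty scaled to vanish asymptotically
  \ell^{\mathrm{SP}}_n(\theta) \;=\; \ell_n(\theta) + \frac{1}{a_n}\, P(\theta),
  \qquad a_n \to \infty,
\]
where $\ell_n$ is the log-likelihood from $n$ observations and $P$ is a penalty, such as Akaike's, that diverges to $-\infty$ as unique variances approach the boundary. Under this scaling, the penalty keeps the maximizer in the interior of the parameter space for every finite $n$, while its asymptotically negligible contribution lets the estimator inherit the consistency and asymptotic normality of the maximum likelihood estimator.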