In this study, we investigate Hungarian Plain Language (PL) and Simple Language (SL) with the primary objective of training a machine-learning-based sentence-level PL model that flags sentences where expert intervention may be needed during PL-oriented rewriting. The analysis uses a legal-administrative PL corpus and a news-based SL corpus, currently the only publicly available high-quality Hungarian resources for PL and SL. Because PL data are typically scarce in such low-resource settings, selective data augmentation is a natural candidate for improving model performance. Our aims are threefold: (i) to provide a feature-based descriptive comparison of these Text Simplification resources; (ii) to test whether selectively chosen SL sentences can augment PL training data; and (iii) to evaluate the impact of such augmentation on sentence-level PL detection. Methodologically, we extract handcrafted linguistic features spanning surface, morphosyntactic and discourse properties. We derive a PL-likeness score from logistic-regression coefficients and use it to select the SL sentences most similar to PL for augmentation, followed by supervised sentence-level PL detection with XLM-RoBERTa-large. Results show clear differences between PL and SL in sentence length, lexical diversity, syntactic depth and connective use. Selective inclusion of SL sentences yields modest gains in constrained settings, whereas indiscriminate mixing reduces precision and reliability.
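The scoring-and-selection step described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual pipeline: the feature values are synthetic stand-ins for the handcrafted features (e.g. sentence length, lexical diversity, parse depth, connective counts), and the threshold is an assumed, illustrative cut-off.

```python
# Hypothetical sketch: fit a logistic regression on handcrafted features,
# derive a PL-likeness score as P(PL | features) -- the sigmoid of the
# learned coefficient/feature dot product -- and keep only the most
# PL-like SL sentences for augmentation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy feature matrices: rows = sentences, columns = handcrafted features
# standing in for surface, morphosyntactic and discourse properties.
pl_feats = rng.normal(loc=0.0, scale=1.0, size=(200, 4))      # label 1: PL
non_pl_feats = rng.normal(loc=1.5, scale=1.0, size=(200, 4))  # label 0: non-PL

X = np.vstack([pl_feats, non_pl_feats])
y = np.concatenate([np.ones(200), np.zeros(200)])

clf = LogisticRegression().fit(X, y)

# PL-likeness score for candidate SL sentences.
sl_feats = rng.normal(loc=0.8, scale=1.0, size=(100, 4))
pl_likeness = clf.predict_proba(sl_feats)[:, 1]

# Selective augmentation: keep only SL sentences scoring above a cut-off
# (the 0.7 threshold here is illustrative, not from the study).
threshold = 0.7
selected = np.where(pl_likeness >= threshold)[0]
```

In practice the selected SL sentences would then be merged into the PL training data before fine-tuning the XLM-RoBERTa-large sentence classifier; only the selection logic is shown here.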