The use of observational methodology has become increasingly common in psychological research, highlighting the need for tools that ensure methodological rigor. This study presents evidence of the convergent and discriminant validity of the Methodological Quality Scale for Studies Based on Observational Methodology (MQSOM). A multitrait-multimethod (MTMM) analysis based on Spearman’s correlations was used to examine the relationships between the MQSOM dimensions and those of three instruments: the Methodological Rigor in Mixed Methods (MRMM), the Guidelines for Reporting Evaluations Based on Observational Methodology (GREOM), and the Mixed Methods Appraisal Tool (MMAT). Ninety-six articles were coded using the MQSOM and the three comparison instruments. The MQSOM’s design dimension converged with the MRMM’s mixed-methods design (ρ = .217, p = .034), the GREOM’s design (ρ = .217, p = .034), and the MMAT’s qualitative (QUAL) component (ρ = .212, p = .038). The MQSOM’s measurement and analysis dimension aligned with the MRMM’s data analysis (ρ = .611, p < .001), the GREOM’s data quality control (ρ = .423, p < .001) and results (ρ = .328, p = .001), and the MMAT’s quantitative (QUANT) (ρ = .214, p = .037) and mixed-methods (ρ = .643, p < .001) components. The MQSOM’s design dimension exhibited discriminant validity with respect to the MRMM’s data collection (ρ = .025, p = .807) and data analysis (ρ = −.051, p = .620), the GREOM’s data quality control (ρ = .025, p = .812) and results (ρ = −.032, p = .759), and the MMAT’s QUANT component (ρ = −.035, p = .733). These findings reinforce the validity of the MQSOM as a useful scale for assessing methodological quality.
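The MTMM analysis above rests on pairwise Spearman correlations between instrument dimensions (high correlations indicating convergence, near-zero correlations indicating discrimination). A minimal pure-Python sketch of Spearman’s ρ is shown below; the dimension scores are hypothetical illustrations, not the study’s data.

```python
# Illustrative sketch (not the authors' code): Spearman's rank correlation
# between two instrument dimensions, as used in an MTMM analysis.

def ranks(xs):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j to cover the whole group of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical dimension scores for ten coded articles:
mqsom_design = [3, 4, 2, 5, 4, 3, 5, 2, 4, 3]
greom_design = [2, 4, 3, 5, 4, 2, 5, 3, 4, 2]
print(round(spearman(mqsom_design, greom_design), 3))
```

In practice such correlations would typically be computed with `scipy.stats.spearmanr`, which also returns the p-values reported above; the hand-rolled version is shown only to make the rank-based logic explicit.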