Recent advances in language technology have opened new avenues in political science for automating and improving the analysis of survey data across diverse cultural contexts. This article examines how effectively language models (LMs) analyze open-ended survey responses about democracy from ten countries, contrasting these tools with traditional survey methodologies. Using a predefined coding scheme and a subset of pre-annotated survey data, it assesses how well fine-tuned pre-trained LMs classify text spans in a multilingual setting. The findings suggest that LMs can capture perceptions of democracy and handle data abstraction at a level comparable to that of human annotators. The study thus highlights the potential of LMs to transform political science research by augmenting traditional methods, and it discusses the practical application of pre-trained LMs, working alongside human annotators, to the classification of complex survey responses.
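The core method described above, fine-tuning a pre-trained multilingual LM so that it labels spans of a survey response according to a coding scheme, can be sketched as follows. This is a minimal illustration using the Hugging Face Transformers library, not the paper's actual pipeline: the model choice (`xlm-roberta-base`), the BIO-style label set, the toy training example, and all hyperparameters are assumptions made for the sketch.

```python
# Minimal sketch: fine-tuning a multilingual pre-trained LM for span
# classification over open-ended survey responses. The codes below
# (FREEDOM, ELECTIONS) are hypothetical stand-ins for a real coding scheme.
import torch
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

labels = ["O", "B-FREEDOM", "I-FREEDOM", "B-ELECTIONS", "I-ELECTIONS"]
id2label = dict(enumerate(labels))
label2id = {lab: i for i, lab in enumerate(labels)}

# A multilingual backbone lets one model cover responses in many languages.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(labels),
    id2label=id2label, label2id=label2id)


class SpanDataset(torch.utils.data.Dataset):
    """Wraps pre-annotated responses: (word list, per-word label ids)."""

    def __init__(self, examples):
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        words, word_labels = self.examples[idx]
        enc = tokenizer(words, is_split_into_words=True, truncation=True,
                        padding="max_length", max_length=128)
        # Align word-level labels to subword tokens: label only the first
        # subword of each word; mask the rest (and special tokens) with -100.
        aligned, prev = [], None
        for wid in enc.word_ids():
            aligned.append(-100 if wid is None or wid == prev
                           else word_labels[wid])
            prev = wid
        enc["labels"] = aligned
        return {k: torch.tensor(v) for k, v in enc.items()}


# Toy annotated example standing in for the pre-annotated survey subset.
train_data = SpanDataset([
    (["Free", "elections", "matter"],
     [label2id["B-ELECTIONS"], label2id["I-ELECTIONS"], label2id["O"]]),
])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=train_data,
)
trainer.train()
```

In a human-in-the-loop setup of the kind the article discusses, a model fine-tuned this way would propose span labels that human annotators then review and correct, rather than replacing annotation outright.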