Humans tend to treat fluent language as a marker of intelligence and trustworthiness, often independently of factual accuracy. Large language models (LLMs) exploit this bias by producing confident, human-like text that is perceived as intelligent and trustworthy, even when it lacks accurate contextual understanding or is factually incorrect. This creates particular risks in mental healthcare, where communication, trust and context are central, and where errors are difficult to detect but highly consequential. This article examines how linguistic fluency shapes judgement, how LLMs amplify these effects and why their use in mental healthcare poses ethical and clinical dangers. It argues for strict limits on deployment, restricting LLMs to supervised, assistive tasks rather than clinical judgement.