Bilinguals are known to perform worse than monolinguals on speech-in-noise tests, but the mechanisms underlying this difference are unclear. By varying the amount of linguistic information available in the target stimulus across five auditory perception-in-noise tasks, we tested whether differences in language-independent (sensory/cognitive) or language-dependent (extracting linguistic meaning) processing could account for this disadvantage. We hypothesized that language-dependent processing differences underlie the bilingual disadvantage and predicted that the disadvantage would manifest on perception-in-noise tasks that use linguistic stimuli. We found that performance differences between bilinguals and monolinguals varied with the linguistic processing demands of each task: early, high-proficiency Spanish–English bilingual adolescents performed worse than English monolingual adolescents when perceiving sentences in noise, similarly when perceiving words, and better when perceiving tones. This pattern suggests that bottlenecks in language-dependent processing underlie the bilingual disadvantage, while language-independent perception-in-noise processes are enhanced.