Phonotactic constraints, frequency, and legality in English onset-cluster perception.

When a speech signal is acoustically ambiguous between two parses, perception is biased in favor of native-language sound patterns. What determines the size of the bias? We consider factors arising from (1) the sound pattern of the language (frequency of each parse, or phonotactic constraints?) and (2) the link between the sound pattern and the perceptual response (does bias depend on the relative or on the absolute badness of the two parses?). The experiment exploited the badness of same-place stop-sonorant clusters (e.g., [dl]) in English syllable onsets. Stimuli were synthetic [CCæ] syllables in which the second C was ambiguous between [l] and [w]. In one condition, the first C was ambiguous between [b] and [d], so that two of the four parses were attested English onsets ([blæ dwæ]); in the other, between [m] and [n], so that none were. Bias was quantified as the effect of the stop decision on the sonorant decision. Results showed bias in the b/d condition, but little or none in the m/n condition. This, together with previous findings, favors a model in which (1) the sound pattern is represented by phonotactic constraints, and (2) bias depends on absolute rather than relative phonotactic badness.
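
The bias measure is stated only informally above ("the effect of the stop decision on the sonorant decision"). As a rough sketch, and not the analysis actually used in the study, one could quantify it per condition as the shift in the log-odds of an [l] response conditioned on the listener's stop response; the function names, the trial format, and the smoothing constant below are illustrative assumptions.

```python
import math
from collections import Counter

def log_odds(l_count, w_count, smoothing=0.5):
    """Log-odds of an [l] response, with add-0.5 smoothing to avoid log(0)."""
    return math.log((l_count + smoothing) / (w_count + smoothing))

def bias(trials, labial_stop, coronal_stop):
    """
    trials: (stop_response, sonorant_response) pairs from one condition,
    e.g. ("b", "l") in the b/d condition or ("n", "w") in the m/n condition.
    Returns the shift in log-odds of an [l] response when the first C is
    heard as the labial stop versus the coronal stop.  A positive value in
    the b/d condition means the stop decision pulls the sonorant decision
    toward the attested parses ([bl], [dw]) and away from *[dl] and *[bw].
    """
    counts = Counter(trials)
    labial = log_odds(counts[(labial_stop, "l")], counts[(labial_stop, "w")])
    coronal = log_odds(counts[(coronal_stop, "l")], counts[(coronal_stop, "w")])
    return labial - coronal

# Pattern reported in the results, under this (assumed) metric:
# bias(bd_trials, "b", "d") clearly positive; bias(mn_trials, "m", "n")
# near zero.  (bd_trials and mn_trials are hypothetical per-condition
# response lists.)
```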