Seminar: 8/28 - Roger Levy

11:00 AM to 12:30 PM, Tolman 5101

Department of Linguistics
University of California, San Diego
"Uncertain Input in Rational Human Sentence Comprehension."

Considering the adversity of the conditions under which linguistic communication takes place in everyday life---ambiguity of the signal, environmental competition for our attention, speaker error, our limited memory, and so forth---it is perhaps remarkable that we are as successful at it as we are.  Perhaps the leading explanation of this success is that (a) the linguistic signal is redundant, (b) diverse information sources are generally available that can help us infer the intended message (or something close enough) when comprehending an utterance, and (c) we use these diverse information sources very quickly and to the fullest extent possible.  This explanation can be thought of as treating language comprehension as a rational, evidential process.

Nevertheless, there are a number of prominent phenomena reported in the sentence processing literature that remain clear puzzles for the rational approach.  In this talk I address two such phenomena, whose common underlying thread is an apparent failure to use information available in a sentence appropriately in global or incremental inferences about the correct interpretation of that sentence.  I argue that the apparent puzzle posed by these phenomena for models of rational sentence comprehension may derive from the failure of existing models to appropriately account for the environmental and cognitive constraints---in this case, the inherent uncertainty of perceptual input, and humans' ability to compensate for it---under which comprehension takes place.

I present a new probabilistic model of language comprehension under uncertain input and show that this model leads to solutions to the above puzzles.  I also present behavioral data in support of novel predictions made by the model.  More generally, I suggest that appropriately accounting for environmental and cognitive constraints in probabilistic models can lead to a more nuanced and ultimately more satisfactory picture of key aspects of human cognition.
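The "rational, evidential" view sketched above can be illustrated in miniature as Bayesian inference over a noisy percept: the comprehender weighs how well each candidate word explains the perceived input against that word's prior plausibility.  The following sketch is purely illustrative (it is not Levy's model); the toy lexicon, the uniform prior, and the edit-distance noise model are all invented assumptions.

```python
# Illustrative noisy-channel sketch (NOT the model presented in the talk):
# P(intended | perceived) is proportional to P(perceived | intended) * P(intended).

def edit_distance(a, b):
    """Levenshtein distance via standard dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

def posterior(perceived, lexicon, prior, noise_rate=0.1):
    """Posterior over intended words given a (possibly misperceived) input.

    Likelihood is an assumed toy noise model: each edit costs a factor
    of `noise_rate`, so a perfect match has likelihood 1.
    """
    unnorm = {w: (noise_rate ** edit_distance(perceived, w)) * prior[w]
              for w in lexicon}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

# Toy usage: with a uniform prior, the closest word dominates,
# but a strong prior can pull the inference toward another candidate.
lexicon = ["cat", "cab", "dog"]
uniform = {w: 1 / len(lexicon) for w in lexicon}
print(posterior("cat", lexicon, uniform))
```

The point of the sketch is only the shape of the computation: the percept never fully determines the interpretation; prior expectations and the noise model jointly decide it.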