Institute of Cognitive and Brain Sciences

University of California at Berkeley
3210 Tolman Hall MC 1650
Berkeley, CA 94720-1650

Administration support for the Institute is provided by the staff of the Helen Wills Neuroscience Institute. See the administration page for help and information.


All talks are in 5101 Tolman Hall, 11am-12:30pm.

September 4

Minds, Brains, and Cookies Social

Come talk about your recent research and eat cookies!

September 11

Presenter: Anca Dragan, UC Berkeley

Title: Robots that reason about people

Abstract: The goal of my research is to enable robots to work with, around, and in support of people, autonomously producing behavior that reasons about both their function and their interaction with humans. I aim to develop a formal understanding of interaction that leads to algorithms which are informed by mathematical models of how humans interact with robots, enabling generalization across robot morphologies and interaction modalities. In this talk, I will focus on one specific instance of this agenda: autonomously generating motion for coordination during human-robot collaborative manipulation. Most motion in robotics is solely functional: industrial robots move to package parts, vacuuming robots move to suck dust, and personal robots move to clean up a dirty table. This type of motion is ideal when the robot is performing a task in isolation. Collaboration, however, does not happen in isolation, and demands that we move beyond solely functional motion. In collaboration, the robot's motion has an observer, watching and interpreting the motion - inferring the robot's intent from the motion, and anticipating the robot's motion based on its intent. My work integrates a mathematical model of these inferences into motion planning, so that the robot can generate motion that matches people's expectations and clearly conveys its intent. In doing so, I draw on action interpretation theory, Bayesian inference, constrained trajectory optimization, and interactive learning. The resulting motion not only leads to more efficient collaboration, but also increases the fluency of the interaction as defined through both objective and subjective measures.

October 2

Presenter: Alexei Efros, UC Berkeley

Title: Visual understanding without naming

Abstract: Most modern visual understanding approaches rely on supervision by word labels to achieve their impressive performance. But there are many more things in our visual world than we have words to describe them with. Using words as supervisory signal risks missing out on much of this visual subtlety. In this talk, I will describe some of our recent efforts to bypass this "language bottleneck" and instead use information that is already in the data, such as context and visual consistency, to help in visual understanding, visual correspondence, and image retrieval.

October 30

Presenter: Florian Jaeger, University of Rochester

Title: Processing, Communication, and Cross-linguistic Generalizations

Abstract: I'll focus on a long-standing question in the language sciences - the origin of cross-linguistic generalizations (specifically, "statistical universals"). I present efforts to test whether some of these generalizations can be derived from how language is used to communicate. Case study 1 asks whether actual natural languages have syntactic properties that increase processing efficiency (Gildea & Jaeger, submitted). We focus on two properties well known to affect processing efficiency (processing time/word): dependency length and surprisal. Using data from five large syntactically annotated corpora, we find that natural languages have lower information density and shorter dependency lengths than expected by chance. Such findings are intriguing (see also Gildea & Temperley, 2010; Merlo, 2015; Futrell et al., 2015) but are based on correlations, rather than direct tests of the hypothesized causes (e.g., biases during language production or acquisition leading to deviations from the input, which then accumulate over generations to create statistical patterns across languages). In other lines of research, my lab has been investigating whether pressures of language use can indeed cause language users to change the statistics of their output, e.g., by making their productions more efficient for message transfer. This would provide the beginning of a plausible causal chain underlying (some) cross-linguistic generalizations. Case studies 2-4 employ a miniature language learning approach to this question (Fedzechkina et al., 2011, 2013, under review; Fedzechkina & Jaeger, 2015). I present initial evidence that the miniature languages spoken by learners after successful acquisition indeed have higher processing efficiency (e.g., shorter dependency lengths) and communicative efficiency (effort/bits of uncertainty reduction) than the input languages presented to them. I'll discuss why these changes cannot be reduced to biases inherited from previously learned languages, suggesting that they are more abstract. Finally, time permitting, I present Case study 5 (Buz, Tanenhaus, & Jaeger, 2014, under review), which investigates whether similar biases continuously operate even during language production in adult native speakers.

November 13

Presenter: Liane Young, Boston College

Title: The structure of morality

Abstract: The capacity to process mental states like beliefs and intentions, theory of mind (ToM), is crucial for moral judgment (e.g., distinguishing murder from manslaughter). In this talk, we'll look at the role of ToM not just for moral judgment but also for moral behavior across distinct social contexts (e.g., cooperation vs. competition), as well as for distinguishing moral propositions from non-moral propositions (i.e., facts, preferences). We will use the approach of looking at the role of ToM to investigate the structure of morality - to test claims about distinct moral domains, distinct moral motivations, and distinct features of moral versus non-moral processing. The talk will include neural evidence as well as behavioral evidence from adults and children.

November 20

Presenter: Melissa Koenig, University of Minnesota

Title: Characterizing two routes to testimonial knowledge: Sources of protection and vulnerability

Abstract: Much of what we know we learn from others. Learning from testimony involves making at least two kinds of estimates about speakers: estimates of their knowledge and estimates of their responsibility. In this talk, I will discuss two routes taken by testimonial learners, and specify the protective mechanisms, as well as the vulnerabilities, that characterize these two routes. On the evidential route, children showcase three types of protective mechanisms that help buffer against the risks of misinformation. First, children detect errors or conflicting information of various kinds, leading them to spontaneously scrutinize sources. Second, children show enhanced memory for negatively-marked sources. Third, when children have no privileged knowledge about the speaker, they make epistemic inferences based on subtle properties of the testimony, in light of what they know about the domain. I'll end by arguing that children's evidential reasoning leaves mysterious why they fail to anticipate overtly deceptive bids, and I'll suggest that children's difficulty with deception derives not from evidential, conceptual, or epistemic limitations, but from interpersonal judgments of responsibility. Findings will be discussed in relation to categories of protection that are shared with adults, as well as standing questions about how to think about children's trust in testimony.

December 4

Presenter: Sharon Thompson-Schill, University of Pennsylvania

Title: Conceptual Integration

Abstract: For the past two decades, I (and many of my colleagues in the fields of cognitive psychology, neuropsychology, and neuroscience) have been trying to understand the cognitive and neural structure of long-term memory for concepts by "taking the concepts apart". This has led to a very feature-centric approach to concepts (i.e., a lime is green, round, and tart; a carrot is orange, tubular, and sweet). In recent years my group has begun to attempt to "put concepts back together", specifically by focusing on the integration of features into concepts and the integration of simple concepts into complex concepts. In this seminar, I will present some new fMRI research demonstrating our approach to understanding conceptual integration. To begin, I will illustrate how we can measure both segregation and integration of two visual features (shape and color), and I will briefly comment on the role of feature diagnosticity in their integration. Next, I will report a new finding regarding integration of an abstract feature (value) with a visual feature (shape). And finally, I will discuss our recent foray into the study of conceptual combination, and some preliminary evidence for the functional specialization of two anatomically-distinct conceptual hubs. If there is time, I will also touch on related work concerning fast-mapping of new concepts and metaphor comprehension, both of which draw on our ideas about conceptual integration.