Institute of Cognitive and Brain Sciences
University of California at Berkeley
3210 Tolman Hall MC 1650
Berkeley, CA 94720-1650
Administrative support for the Institute is provided by the staff of the Helen Wills Neuroscience Institute. See the administration page for help and information.
All talks are in 5101 Tolman Hall, 11am-12:30pm.
Presenter: Gary Lupyan, University of Wisconsin, Madison
Title: Beyond the mapping metaphor: the role of words in human cognition
Abstract: A common assumption in psychology and linguistics is that words map onto pre-existing meanings. I will argue that this mapping metaphor is mistaken and that words play a much more central role in creating meaning than is generally acknowledged. I will present a range of empirical evidence for the functions of language beyond communication, focusing on categorization and visual perception. On the presented view, many of the unique aspects of human cognition stem from the power of words to create categories from perceptual representations, allowing language to act as a high-level control system for the mind.
Minds, brains, and cookies social
Come talk about your recent research over coffee and cookies!
Presenter: Amanda Woodward, University of Chicago
Presenter: Rebecca Spencer, University of Massachusetts, Amherst
Title: Developmental and aging-related changes in sleep's role in cognition
Abstract: Sleep contributes to cognitive function. For instance, overnight sleep and mid-day naps consolidate memories and enhance selective attention and emotion regulation. Our work has considered the ramifications of the cognitive functions of sleep for development and aging. Does increased sleep, via daytime naps, enhance cognition during early development? Conversely, do age-related reductions in sleep contribute to the known age-related declines in memory? These questions will be answered in the context of a preliminary model of the evolution of a memory.
Presenter: Sergey Levine, University of California, Berkeley
Title: Deep Robotic Learning
Abstract: The problem of building an autonomous robot has traditionally been viewed as one of integration: connecting together modular components, each one designed to handle some portion of the perception and decision making process. For example, a vision system might be connected to a planner that might in turn provide commands to a low-level controller that drives the robot's motors. In this talk, I will discuss how ideas from deep learning can allow us to build robotic control mechanisms that combine both perception and control into a single system. This system can then be trained end-to-end on the task at hand. I will show how this end-to-end approach actually simplifies the perception and control problems, by allowing the perception and control mechanisms to adapt to one another and to the task. I will also present some recent work on scaling up deep robotic learning on a cluster consisting of multiple robotic arms, and demonstrate results for learning grasping strategies that involve continuous feedback and hand-eye coordination using deep convolutional neural networks.
Presenter: Justin Wood, University of Southern California
Title: Building newborn minds in virtual worlds
Abstract: How do newborns learn to see and understand the world? Although philosophers and psychologists have debated the origins of the mind for centuries, two major barriers have hindered progress. First, human infants cannot be raised in strictly controlled environments from birth, so it has not been possible to examine how specific experiences shape the newborn mind. Second, infants cannot be observed continuously from the onset of vision, so it has not been possible to measure the development of visual cognition with high precision. To overcome these limitations, my lab developed an automated controlled-rearing method that can be used to measure a newborn animal's behavior continuously (24/7) within strictly controlled virtual environments. With this method, we have started constructing a large-scale input-output map, which reveals how specific sensory inputs relate to specific behavioral outputs in a newborn animal. In this talk, I will focus on our work examining how object recognition emerges in the newborn brain. Further, I will show how controlled-rearing data can be linked to models of visual cortex for characterizing the computations underlying newborn vision. I will argue that controlled rearing can serve as a critical tool for testing between different theories and models in the developmental psychology and computational neuroscience communities.
Presenter: Suzanne Stevenson, University of Toronto
Title: How Languages Carve Up the World: Modeling Developmental and Linguistic Relativity Effects
Abstract: Languages vary in how they structure the terms for a semantic domain, such as colors or spatial relations. For example, in English we say "the cup is on the table", "the ring is on the finger", and "the painting is on the wall", while Dutch speakers use a different preposition in each situation, and other languages use one preposition for the first two and a different one for the third. This kind of crosslinguistic variation raises important cognitive questions: Are all such lexical semantic systems equally easy to learn, and if not, what factors are at play? Does acquiring a particular system influence other parts of cognition - a position known as linguistic relativity? We study these issues using a computational cognitive model of word learning. We show that a novel vector-based meaning representation - based on crosslinguistic data over a domain - can be used to approximate a "universal" semantic space that captures cognitive biases. This approach to semantic representation can provide an explanation for both the developmental trajectory of words in a domain and subsequent behavior on a non-verbal task in the domain.
Presenter: Amitai Shenhav
Title: The costs of choice and the value of control
Abstract: Mechanisms for cognitive control and value-based decision-making have traditionally been studied by largely separate bodies of research, but recent work has increasingly sought to interrogate the intersection between these two. I will discuss a series of studies aimed at examining questions that arise at this intersection, beginning with two sets of studies that explore the cognitive effort costs we associate with the act of making a choice: one set of studies examines the neural circuits that drive simultaneously positive and aversive experiences of being offered multiple good options (e.g., great graduate schools to attend); the other set of studies explores the costs of considering alternatives to our default (i.e., prepotently biased) option or to our ongoing task. I will then describe a recent theoretical framework and ongoing modeling work that seeks to address another critical question at this intersection: how we weigh the costs and benefits of control itself; that is, how we determine how much and what kind of cognitive effort is worth exerting. In seeking to account for the key determinants of control allocation, this Expected Value of Control (EVC) framework aligns well with recent work on the topic of rational metareasoning and its application to the selection of cognitive strategies, while also offering a coherent account of the (much-debated) functional role of the dorsal anterior cingulate cortex across research on evaluation, motivation, and cognitive control.