The Institute for Neural Computation (INC) and the UCSD Division of the California Institute for Telecommunications and Information Technology (Calit2) are launching a lecture series focused on "The Operating System of the Brain." Coordinating the Calit2/INC Lecture Series is Javier R. Movellan, Director of the Machine Perception Laboratory, located in the Calit2 building at UCSD.
The goal of this series is to explore how the brain organizes its network of sensors and actuators to produce adaptive behavior in real time. Understanding how the brain solves this problem may help develop a new generation of robots capable of assisting people in everyday life. The series will include speakers from diverse areas including neuroscience, behavioral science, control theory, computer science, robotics and computer animation.
If you would like to subscribe to the INC Seminar/Talks mailing list, click here.
Time: 11 am - 12 pm
EBU3B, Room CSE 1202
Title / Abstract:
While several machine vision systems today can each successfully perform one or a few human tasks – such as detecting human faces in point-and-shoot cameras – they are still limited in their ability to perform a wide range of visual tasks, to operate in complex, cluttered environments, and to provide reasoning for their decisions. In contrast, the mammalian visual cortex excels in a broad variety of goal-oriented cognitive tasks, and is at least three orders of magnitude more energy efficient than customized state-of-the-art machine vision systems. This talk will highlight ongoing work toward designing a holistic machine vision system that will approach the cognitive abilities of the human cortex, by developing a comprehensive solution consisting of vision algorithms, hardware design, human-machine interfaces, and information storage.
Vijaykrishnan Narayanan is a Professor of Computer Science and Engineering and Electrical Engineering at The Pennsylvania State University. His research and teaching interests include embedded systems, computer architecture, system design using emerging device technologies, and power-aware computing. He has deep interests in cross-disciplinary advances and has led and participated in such projects. He is the deputy editor-in-chief of IEEE TCAD and served as the editor-in-chief of the ACM Journal on Emerging Technologies in Computing Systems. He has won several awards, including the 2012 ASP-DAC Ten-Year Retrospective Most Influential Paper Award, the 2012 Penn State Alumni Society Premier Research Award, and the 2010 Outstanding Alumnus Award from SVCE, India. He is a Fellow of the IEEE.
For more information, please contact the Chair's Office at (858) 822-5198 or firstname.lastname@example.org.
Sponsored by the Institute for Neural Computation (INC) and the Temporal Dynamics of Learning Center (TDLC).
The UCSD Division of the California Institute for Telecommunications and Information Technology (Calit2) supported the use of the Atkinson Hall Auditorium and related expenses for taping and broadcasting the symposium.
Atkinson Hall (Calit2 Building) Room 5302
University of California, San Diego
Time: 1:30 - 5:15 pm
Chair: Terry Sejnowski - Salk/UCSD
Larry Abbott, Columbia University
"Multiple Time Scales of Neuronal Information Processing"
Biological systems, including neural circuits, typically display dynamics over an enormous range of timescales, something that distinguishes them dramatically from most nonliving systems. I will discuss the implications of multi-timescale dynamics for adaptation and memory.
Kwabena Boahen, Stanford University, Bioengineering Dept.
"Neurogrid: Emulating a Million Neurons in the Cortex"
Recent breakthroughs in brain mapping present an unprecedented opportunity to understand how the brain works, with profound implications for society. To interpret these richly growing observations, we have to build models—the only way to test our understanding—since building a real brain out of biological parts is currently infeasible. I will present a proposal for Neurogrid, a specialized hardware platform that will perform cortex-scale emulations while offering software-like flexibility. Neurogrid will emulate (simulate in real time) one million neurons connected by six billion synapses with analog VLSI techniques, matching the performance of a one-megawatt, 500-teraflop supercomputer while consuming less than one watt. Neurogrid will provide the programmability required to implement various models, replicate experimental manipulations (and controls), and elucidate mechanisms by augmenting analog VLSI with digital VLSI, a mixed-mode approach that combines the best of both worlds. Realizing programmability without sacrificing scale or real-time operation will make it possible to replicate tasks laboratory animals perform in biologically realistic models for the first time, which my lab plans to pursue in close collaboration with neurophysiologists.
3:00 PM Break
Michael Breakspear, University of Sydney
"Unpacking the brain into multiscale space: Methods, evidence and models"
Both the architecture and the dynamics of the brain have characteristic features at different spatial scales. However, the existence, nature and function of dynamical interdependencies between such scales have not been investigated. "Wavelets" - hierarchical families of functions - are natural candidates for modelling and analysing multiscale systems. In this talk, we briefly explicate wavelet functions and then show how they can be used to understand complex neural dynamics. For example, wavelet decompositions of neural models reveal both scale-free and scale-dependent dynamics with strong interdependencies between scales. Human functional neuroimaging studies of the visual system also show evidence for multiscale interactions. Finally, we explicate a novel theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture and link this to computational theories of brain organization.
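As a minimal illustration of the kind of multiscale decomposition wavelets provide (a hypothetical Haar-transform sketch, not the analysis used in the talk), the following splits a signal into detail coefficients at successively coarser scales, so that variance can be compared across scales:

```python
import numpy as np

def haar_decompose(signal, levels):
    """Recursively split a signal into a coarse approximation and
    detail coefficients, one set of details per scale."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))   # fine-scale fluctuations
        approx = (even + odd) / np.sqrt(2)          # coarse-scale trend
    return approx, details

# A toy signal with structure at two scales: a slow oscillation plus fast noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
x = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(64)

coarse, details = haar_decompose(x, levels=3)
# Energy per scale shows how the signal's variance distributes across scales
energies = [float(np.sum(d ** 2)) for d in details]
```

Because the normalized Haar transform is orthonormal, the energies of the detail bands and the final coarse band sum to the energy of the original signal, which is what makes per-scale comparisons meaningful.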
Robert Knight, UC Berkeley
"Ultra high gamma in the human electrocorticogram"
Sponsor: Institute for Neural Computation and CALIT2
Affiliation: Kavli Institute for Brain and Mind, UCSD
Atkinson Hall (Calit2 Building) Room 5302
University of California, San Diego
"Geometric Planning in the Posterior Parietal Cortex: Learning Time from Space"
We will investigate the problem of how the brain generates reaching movements. We propose that the brain uses a geometric planner (GP) as an intermediate stage between perception and action. The role of the GP is to provide spatial paths independently of the movement dynamics. This implies that earlier sensory inputs to areas in the perceptual system are sufficient to formulate a temporal estimate of movement based on distances, rather than having to rely on later feedback from the actual execution of the action. We characterize the behavior of the GP with a simple differential equation that links the task space with an abstract representation of the biomechanics to encode postures. Our objective function is the notion of distance defined by the goals of a task, so it changes as a function of the action's purpose. This scalar function, which operates in the posture and task spaces, gives the error from task completion. Its gradient provides the direction that changes the arm posture so that the system gets closer to the goals. The paths thus generated are length minimizers (time-invariant geodesics) with respect to a task-specific distance measure. We have shown that if the equation runs recursively, it provides the spatial path useful to estimate movement duration. If the equation unfolds iteratively, the guiding geometric signal at each step can be paired with the execution dynamics for on-line error correction. We recorded from neurons in the Parietal Reach Region (PRR) concurrently with arm behavior. The results suggest involvement of the PRR cells in the distance-based (geometric) planning of spatio-temporal aspects of a pending trajectory.
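The gradient-following idea in the abstract can be sketched in a toy form. The code below is a hypothetical illustration (a two-link planar arm and a squared hand-to-goal distance as the task objective; all names and choices are assumptions, not the authors' model): it descends the gradient of a scalar task-distance function in posture space, producing a spatial path with no dynamics or timing involved.

```python
import numpy as np

# Toy 2-link planar arm: posture = two joint angles, task space = hand position.
LINK1, LINK2 = 1.0, 0.8

def hand_position(q):
    """Forward kinematics: map a posture (joint angles) into task space."""
    x = LINK1 * np.cos(q[0]) + LINK2 * np.cos(q[0] + q[1])
    y = LINK1 * np.sin(q[0]) + LINK2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def task_distance(q, goal):
    """Scalar 'error from task completion' (here: squared hand-goal distance)."""
    return float(np.sum((hand_position(q) - goal) ** 2))

def geometric_path(q0, goal, step=0.05, tol=1e-4, max_iters=2000):
    """Follow the negative gradient of the task distance in posture space.
    The output is a purely spatial path: no velocities, forces, or timing."""
    q = np.array(q0, dtype=float)
    path = [q.copy()]
    eps = 1e-6
    for _ in range(max_iters):
        # Numerical gradient of the objective with respect to posture
        grad = np.zeros_like(q)
        for i in range(len(q)):
            dq = np.zeros_like(q)
            dq[i] = eps
            grad[i] = (task_distance(q + dq, goal)
                       - task_distance(q - dq, goal)) / (2 * eps)
        q = q - step * grad              # move posture toward task completion
        path.append(q.copy())
        if task_distance(q, goal) < tol:
            break
    return np.array(path)

goal = np.array([1.2, 0.9])              # a reachable target for this arm
path = geometric_path(q0=[0.3, 0.5], goal=goal)
```

Because the objective is defined by the task's goal, swapping in a different distance function changes the generated path without touching the planner, which mirrors the abstract's point that the distance measure depends on the action's purpose.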
Elizabeth Torres is a postdoctoral fellow in computational and neural systems at the California Institute of Technology (Caltech). She received her Ph.D. in cognitive science from UCSD in 2001, and her thesis explored a "theoretical framework for the study of sensory-motor integration." Torres is interested in the study of goal-directed movement, in particular natural arm movements in the context of reaching for and grasping an object, and her work integrates both behavioral and neurophysiological perspectives. Born in Havana City, Cuba, Torres began university there but later transferred to San Jose State University, where she earned her B.S. in mathematics and computer science.