INC Computational Neuroscience Seminar Series

The Institute maintains a lively seminar series that brings distinguished researchers working at the forefront of neural computation to campus.

Organized by:
Institute for Neural Computation and Institute of Engineering in Medicine

Coming Up

Fall 2022: TBD

When: TBD
Where: Hybrid
In-person: Fung Auditorium, Powell-Focht Bioengineering Hall Rm 191
Over Zoom: https://ucsd.zoom.us/j/2888083696
Meeting ID: 288 808 3696

Past Talks

2015-5-11

Jumping ahead from 1/3 to 1/2 of understanding image-based saliency

Matthias Kümmerer, Werner Reichardt Centre for Integrative Neuroscience, Max Planck Institute for Biological Cybernetics, Tübingen

Abstract: Among the wide range of complex factors driving where people look, the properties of an image that are predictive for fixations under free viewing conditions have been studied most extensively. Here we frame saliency models probabilistically as point processes, allowing the calculation of log-likelihoods and bringing saliency evaluation into the domain of information theory. We compare the information gain of all high-performing state-of-the-art models to a gold standard and find that only one third of the explainable spatial information is captured. Thus, contrary to previous assertions, purely spatial saliency remains a significant challenge. Our probabilistic approach also offers a principled way of understanding and reconciling much of the disagreement between existing saliency metrics. Finally, we present a novel way of reusing existing neural networks that have been pre-trained on the task of object recognition in models of fixation prediction. Using the well-known network "AlexNet" developed by Krizhevsky et al. (2012), we come up with a new saliency model, "Deep Gaze I", that accounts for high-level features like objects and pop-out. It significantly outperforms previous state-of-the-art models on the MIT Saliency Benchmark and explains more than half of the explainable information. Joint work with Thomas Wallis, Lucas Theis, and Matthias Bethge.
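
For readers who want to experiment with this information-theoretic evaluation, here is a minimal Python sketch. Everything in it is synthetic (a made-up Gaussian "saliency map" and fixations drawn near it); it only illustrates how information gain over a baseline model is computed in bits per fixation, and is not the authors' code or data.

import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

def normalize(density):
    # Turn a non-negative map into a probability distribution over pixels.
    density = np.asarray(density, dtype=float)
    return density / density.sum()

# Hypothetical "saliency map": a single Gaussian blob. The baseline here is
# uniform; a center-bias model would be the more standard baseline.
y, x = np.mgrid[0:H, 0:W]
model = normalize(np.exp(-((x - 40) ** 2 + (y - 24) ** 2) / (2 * 8.0 ** 2)))
baseline = normalize(np.ones((H, W)))

# Synthetic "human" fixations, drawn near the blob.
fix_x = np.clip(rng.normal(40, 10, 200).astype(int), 0, W - 1)
fix_y = np.clip(rng.normal(24, 10, 200).astype(int), 0, H - 1)

# Information gain = average log-likelihood ratio, in bits per fixation.
ig = np.mean(np.log2(model[fix_y, fix_x]) - np.log2(baseline[fix_y, fix_x]))
print(f"information gain over uniform: {ig:.2f} bits/fixation")

A model that captured all of the explainable information would match the information gain of a gold standard built from other observers' fixations.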

2015-4-6

The Functional Contribution of Synaptic Complexity to Learning and Memory

Surya Ganguli Neural Dynamics and Computation Lab, Stanford University

Abstract: An incredible gulf separates theoretical models of synapses, often described solely by a single scalar value denoting the size of a postsynaptic potential, from the immense complexity of molecular signaling pathways underlying real synapses. To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states. Moreover, theoretical considerations alone demand such an expansion; network models with scalar synapses assuming finite numbers of distinguishable synaptic strengths have strikingly limited memory capacity. This raises a fundamental question: how does synaptic complexity give rise to memory? To address this, we develop new mathematical theorems elucidating the relationship between the structural organization and memory properties of complex synapses that are themselves molecular networks. Furthermore, in proving such theorems, we uncover a framework, based on first passage time theory, to impose an order on the internal states of complex synaptic models, thereby simplifying the relationship between synaptic structure and function.

We also apply our theories to model the time course of gain changes during learning in the rodent vestibulo-ocular reflex, both in wild-type mice and in knockout mice in which cerebellar long-term depression is enhanced; our results indicate that synaptic complexity is necessary to explain the diverse behavioral learning curves arising from interactions of prior experience and enhanced LTD.
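
The jump from scalar weights to internal-state dynamics is easy to explore numerically. The sketch below, a generic M-state chain with two expressed strength levels (a toy construction for illustration, not one of the models analyzed in the talk), shows how a memory stored at time zero decays as subsequent random plasticity events churn the internal states.

import numpy as np

rng = np.random.default_rng(1)
M = 6     # internal states 0..5; states 0-2 express weak (-1), 3-5 strong (+1)
N = 5000  # synapses in the population
T = 200   # plasticity events after the memory is stored

def plasticity_event(states, potentiate):
    # Each synapse independently hops one internal state toward the strong
    # end (potentiation) or the weak end (depression) with probability 0.5.
    hop = rng.random(states.shape) < 0.5
    return np.clip(states + (1 if potentiate else -1) * hop, 0, M - 1)

def expressed_strength(states):
    return np.where(states >= M // 2, 1.0, -1.0)

states = np.full(N, M - 1)      # store the memory: drive every synapse strong
signal = [expressed_strength(states).mean()]
for _ in range(T):
    # Ongoing, unrelated plasticity: potentiation or depression at random.
    states = plasticity_event(states, potentiate=rng.random() < 0.5)
    signal.append(expressed_strength(states).mean())

print(f"memory signal after 0/50/200 events: "
      f"{signal[0]:.2f} / {signal[50]:.2f} / {signal[200]:.2f}")

Deeper chains forget more slowly at the cost of a weaker initial signal; the structure-function trade-offs that such choices create are what the talk's theorems characterize rigorously.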

2014-10-8

"Dark Knowledge"

Geoffrey Hinton, Computer Science Department, University of Toronto, Canada; Distinguished Researcher, Google Inc.

2013-10-28

Engineering a Large Scale Vision System by Leveraging Semantic Knowledge

Jonathon Shlens, Google Research

Abstract: Computer-based vision systems are increasingly indispensable in our modern world. Modern visual recognition systems, however, have been limited in their ability to identify large numbers of object categories. This limitation is due in part to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources, such as text, both to train visual models and to constrain their predictions. In this talk I will present our recent efforts at Google to build a novel architecture that employs a deep neural network to identify visual objects using both labeled image data and semantic information gleaned from unannotated text. I will demonstrate that this model matches state-of-the-art performance on academic benchmarks while making semantically more reasonable errors. Most importantly, I will discuss how semantic information can be exploited to make predictions about image labels not observed during training. Semantic knowledge substantially improves "zero-shot" predictions, achieving state-of-the-art performance on predicting tens of thousands of object categories never previously seen by the visual model.
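
The zero-shot mechanism can be caricatured in a few lines: map image features into a word-embedding space and label an image by its nearest class embedding, including classes that contributed no training images. Every array below is a random stand-in (there is no trained network or real word embedding here); the sketch only shows the structure of the prediction step.

import numpy as np

rng = np.random.default_rng(2)
d_img, d_sem = 128, 50

# Hypothetical word embeddings; "zebra" has no training images.
classes = ["cat", "dog", "horse", "zebra"]
embed = {c: rng.normal(size=d_sem) for c in classes}
for c in embed:
    embed[c] /= np.linalg.norm(embed[c])

# Stand-in for a visual-to-semantic map learned on the seen classes only.
W = rng.normal(scale=0.1, size=(d_sem, d_img))

def predict(image_features, candidates):
    # Project image features into semantic space; pick the closest label.
    v = W @ image_features
    v = v / np.linalg.norm(v)
    return max(candidates, key=lambda c: float(v @ embed[c]))

x = rng.normal(size=d_img)          # a hypothetical test image's features
print(predict(x, classes))          # can output "zebra" despite zero examples

With random weights the prediction above is of course arbitrary; the point is only that nothing in the prediction step distinguishes seen from unseen labels.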

2013-10-14

Multivariate Data Classification on Neuromorphic Hardware

Michael Schmuker, Bernstein Center for Computational Neuroscience Berlin

Abstract: Computational neuroscience has uncovered a number of computational principles employed by nervous systems. At the same time, recent neuromorphic hardware provides a fast and efficient substrate for implementations of complex neuronal networks. The current challenge for practical neuromorphic computing applications lies in the identification and implementation of functional algorithms solving real-world computing problems. Taking inspiration from the olfactory system of insects, we constructed a generic spiking neural network for the classification of multivariate data, a common problem in signal and data analysis. Our network combines the parallel processing of multiple input dimensions, their decorrelation through lateral inhibition, and supervised learning of data classification. The network runs on an accelerated mixed-signal neuromorphic hardware system. When challenged with real-world data sets, the network achieves classification performance on the same level as a Naive Bayes classifier. Analysis of the network dynamics shows that stable decisions in output neuron populations are reached within less than 100 ms of biological time, which compares well to the time-to-decision reported for the insect nervous system. The network tolerates the variability of neuronal transfer functions and the trial-to-trial variation that is inevitably present on the hardware system. Our work provides a proof of principle for the successful implementation of a functional spiking neural network on a configurable neuromorphic hardware system that can readily be applied to real-world computing problems.
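
One ingredient of the network, decorrelation by lateral inhibition, is easy to demonstrate outside the spiking/hardware setting. The rate-based sketch below (a toy with made-up gains, not the paper's network) shows how subtracting a fraction of the other channels' activity removes a shared input component.

import numpy as np

rng = np.random.default_rng(3)
n, trials = 4, 10000

# Correlated inputs: a component shared by all channels plus private noise.
shared = rng.normal(size=(trials, 1))
inputs = 0.8 * shared + 0.6 * rng.normal(size=(trials, n))

# Lateral inhibition: each channel is suppressed by the mean of the others.
w_inh = 0.6
others = (inputs.sum(axis=1, keepdims=True) - inputs) / (n - 1)
outputs = inputs - w_inh * others

def mean_offdiag_corr(x):
    c = np.corrcoef(x.T)
    return (c.sum() - np.trace(c)) / (n * (n - 1))

print(f"mean channel correlation before: {mean_offdiag_corr(inputs):.2f}")
print(f"mean channel correlation after:  {mean_offdiag_corr(outputs):.2f}")

Removing the shared component leaves the channel-specific structure, which a supervised readout can then classify.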

Biography: Michael Schmuker studied biology and computer science in Freiburg, Germany, and Montpellier, France. In 2003 he started a PhD in cheminformatics (specializing in the chemical space of odorants) with Gisbert Schneider in Frankfurt, Germany. He went on to a postdoc in neuroscience with Randolf Menzel in Berlin in 2007, and in 2010 started another postdoc with Martin Nawrot in Berlin in theoretical neuroscience and neuroinformatics. Michael is currently a PI at the Bernstein Center for Computational Neuroscience Berlin. His work focuses on sensory computation in the olfactory system and on brain-derived networks for functional neuromorphic applications.

2013-7-26

Universal Artificial Intelligence and Formal Theory of Fun

Jürgen Schmidhuber, Swiss AI Lab IDSIA

Abstract: Universal self-improving AIs can rewrite their own software in a provably optimal way. They may not only solve externally posed tasks, but also their own self-invented tasks, to better understand the world, in line with Schmidhuber's simple Formal Theory of Fun and Creativity, which explains science, art, music and humor. I will describe tools for implementing such AIs, including the largest evolved vision-based neural network (NN) controllers to date, as well as fast gradient-based deep/recurrent NNs, which have won many recent international pattern recognition competitions.
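
The central quantity of the Formal Theory of Fun is intrinsic reward defined as the improvement of the agent's predictor or compressor. Below is a deliberately tiny caricature, assuming nothing more than a running-mean predictor on a synthetic data stream: reward is the step-to-step drop in prediction error, large while there is still regularity to learn and near zero once the regularity is mastered.

import numpy as np

rng = np.random.default_rng(4)
stream = rng.normal(loc=3.0, scale=1.0, size=200)  # a learnable regularity

mean_est, n = 0.0, 0
prev_err, rewards = None, []
for x in stream:
    err = abs(x - mean_est)              # current prediction error
    if prev_err is not None:
        rewards.append(prev_err - err)   # "fun" = progress of the predictor
    prev_err = err
    n += 1
    mean_est += (x - mean_est) / n       # the predictor improves with data

print(f"mean intrinsic reward, first 10 steps: {np.mean(rewards[:10]):.3f}")
print(f"mean intrinsic reward, last 50 steps:  {np.mean(rewards[-50:]):.3f}")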

Biography: Professor Jürgen Schmidhuber is with the Swiss AI Lab IDSIA & USI & SUPSI (ex-TUM CogBotLab & CU). Since age 15 his main scientific ambition has been to build an optimal scientist, then retire. This is driving his research on self-improving Artificial Intelligence. His team has won many international competitions and awards, and pioneered the field of mathematically rigorous universal AI and optimal universal problem solvers. He also generalized the many-worlds theory of physics to a theory of all constructively computable universes, an algorithmic theory of everything. His formal theory of creativity, curiosity and fun (1990-2010) explains art, science, music, and humor.

2012-10-1

Performance Limitations of Thalamic Relay: Insights into Thalamo-Cortical Processing, Parkinson's Disease and Deep Brain Stimulation

Sridevi V. Sarma, Johns Hopkins University

Abstract: Thalamic networks in the brain are responsible for strategically filtering sensory information subject to attentional demands. For example, one can gaze at a butterfly and be completely unaware of the flowers and bushes that surround it, even though these surroundings are entirely within one's visual field. This occurs because visual thalamic neurons relay back to visual cortex only the information in the visual field that the subject is attending to. How and when this relay occurs has never been precisely quantified.

In this talk, we use a biophysically based model to quantify the relay reliability of a thalamic cell as a function of its input parameters and electrophysiological properties. Specifically, we compute bounds on relay reliability and show how these bounds can explain experimentally observed patterns of neural activity in the basal ganglia (i) in health, where reliability is high; (ii) in Parkinson's disease (PD), where reliability is low; and (iii) in PD during therapeutic deep brain stimulation, where reliability is restored. Our bounds also predict the different rhythms that emerge in the lateral geniculate nucleus of the thalamus during different attentional states of the cat.
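
As a purely illustrative companion to the relay-reliability idea, here is a toy in which reliability can be read off a simulation: a leaky integrate-and-fire cell receives brief subthreshold input kicks, and reliability is the fraction of kicks answered by an output spike within 5 ms. All parameters are invented; the talk's biophysical model and analytic bounds are something else entirely.

import numpy as np

rng = np.random.default_rng(5)
dt, T = 0.1, 2000.0               # time step and duration in ms
steps = int(T / dt)
tau, v_th = 10.0, 1.0             # membrane time constant (ms), threshold

# Sparse input pulses; each kick alone is subthreshold, so relay depends
# on the background state of the membrane.
input_times = np.sort(rng.uniform(0, T - 10.0, size=40))
kick = np.zeros(steps)
kick[(input_times / dt).astype(int)] = 0.9

v, spike_times = 0.0, []
for i in range(steps):
    v += dt * (-v / tau) + kick[i] + 0.1 * np.sqrt(dt) * rng.normal()
    if v >= v_th:
        spike_times.append(i * dt)
        v = 0.0                   # reset after an output spike

spike_times = np.asarray(spike_times)
relayed = sum(np.any((spike_times >= t) & (spike_times < t + 5.0))
              for t in input_times)
print(f"relay reliability: {relayed / input_times.size:.2f}")

Raising the background noise or the kick amplitude moves the measured reliability up or down, the kind of parameter dependence the talk's bounds capture analytically.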

2012-6-13

Dopamine made me do it, but what did I learn?

Peter Redgrave, University of Sheffield, Sheffield, U.K.

Abstract: There is general agreement that the basal ganglia play an important role in behavioural selection and reinforcement learning. It is also agreed that within the basal ganglia, the phasic response of midbrain dopaminergic neurones to biologically salient stimuli acts as a reinforcement signal. From this point on, however, there is less agreement. The majority view is that the dopamine neurones signal reward prediction errors that are used to reinforce the maximisation of future reward acquisition. On the contrary, I will propose that reinforcement learning can be split into independent processes that have been recognised by evolution in the basal ganglia's functional architecture: (i) an intrinsic dopamine-reinforced mechanism responsible for the discovery of agency and the development of novel actions; and (ii) a separate mechanism that modulates competing inputs to the basal ganglia so that future selections are biased in favour of high-value outcomes.
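
For readers unfamiliar with the majority view being argued against, the sketch below is the standard textbook TD(0) toy, not Redgrave's proposal: a dopamine-like prediction-error signal is large when a reward is unexpected and decays toward zero as the reward becomes predicted.

import numpy as np

T, alpha, gamma = 10, 0.1, 1.0     # trial length, learning rate, discount
V = np.zeros(T + 1)                # value estimate for each step in a trial
reward_t = 8                       # reward arrives late in the trial

rpe_at_reward = []
for trial in range(200):
    for t in range(T):
        r = 1.0 if t == reward_t else 0.0
        delta = r + gamma * V[t + 1] - V[t]   # dopamine-like RPE
        V[t] += alpha * delta
        if t == reward_t:
            rpe_at_reward.append(delta)

print(f"RPE at reward time, trial 1:   {rpe_at_reward[0]:.2f}")
print(f"RPE at reward time, trial 200: {rpe_at_reward[-1]:.2f}")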

2012-4-11

Mapping the retinal connectome with EyeWire, an online community for 'citizen neuroscience'

Sebastian Seung, Howard Hughes Medical Institute, MIT

Abstract: According to a doctrine known as connectionism, brain function and dysfunction depend primarily on patterns of connectivity between neurons. Connectionism has been explored theoretically with mathematical models of neural networks since the 1940s. It has proved difficult to test these models through activity measurements alone. For conclusive empirical tests, information about neural connectivity is also necessary, and could be provided by new imaging methods based on serial electron microscopy. The bottleneck in using these new methods is now shifting to the data analysis problem of extracting neural connectivity from the images. Our capacity to acquire "big data" from the brain has far outpaced our ability to analyze it. My lab has been developing computational technologies to deal with this data deluge. Based on these innovations, we have recently launched EyeWire, an online community that mobilizes the public to map the retinal connectome by interacting with one another and with artificial intelligence based on machine learning.

2012-3-5

Bayesian Inference with Oscillator Models: A Possible Role of Neural Rhythms

Prashant Mehta, University of Illinois at Urbana-Champaign

Abstract: Prediction is believed to be a fundamentally important computational function for any intelligent system. Bayesian inference in probability theory is a well-known mechanism for implementing prediction, which has led to historical and recent interest in Bayesian inference for biological sensory systems: the Bayesian model of sensory (e.g., visual) signal processing suggests that cortical networks in the brain encode a probabilistic 'belief' about reality. The belief state is updated based on comparison between novel stimuli (from the senses) and the internal prediction. A natural question to ask, then, is whether there is a rigorous methodology for implementing complex forms of prediction via Bayes' rule at the level of neurophysiologically plausible spiking elements. In this talk, I will provide a qualified answer to this question via coupled oscillator models. A single oscillator is a simplified model of a single spiking neuron. The coupled oscillator model solves an inference problem: the population encodes a belief state that is continuously updated (in a Bayes-optimal fashion) based on noisy measurements. The methodology is described with the aid of a model problem involving estimation of a 'walking gait cycle' from noisy measurements. This is joint work with several students and collaborators at the University of Illinois.
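
The estimation problem in the model example can be reproduced with a generic bootstrap particle filter (the talk's contribution is a coupled-oscillator implementation of such filtering, which this sketch does not attempt). Here a noisy 2-D observation of a point moving on the unit circle is used to track the hidden phase; all parameters are made up.

import numpy as np

rng = np.random.default_rng(6)
dt, steps = 0.01, 1000
omega = 2 * np.pi                 # 1 Hz "gait cycle"
sigma_w, sigma_v, N = 0.1, 0.3, 500

theta = rng.uniform(0, 2 * np.pi)             # true hidden phase
particles = rng.uniform(0, 2 * np.pi, size=N)
errs = []
for _ in range(steps):
    # True phase advances with process noise; we observe a noisy point
    # on the unit circle.
    theta = (theta + omega * dt + sigma_w * np.sqrt(dt) * rng.normal()) % (2 * np.pi)
    y = np.array([np.cos(theta), np.sin(theta)]) + sigma_v * rng.normal(size=2)

    # Predict: propagate every particle through the same dynamics.
    particles = (particles + omega * dt
                 + sigma_w * np.sqrt(dt) * rng.normal(size=N)) % (2 * np.pi)
    # Update: reweight by the measurement likelihood, then resample.
    sq = (y[0] - np.cos(particles)) ** 2 + (y[1] - np.sin(particles)) ** 2
    w = np.exp(-0.5 * sq / sigma_v ** 2)
    particles = particles[rng.choice(N, size=N, p=w / w.sum())]

    est = np.angle(np.exp(1j * particles).mean())       # circular mean
    errs.append(abs(np.angle(np.exp(1j * (est - theta)))))

print(f"mean absolute phase error, last 100 steps: {np.mean(errs[-100:]):.2f} rad")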

Biography: Prashant Mehta is an Associate Professor in the Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign. He received his Ph.D. in Applied Mathematics from Cornell University in 2004. Prior to joining Illinois, he was a Research Engineer at the United Technologies Research Center (UTRC). His research interests are at the intersection of dynamical systems and control theory, including mean-field games, model reduction, and nonlinear control. He has received several awards including an Outstanding Achievement Award for his research contributions at UTRC, several Best Paper awards together with his students at Illinois, and numerous teaching and advising honors at Illinois.

2011-11-28

Cross-level coupling between single neurons and large-scale LFP patterns in multi-scale brain networks

Ryan T. Canolty, Helen Wills Neuroscience Institute & University of California, Berkeley

Abstract: Brains exhibit structure across a variety of different scales – from single neurons (micro-scale) to functional areas (meso-scale) to large-scale cortical networks (macro-scale). Furthermore, the different levels of multi-scale brain networks often interact with each other – that is, activity and information at one level can influence other levels, a phenomenon termed cross-level coupling (CLC). Neuronal oscillations have been suggested as a possible mechanism for dynamic cross-level coordination, but the functional role of oscillations in multi-scale networks remains unclear. We investigated CLC by recording local field potentials (LFPs) and single unit activity using multiple microelectrode arrays in several brain areas of the macaque, and then modeled the dependence of spike timing on the full pattern of proximal and distal LFP activity. We show that spiking activity in single neurons and neuronal ensembles depends on dynamic patterns of oscillatory phase coupling between multiple brain areas, in addition to the effects of proximal LFP phase and amplitude. Neurons that prefer similar patterns of LFP phase coupling exhibit similar changes in spike rates, potentially providing a basic mechanism to bind different neurons together into coordinated cell assemblies. Surprisingly, CLC-based spike rate correlations are independent of inter-neuron distance – that is, two neurons in opposite hemispheres may prefer the same global LFP pattern and exhibit correlated rate changes, while two neurons recorded on the same electrode may prefer different global LFP patterns and exhibit uncorrelated spiking activity. CLC patterns correlate with behavior and neural function, remain stable over multiple days, and show reversible, task-dependent shifts when engaging in multiple tasks. These findings suggest that neuronal oscillations enable selective and dynamic control of distributed functional cell assemblies, supporting the hypothesis that CLC may play a key role in the functional reorganization of dynamic brain networks.
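
A minimal building block of this kind of analysis is quantifying how spike timing depends on a single LFP phase, e.g. via the length of the mean resultant vector of phases at spike times. The sketch below does only that, on synthetic data; the study's model conditions spiking on full multi-area phase-coupling patterns, which is well beyond this toy.

import numpy as np

rng = np.random.default_rng(7)
fs, dur, f_lfp = 1000, 60, 8.0      # sampling rate (Hz), seconds, LFP freq
t = np.arange(0, dur, 1 / fs)
lfp_phase = (2 * np.pi * f_lfp * t) % (2 * np.pi)

# Inhomogeneous Poisson spiking that prefers LFP phase pi.
rate = 5.0 * (1 + 0.6 * np.cos(lfp_phase - np.pi))   # spikes/s
spikes = rng.random(t.size) < rate / fs
phases = lfp_phase[spikes]

plv = np.abs(np.mean(np.exp(1j * phases)))     # 0 = no locking, 1 = perfect
pref = np.angle(np.mean(np.exp(1j * phases)))  # preferred phase (rad)
print(f"phase-locking value: {plv:.2f}, preferred phase: {pref:.2f} rad")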

2010-11-5

Neocortical Dark Matter, Grandmother Cells, and the Puzzle of Extremely Long-Term Memories

Simon Thorpe, CNRS Research Director, Brain and Cognition Research Center (CERCO), Toulouse, France

Abstract: Humans can recognize images and sounds that they have not seen or heard for decades. How is this possible, given that the molecules from which the brain is made have presumably all been replaced many times over? Presumably, very long-term memories are stored in the patterns of synaptic connectivity, but most models of associative memory based on distributed representations would have difficulty maintaining memories intact for so long, because patterns would tend to be overwritten by incoming stimuli. Here I would like to propose the idea that such long-term memories could depend on highly selective cortical neurons that essentially never fire, allowing them to remain selective over very long periods of time. I will discuss a range of theoretical, simulation and experimental data supporting this proposal that a substantial proportion of neocortical neurons could in reality constitute dark matter, effectively invisible to conventional neurophysiological techniques.
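
The overwriting problem referred to above is easy to reproduce in the classic Hopfield-style associative memory: store patterns one after another with Hebbian learning and watch recall of the first pattern degrade as the load passes capacity. A minimal sketch with invented toy dimensions:

import numpy as np

rng = np.random.default_rng(8)
N, n_patterns = 200, 60
patterns = rng.choice([-1, 1], size=(n_patterns, N))

def recall_overlap(W, p):
    # One synchronous update from a 10%-corrupted cue; 1.0 = perfect recall.
    cue = p * np.where(rng.random(N) < 0.1, -1, 1)
    return float(np.sign(W @ cue) @ p) / N

W = np.zeros((N, N))
for k, p in enumerate(patterns, start=1):
    W += np.outer(p, p) / N          # Hebbian storage of pattern k
    np.fill_diagonal(W, 0)
    if k in (1, 10, 30, 60):
        print(f"after {k:2d} stored patterns, "
              f"recall of pattern 1: {recall_overlap(W, patterns[0]):.2f}")

Past roughly 0.14 N stored patterns the first memory is effectively overwritten, which is exactly the difficulty that the never-firing "dark matter" proposal is meant to escape.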