
INC Chalk Talk Series

The INC chalk talk series meets bi-weekly as a forum for interactive exchange on all aspects of neural computation. The purpose of these meetings is to foster collaborative interactions among INC members and with colleagues across campus, and to stimulate new ideas and research initiatives.

Each meeting features one of the core or affiliated INC faculty labs/groups, with informal presentation of late-breaking research and new research directions. The meetings are open to the community, and we encourage broad participation across campus.

To subscribe to the INC Seminar/Talks mailing list, click here.

Contact: chalk@inc.ucsd.edu for further information, or to schedule a presentation.

When: Thursdays bi-weekly Fall through Spring


Winter 2017

Mark McDonnell: Reduced-memory deep residual networks for image classification using stochastic quantization (03/23/2017)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

University of South Australia

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:
Reduced-memory deep residual networks for image classification using stochastic quantization

Motivated by the goal of enabling more efficient learning in deep neural networks, we describe a method for modifying the backpropagation algorithm that significantly reduces memory usage during the training phase. The method is inspired by recent work on seeking neurobiological correlates of backpropagation-based learning that calculate gradients imprecisely. Specifically, our method introduces stochastic binarization of hidden-unit activations for use in the backward pass, after they are no longer needed in the forward pass. We show that without stochastic binarization the method is far less effective. We trained wide residual networks with 20 weight layers on the CIFAR-10 and CIFAR-100 image classification benchmarks, achieving error rates of 5.43% and 23.01%, respectively, compared with 4.53% and 20.51% for the same networks trained without stochastic binarization. Moreover, we also investigated learning binary weights in deep residual networks and demonstrate, for the first time, that networks using binary weights at test time can perform equally to full-precision networks on CIFAR-10, with both achieving ~4.5% error. On ImageNet we are still experimenting, but to date our binary-weights method has achieved a top-5 error rate of 20% at test time.
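For readers unfamiliar with the core trick, the following is a minimal numpy sketch of stochastic binarization of stored activations for the backward pass, in the spirit of the abstract. The single-hidden-layer setup, the stochastic_binarize helper, and the choice of scaling by the batch maximum are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binarize(h, h_max):
    # Replace each activation by 0 or h_max with probability proportional to
    # its magnitude, so the expected value of the stored code equals h.
    p = np.clip(h / max(h_max, 1e-12), 0.0, 1.0)
    return h_max * (rng.random(h.shape) < p).astype(h.dtype)

# Toy single hidden layer: x -> h = relu(x @ W1) -> y = h @ W2
x = rng.standard_normal((32, 64)).astype(np.float32)
W1 = 0.1 * rng.standard_normal((64, 128)).astype(np.float32)
W2 = 0.1 * rng.standard_normal((128, 10)).astype(np.float32)

h = np.maximum(x @ W1, 0.0)                 # forward pass uses full precision
y = h @ W2

# Once h is no longer needed by the forward pass, keep only a 1-bit code of it
# (plus one shared scale) for the backward pass; this is the memory saving.
h_bin = stochastic_binarize(h, h_max=float(h.max()))

grad_y = rng.standard_normal(y.shape).astype(np.float32)   # stand-in for dL/dy
grad_W2 = h_bin.T @ grad_y                                  # gradient uses binarized activations
grad_h = grad_y @ W2.T
grad_W1 = x.T @ (grad_h * (h_bin > 0))                      # ReLU gate approximated from the 1-bit code
```

The point of the sketch is only that the backward pass touches h_bin rather than h, so the full-precision activations never need to be kept in memory after the forward pass.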


Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Emre Neftci: Neuromorphic Deep Learning Machines (03/09/2017)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

UCI

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:
Neuromorphic Deep Learning Machines

An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory, and precise operations that are difficult to realize in neuromorphic hardware.

Remarkably, recent work showed that exact backpropagated weights are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. The rule requires only one addition and two comparisons for each synaptic weight using a two-compartment leaky Integrate & Fire (I&F) neuron, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving nearly identical classification accuracies on permutation invariant datasets compared to artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
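As a rough illustration of the kind of update described (fixed random feedback weights carrying the error, a plasticity window gated by two comparisons on the postsynaptic state, and one signed addition per presynaptic event), here is a minimal numpy sketch. The layer sizes, the gate bounds v_min/v_max, and the learning rate are placeholder assumptions, not the eRBP implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 100, 50, 10

W = 0.01 * rng.standard_normal((n_in, n_hid))   # plastic feed-forward weights
G = rng.standard_normal((n_out, n_hid))         # fixed random feedback weights (random BP)
lr = 1e-3
v_min, v_max = -1.0, 1.0                        # assumed bounds of the plasticity window

def erbp_step(pre_spikes, v_post, error):
    """One event-driven update: the error is broadcast through fixed random
    weights and gates weight changes at synapses whose presynaptic neuron
    just spiked and whose postsynaptic state lies inside a boxcar window."""
    mod = error @ G                                  # error signal seen by each hidden unit
    gate = (v_post > v_min) & (v_post < v_max)       # the "two comparisons"
    W[pre_spikes.astype(bool)] -= lr * (mod * gate)  # the "one addition" per active synapse
    return W

# toy usage
pre = rng.random(n_in) < 0.05                        # presynaptic spike events
v = rng.uniform(-2.0, 2.0, n_hid)                    # postsynaptic (dendritic) states
err = rng.standard_normal(n_out)                     # e.g. prediction minus target at the output
erbp_step(pre, v, err)
```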


Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Michael Yip: Towards Autonomous Surgery Delivered by Expert Robots (03/02/2017)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

UCSD

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:
Towards Autonomous Surgery Delivered by Expert Robots

Surgical robotics offers an unprecedented ability to place and dexterously control small robotic instruments, immersive stereo imaging, and other sensing modalities deep within inaccessible locations in the body. This presents major opportunities in the medical domain to treat diseases (e.g., cardiac arrhythmia, lung cancer, colon cancer) in a minimally invasive fashion. Yet, as these devices get smaller, more flexible, and more mechanically complex, we are presented with a new challenge: do we rely on the doctor to sort out the challenging control of the devices while simultaneously processing the multi-modal biosignals from onboard sensing? Or do we off-load the low-level control of the surgery from human teleoperation onto a semi-autonomous or fully autonomous framework? I will discuss our work in developing robot-assisted surgeries that analyze a multimodal spectrum of sensory information, physics models, and imaging information in real time to optimally plan and perform semi-autonomous surgery. This includes real-time learning-based controllers for automating catheter and endoscopic robots within difficult anatomy, modular snake-like devices for efficient locomotion in difficult environments, visual computation methods for image-guided robotics, and robot intelligence for robot-human teams. Finally, I will discuss directions we aim to pursue in reinforcement learning such that, with limited self-training, our robot-assistive devices learn to become expert robot surgeons.


Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 


Fall 2016

Douglas A. Palmer: Design of a heterogeneous neural network accelerator ASIC (12/01/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

Douglas A. Palmer, KnuEdge Inc.

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:
Design of a heterogeneous neural network accelerator ASIC

In an effort to accelerate large-scale, sparse, heterogeneous neural network modeling, a dedicated ASIC was designed, produced, and tested. The resulting device, a joint effort between Calit2 and KnuEdge Inc., is a router-based, cloud-on-a-chip, 256-core MPMD (Multiple Program, Multiple Data) machine that scales to 512K devices. Latency between devices is less than 400 ns, and random-addressing benchmark performance (GUPS) exceeds 1 billion updates per second. Performance testing has shown that it is many times faster than existing CPU and GPU architectures for scatter/gather operations such as K-means clustering, FFTs, and heterogeneous sparse neural network models.

Bio:
Dr. Palmer specializes in unconventional signal processing. He holds over a dozen U.S. patents and has founded or participated in the startup of many companies. He spent eight years at the Stanford Linear Accelerator Center, then went on to Linkabit Corp. and Western Research Corporation, served as R&D Director at Hecht-Nielsen Neurocomputer, and then moved on to ThermoTrex, a subsidiary of ThermoElectron. In 1998 Dr. Palmer cofounded Path1 Network Technologies, where he developed the world's first video-over-IP systems. In 2002 he joined Calit2 at UCSD. He has been working with KnuEdge Inc. since 2006. Dr. Palmer received his MPhil and Ph.D. in High Energy Physics from Yale University after earning his B.A. in Physics from UCSD's Revelle College.



Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Patrick Shoemaker: Multistable Winner-Takes-All neural networks with NMDARs and feedback inhibition (11/17/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

Patrick Shoemaker, Computational Science Research Center, SDSU

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:
Multistable Winner-Takes-All neural networks with NMDARs and feedback inhibition

As a result of magnesium blockade, the macroscopic current-voltage relation of ion channels associated with the NMDA class of glutamatergic receptors is nonmonotonic. In conjunction with other membrane conductances, this feature can give rise to bi- and multi-stable dynamical regimes in neurons that have NMDA receptors. I describe a very simple neuronal network that displays winner-takes-all behavior as a consequence of this property. I first discuss the properties of this network under stationary or quasistatic conditions, and then proceed to consider dynamics, in particular network stability.

Bio:
Pat Shoemaker received the Ph.D. degree in Bioengineering from UCSD in 1984. He has a longstanding interest in neural information processing and bio-inspired systems. From 1984 to 1999 he was with the Space and Naval Warfare Systems Center, where he worked, among other things, on hardware implementations of artificial neural networks. From 1999 to 2015 he was with Tanner Research, Inc., where he focused on bio-inspired systems and developed a growing interest in natural neural networks. Since the early 2000s he has collaborated with several neurobiologists on studies of visual processing in insects. He is currently a Research Associate Professor at the Computational Science Research Center at SDSU.

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Andrea Ravignani: Rhythm in speech, music and movement: towards a common analytical framework for temporal structure (11/03/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

Andrea Ravignani, Vrije Universiteit Brussel

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:
Rhythm in speech, music and movement: towards a common analytical framework for temporal structure

Behavioural research on the temporal properties of speech, music and movement often requires quantification of rhythmic structure. However, different research traditions investigating rhythmic behaviours have different methodologies, hindering comparability. Here, I present a suite of analytical tools to quantify rhythmic patterns across behaviours and domains. In particular, I focus on meaningful interpretation of simple techniques borrowed across disciplines, such as the normalised pairwise variability index, phase space plots, auto-regressive time series, and Granger causality. For each technique, I show its application to speech and music corpora, human psychological experiments, or chimpanzee behaviour.
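As one concrete example of the "simple techniques borrowed across disciplines" mentioned above, the normalised pairwise variability index (nPVI) can be computed in a few lines. This is a generic sketch of the standard nPVI formula applied to a list of durations, with made-up example sequences; it is not code from the talk.

```python
import numpy as np

def npvi(intervals):
    """Normalised pairwise variability index of a duration sequence
    (e.g. inter-onset intervals), as used in speech and music rhythm research:
    100/(m-1) * sum over adjacent pairs of |d_k - d_{k+1}| / ((d_k + d_{k+1})/2)."""
    d = np.asarray(intervals, dtype=float)
    pair_diff = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0)
    return 100.0 * pair_diff.mean()

# perfectly isochronous sequence -> nPVI = 0; alternating long/short -> higher nPVI
print(npvi([0.5, 0.5, 0.5, 0.5]))   # 0.0
print(npvi([0.3, 0.6, 0.3, 0.6]))   # ~66.7
```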

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Amir Khosrowshahi: New processor architecture for machine learning (10/20/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

Amir Khosrowshahi, Nervana, https://www.nervanasys.com/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:
New processor architecture for machine learning

Nervana is a San Diego-based startup providing a cloud platform for deep learning as a service. Deep learning is now state-of-the-art in a wide variety of domains including speech, images, and text, and is being quickly adopted in industry. Nervana's core technology is a novel distributed processor architecture for deep learning, which aims to improve speed, scalability, and efficiency by an order of magnitude over the current state of the art. I will present our work in the context of a variety of promising efforts to build new hardware for advancing computation.

Bio:
Amir Khosrowshahi is co-founder and CTO of Nervana. He studied computational neuroscience at Berkeley and physics and math at Harvard. Nervana was recently acquired by Intel where Amir is now VP of machine learning solutions in its data center group.

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Hesham Mostafa: Rhythmic activity drives efficient search for maximally consistent states in neural networks and neuromorphic chips (10/06/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

Hesham Mostafa
Integrated System Neuroengineering Lab, UCSD


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Rhythmic activity drives efficient search for maximally consistent states in neural networks and neuromorphic chips

Humans and animals display a remarkable ability for constructing a rich and consistent interpretation of the surrounding environment based on imperfect and incomplete sensory inputs. This is a challenging problem that can be formulated as finding a configuration of variables that maximally satisfies a set of constraints encoding a model of the environment, while being consistent with the observed sensory input. We show that this problem can be efficiently solved using simple coupled attractor networks if these networks include a basic model of Gamma-band oscillations. By dynamically modulating the effective network connectivity, neuronal rhythms allow simple networks to collectively and efficiently search for maximally consistent configurations. We show that these rhythms give rise to network behavior that is functionally very similar to that of stochastic networks, providing an alternative framework for modeling probabilistic reasoning in the brain.
Since the oscillatory networks can efficiently solve difficult constraint satisfaction problems (CSPs), we developed a neuromorphic VLSI chip that captures the salient features of these networks and used the chip to solve Boolean satisfiability (SAT) and graph coloring problems. Empirically, we have shown that in the case of SAT problems, the search implemented by interacting oscillatory elements is as efficient as state-of-the-art stochastic search algorithms. Our results highlight the benefits and pitfalls involved in taking neural dynamics in the brain as a source of inspiration for building physically realizable, non-von Neumann computing models, and they establish an unexpected and fundamental link between CSPs and the behavior of simple oscillatory systems.
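For context on the comparison made above, here is a minimal sketch of WalkSAT, a classic stochastic local-search SAT solver of the kind the oscillator-based search is benchmarked against. This is a generic baseline, not the oscillatory network or the VLSI chip, and the tiny example formula is made up.

```python
import random
random.seed(0)

def walksat(clauses, n_vars, p=0.5, max_flips=100_000):
    """Classic WalkSAT stochastic local search. Clauses are lists of signed
    integers: 3 means x3 must be True, -3 means x3 must be False."""
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                             # satisfying assignment found
        clause = random.choice(unsat)
        if random.random() < p:
            var = abs(random.choice(clause))          # random-walk move
        else:
            def cost(v):                              # greedy move: flip the variable that
                assign[v] = not assign[v]             # leaves the fewest clauses unsatisfied
                c = sum(not any(sat(l) for l in cl) for cl in clauses)
                assign[v] = not assign[v]
                return c
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return None

# tiny satisfiable instance: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(walksat([[1, -2], [2, 3], [-1, -3]], n_vars=3))
```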

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Filip Piekniewski: Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network (09/22/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

Filip Piekniewski
Brain Corp.


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Will Browne: Cognitive Learning using Evolutionary Computation (09/06/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series


Affiliation:

Will Browne


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Cognitive Learning using Evolutionary Computation

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

 

Spring 2016

Ulysses Bernardet: (05/05/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Simon Fraser University, Surrey
https://sites.google.com/site/bernuly/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

At each moment in time an animal is faced with a myriad of behavioral options; why does an animal initiate and persist in certain behaviors as opposed to others? Thematically, this question of action selection and behavior regulation stands at the core of much of my past and present research. I will begin by presenting work on systems-theory-based and neurobiology-based models of social motivation and behavior regulation in insects, respectively. This will be followed by current work that uses autonomous virtual characters to develop and test psychologically grounded models of nonverbal behavior. These models include the regulation of spatial behavior in a social setting, and work on a reflexive behavior architecture for virtual humans.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Joaquin Rapela: Our Brain Oscillations Follow Our Motor Rhythms (04/21/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Swartz Center for Computational Neuroscience, INC, UCSD
http://sccn.ucsd.edu


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Our Brain Oscillations Follow Our Motor Rhythms

A remarkable early observation on brain dynamics (Adrian and Matthews, 1934) is that when humans are exposed to rhythmic stimulation their brain oscillations can follow this rhythm. More recently, it has been found that attention can adjust the way in which oscillations follow periodic stimulation, such that neurons are in a state of maximal excitability when an attended stimulus is expected to occur (Lakatos et al., 2008). Using what are today the neural recordings with the highest spatial resolution, taken directly from the cortical surface in humans (an ECoG grid with 4 mm inter-electrode separation; Bouchard et al., 2013) and covering most speech production and perception brain regions, I will describe a recent finding in this fascinating field of brain rhythms: when we speak in a rhythmic fashion, our brain oscillations follow our speech rhythm. Evidence for this finding comes from the alignment of the phases of brain oscillations at behaviorally relevant time points (highlighting the role of phase coherence in understanding the neural code; Makeig et al., 2002), from the coupling between low-frequency brain oscillations related to behavior and high-frequency oscillations related to neural spiking (phase-amplitude coupling; Canolty et al., 2006), and from the detection of traveling waves confined to the brain region that controls the vocal articulators (Rubino et al., 2006). This research is still in its early stages, but it is worth sharing with the UCSD community.
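To make the phase-amplitude coupling measure mentioned above concrete, here is a minimal Python sketch of a Canolty-style mean-vector-length estimate using Hilbert transforms. The frequency bands, filter settings, and the synthetic test signal are illustrative assumptions, not the analysis pipeline used in this work.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(2, 4), amp_band=(70, 150)):
    """Mean-vector-length estimate of phase-amplitude coupling: how strongly
    the high-frequency amplitude envelope is locked to the low-frequency phase."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# toy signal: a 3 Hz rhythm whose peaks carry bursts of 100 Hz activity
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 3 * t)
x = slow + (slow > 0.8) * 0.5 * np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(t.size)
print(pac_mvl(x, fs))     # larger value than for a signal without nested bursts
```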

 

Adrian ED, Matthews BH. The interpretation of potential waves in the cortex. J Physiol. 1934 Jul 31;81(4):440-71.

Bouchard KE, Mesgarani N, Johnson K, Chang EF. Functional organization of human sensorimotor cortex for speech articulation. Nature. 2013 Mar 21;495(7441):327-32.

Canolty RT, Edwards E, Dalal SS, Soltani M, Nagarajan SS, Kirsch HE, Berger MS, Barbaro NM, Knight RT. High gamma power is phase-locked to theta oscillations in human neocortex. Science. 2006 Sep 15;313(5793):1626-8.

Lakatos P, Karmos G, Mehta AD, Ulbert I, Schroeder CE. Entrainment of neuronal oscillations as a mechanism of attentional selection. Science. 2008 Apr 4;320(5872):110-3.

Makeig S, Westerfield M, Jung TP, Enghoff S, Townsend J, Courchesne E, Sejnowski TJ. Dynamic brain sources of visual evoked responses. Science. 2002 Jan 25;295(5555):690-4.

Rubino D, Robbins KA, Hatsopoulos NG. Propagating waves mediate information transfer in the motor cortex. Nat Neurosci. 2006 Dec;9(12):1549-57.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Lyle Muller: Multichannel recordings in neuroscience: methods for spatiotemporal dynamics (04/14/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Salk Institute for Biological Studies
snl.salk.edu/~lmuller/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Multichannel recordings in neuroscience: methods for spatiotemporal dynamics

Multichannel recording techniques in neuroscience have recently come of age. From dense multielectrode arrays to large-scale optical imaging techniques, novel recording technologies can now capture the fast dynamics of active cortical circuits in vivo. These technologies present the opportunity to probe the spatiotemporal dynamics of cortical circuits across a wide range of network states, from active sensation to the internally generated oscillations of sleep.

Concomitant with the rise of these technologies, however, is the need for novel and precise computational methods that can see through recording noise and capture the full complexity of cortical activity states. In recent work, we have introduced a non-parametric, phase-based method for detecting traveling waves in noisy multichannel data. This method requires no spatial smoothing, thus minimizing signal distortion and controlling false detections. Analysis of voltage-sensitive dye (VSD) imaging data from the visual cortex of the monkey with this method revealed that the population response to a small visual stimulus travels as a wave across the cortex, with a specific trial invariance. Extending this computational approach to more general spatiotemporal forms, we have now begun to study the large-scale structure of oscillations in electrocorticogram (ECoG) recordings of human cortex during sleep, where we find that a well-known sleep oscillation exhibits a specific, robust spatiotemporal pattern.
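As a toy illustration of the quantity such wave-detection methods estimate, the sketch below simulates a plane wave sweeping across a 1-D electrode array and recovers its speed from the spatial phase gradient. This naive unwrap-and-regress approach is not the non-parametric, noise-robust phase-based method described above, and all array dimensions and wave parameters are made up.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs, f, speed_true = 1000.0, 10.0, 0.3          # sampling rate (Hz), wave frequency (Hz), speed (m/s)
pos = np.arange(16) * 0.004                    # 16 channels with 4 mm spacing (metres)
t = np.arange(0, 2, 1 / fs)
k = 2 * np.pi * f / speed_true                 # spatial wavenumber (rad/m)

# traveling wave plus a little noise on every channel
lfp = np.sin(2 * np.pi * f * t[None, :] - k * pos[:, None]) \
      + 0.05 * rng.standard_normal((pos.size, t.size))

phase = np.angle(hilbert(lfp, axis=1))         # instantaneous phase per channel
phi = np.unwrap(phase[:, 500])                 # phase profile across the array at one instant
slope = np.polyfit(pos, phi, 1)[0]             # rad per metre, approximately -k
print("estimated wave speed:", 2 * np.pi * f / abs(slope), "m/s")
```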

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Thorsten O. Zander: Towards Neuroadaptive Technology: Symmetrical Human‐Computer Interaction based on a cognitive user model generated by automatically probing the operator's mind (04/07/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Team PhyPA, Biological Psychology and Neuroergonomics, TU Berlin, Germany


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: "Towards Neuroadaptive Technology: Symmetrical Human‐Computer Interaction based on a cognitive user model generated by automatically probing the operator's mind"

Today's human‐machine interaction is asymmetrical in the sense that (a) the operator has access to any and all details concerning the machine's internal state, while the machine only has access to the few commands explicitly communicated to it by the human, and (b) while the human user is capable of dealing with and working around errors and inconsistencies in the communication, the machine is not. With increasingly powerful machines this asymmetry has grown, but our interaction techniques have remained the same, presenting a clear communication bottleneck: users must still translate their high level concepts into machine‐mandated sequences of explicit commands, and only then does a machine act.

During such asymmetrical interaction the human brain is continuously and automatically processing information concerning its internal and external context, including the environment the human is in and the events happening there. I will discuss how this information could be made available in real time and how it could be interpreted automatically by the machine to generate a model of its operator's cognition. This model then can serve as a predictor to estimate the operator's intentions, situational interpretations and emotions, enabling the machine to adapt to them. Such adaptations can even replace standard input, without any form of explicit communication from the operator. I will illustrate this approach by several brief examples.

The above‐mentioned cognitive model can be refined continuously by giving agency to the technological system to probe its operator's mind for additional information. It could deliberately and iteratively elicit, and subsequently detect and decode, cognitive responses to selected stimuli in a goal‐directed fashion. Effectively, the machine can pose a question directly to a person's brain and immediately receive an answer, potentially even without the person being aware of this happening. This cognitive probing allows for the generation of a more fine‐grained user model. It can be used to fully replace any direct input to the machine, establishing effective, goal‐oriented implicit control of a computer system. I will give a more detailed example showing the potential of this approach.

These approaches fuse human and machine information processing, introduce fundamentally new notions of 'interaction', and allow completely new neuroadaptive technology to be developed. This technology bears specific relevance to auto‐adaptive experimental designs, but opens up paradigm shifting possibilities for human‐machine systems in general, addressing the issue of asymmetry and widening the above‐mentioned communication bottleneck.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 



Winter 2016

Mark D. McDonnell: "A neurobiological learning model inspired by deep learning, and its application to image classification" (03/10/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Computational and Theoretical Neuroscience Laboratory, School of Information
Technology and Mathematical Sciences, University of South Australia, Australia
http://ctnl.unisa.edu.au

 

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: "A neurobiological learning model inspired by deep learning, and its application to image classification"

In computer science, 'deep learning' approaches are at last realizing the decades-old theoretical potential of artificial neural networks (ANNs), now frequently achieving better-than-human performance on difficult pattern recognition tasks. When applied to classification and detection of objects in images, deep convolutional ANNs are used, and are often characterized as "biologically inspired." This is due to the hierarchy of layers of nonlinear processing units and pooling stages, and learnt spatial filters resembling simple and complex cells. An open challenge for computational neuroscience is to identify whether the spectacular performance of deep learning can be replicated in detailed models of cortical neurobiology that are constrained by known anatomy and physiology. Of particular importance is to identify neurobiologically plausible learning rules that can match the performance of the backpropagation and stochastic gradient descent algorithms used as standard methods when training deep ANNs. Motivated by this goal, in this talk I will show mathematically how a standard cost function used for supervised training of ANNs can be decomposed into an unsupervised decorrelation stage and a supervised Hebbian-like stage (a minimal numerical sketch of this decomposition appears after the list below). Using the method to train a network with the MNIST handwritten digits image database results in classification of the MNIST test image set with less than a 1% error rate. This performance is comparable with state-of-the-art deep-learning algorithms applied to this well-known benchmark. Surprisingly, this result is achieved by relying on untrained random synaptic weights and/or convolutional filters in all network layers except the final one. In the remainder of the talk I will posit that the method is plausible as a neurobiological learning mechanism in recurrently-connected layer 2/3 and layer 4 cortical neurons. I will demonstrate this using a conceptual model that includes:

* nonlinear dendritic activation;

* anti-Hebbian plasticity at synapses on distal dendrites receiving lateral input from other principal cells;

* top-down modulation during learning;

* lateral inhibition enforcing winner-take-all effects to determine inference.
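The sketch below (referenced in the abstract) illustrates the general flavour of the decomposition: fixed random hidden features, an unsupervised whitening (decorrelation) stage, and a supervised readout that, on whitened activity, reduces to a one-shot Hebbian/correlation rule. The toy Gaussian class data, layer sizes, and regularizer stand in for MNIST and are assumptions, not Prof. McDonnell's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_hidden, n_classes = 2000, 64, 256, 10

# toy class-structured data standing in for MNIST
means = 2.0 * rng.standard_normal((n_classes, d))
y = rng.integers(0, n_classes, n)
X = means[y] + rng.standard_normal((n, d))
T = np.eye(n_classes)[y]                        # one-hot targets

# fixed random hidden layer (untrained, echoing the random-filter layers)
W_in = rng.standard_normal((d, n_hidden)) / np.sqrt(d)
H = np.maximum(X @ W_in, 0.0)

# stage 1: unsupervised decorrelation (ZCA-style whitening of hidden activity)
H = H - H.mean(axis=0)
cov = (H.T @ H) / n
evals, evecs = np.linalg.eigh(cov)
Wz = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-5)) @ evecs.T
Hw = H @ Wz

# stage 2: supervised Hebbian-like readout; with whitened activity the
# least-squares solution is just the activity-target correlation
W_out = Hw.T @ T / n
pred = (Hw @ W_out).argmax(axis=1)
print("train accuracy (near 1 for these well-separated toy classes):", (pred == y).mean())
```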

 

Biography:

A/Prof. Mark D. McDonnell received a PhD in electronic engineering and applied mathematics
from The University of Adelaide, Australia, in 2006. He is currently Associate Research Professor at the University of South Australia, which he joined in 2007. He has been awarded two research fellowships by the Australian Research Council, from 2007-2009 and 2010-2014, and the South Australian Tall Poppy of Science award. McDonnell's research focuses on the use of computational and engineering methods to advance knowledge about the influence of noise and random variability in neurobiological computation. McDonnell has published over 80 refereed papers, including several review articles, and a book on stochastic resonance, published by Cambridge University Press. McDonnell is a member of the editorial boards of PLoS One and Fluctuation and Noise Letters, and has served as a Guest Editor for Proceedings of the IEEE and Frontiers in Computational Neuroscience.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Jorge Jose: "Micro-movement statistics biomarkers may help diagnose and develop therapies for individuals with Autism Spectrum Disorders" (03/03/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
James H. Rudy Distinguished Professor of Physics, Indiana University
Condensed Matter Physics and Biophysics (Theoretical)
http://www.iub.edu/~iubphys/faculty/jjosev.shtml


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: "Micro-movement statistics biomarkers may help diagnose and develop therapies for individuals with Autism Spectrum Disorders"


Our daily movements are made of variable behaviors that can be studied at different time and length scales. For example, most people can easily achieve the simple task of reaching for a cup in front of them, but no two people will have exactly the same movements when we zoom in on their trajectories at millisecond time scales. Most current movement studies are based mainly on visual observations of performance in motor tasks, which may leave out important information at finer time scales, often dismissed as noise. Atypical behaviors are highly heterogeneous in people with neurological disorders, e.g. Autism Spectrum Disorders (ASD), Parkinson's disease, and schizophrenia. This heterogeneity has particularly impeded the development of efficient and quantitative biological diagnoses for these disorders when they are based only on human eye observations. There is thus a critical need to identify objective and data-driven biomarkers for these disorders as guides for basic biological research. The recent advent of high-resolution wearable sensing devices enables continuous motion recordings at millisecond time scales, beyond detection by the naked eye. Using this technology, we asked whether we could extract information leading to quantitative biomarkers for these disorders based on natural movement studies. I will discuss only our results for ASD individuals. By studying in detail the statistics of natural human hand movements, we unraveled a new data type characterized by the smoothness levels of the speed kinematics. Our statistical analysis led to a parameter plane that provides an automatic screening of different ASD subjects, linking it, a posteriori, with their verbal speaking abilities. We also found different maturation paths in ASD compared to typically developing individuals. Unexpected similarities were also found among ASD parents and their progeny. Our studies are presently being used as part of a clinical trial for a genetically based type of autism.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Aaron Seitz: Applying Perceptual Learning Principles to Brain Training Games (02/25/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Professor, Department of Psychology, and Director of the Brain Games Center
University of California, Riverside
http://faculty.ucr.edu/~aseitz/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Applying Perceptual Learning Principles to Brain Training Games


Imagine if you could see better, hear better, have improved memory, and even become more intelligent through simple training done on your own computer, smartphone, or tablet. Current brain-training approaches are making these promises; however, the reality falls short of the potential. Here I discuss how research in the field of perceptual learning can be translated to potentially yield a new generation of brain-training approaches that are more effective and transfer to real-world activities. In the present research, we adopted an integrative approach where the goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual learning approaches that have individually contributed to increasing the speed, magnitude, and generality of learning into a perceptual-learning-based video game. Our results demonstrate broad-based benefits to vision in healthy adult and visually impaired populations. We find improvements in near and far central vision, peripheral acuity, and contrast sensitivity, and real-world on-field benefits in baseball players. This type of custom video-game framework, built up from psychophysical approaches, takes advantage of the benefits found from video-game training while maintaining a tight link to psychophysical designs that enable understanding of the mechanisms of perceptual learning, and it has great potential both as a scientific tool and as a basis for future brain-training approaches.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Nadir Weibel: Computational Ethnography and Multimodal Sensing for Healthcare (02/18/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
CSE Department,
DesignLab, Center for Wireless and Population Health Systems, Calit2


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Computational Ethnography and Multimodal Sensing for Healthcare

The advent of new sensing modalities, from ubiquitous and mobile computing to big data, is opening up new avenues for better understanding human cognition and behavior. Technology such as depth cameras, eye tracking, or wearable sensing devices enables the tracking of people's activity in the real world, and online social media presence often reveals much of our day-to-day lives. While these new kinds of data promise to advance our knowledge in many domains, applying this technology to healthcare has the potential to affect the lives of many people, from single individuals to larger groups.

In this talk I will introduce our approach towards new methodologies for multimodal sensing and visualization of healthcare-related activity in the real world. I will introduce our Lab-in-a-Box infrastructure and show how the combination of a multimodal sensing infrastructure and a multimodal visualization tool allows us to understand real-world healthcare in different ways. I will discuss results from tracking activity in the medical office and introduce our initial work in the context of surgical ergonomics, stroke evaluation, and sign language analysis, including novel visualization approaches.

 

Bio:
Dr. Nadir Weibel is a Research Faculty member at UC San Diego's CSE Department and a Research Health Science Specialist at the VA San Diego. His work spans computer science and engineering, cognitive science, and the health domain, and focuses on studying the impact of interactive technology on healthcare. As a member of the DesignLab (http://designlab.ucsd.edu) and the Center for Wireless and Population Health Systems (http://cwphs.ucsd.edu) at UCSD, he splits his time between developing novel methodologies to better understand behavior and activity in healthcare, and designing new prototypes and interactive technology at the intersection of Human-Computer Interaction and Ubiquitous Computing to better support patients, caregivers, and health professionals. His research is funded by the National Institutes of Health (NIH), the National Science Foundation (NSF), the Center for AIDS Research (CFAR), and the Agency for Healthcare Research and Quality (AHRQ), as well as by UC San Diego internal funding and the Moxie Foundation.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Mateusz Gola: Can porn be addictive? The use of the Research Domain Criteria (RDoC) framework in studies of new psychological disorders. (02/11/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Swartz Center for Computational Neuroscience, UCSD,

Institute of Psychology, Polish Academy of Sciences


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title: Can porn be addictive? The use of the Research Domain Criteria (RDoC) framework in studies of new psychological disorders.

 

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Vivienne Ming: Engineering Superpowers: Leveraging Theoretical Neuroscience to Maximize Human Potential (02/04/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Founder & Executive Chair
Socos https://www.socoslearning.com/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Engineering Superpowers: Leveraging Theoretical Neuroscience to Maximize Human Potential

A wide variety of societal problems can be framed as the challenge of connecting abstract, longer-term gains to highly local, individual decisions.

How can smartphone data across tens of thousands of individuals predict manic episodes in bipolar sufferers for prophylactic treatment? What should a recruiter look for in a candidate to optimize company-wide productivity over time? What can a parent do right now to maximize a child's health and educational outcomes?

In this talk, Dr. Ming will discuss a series of projects which apply theoretical neuroscience methodology to high-level problems in computational social science and are deployed in "the wild". Dr. Ming's goal is to maximize human potential by combining neuroscience, labor economics, machine learning, and product development.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Zewelanji Serpell: Training for Transfer: Opportunities and Challenges for Application in Schools (01/21/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Associate Professor, Dept. of Psychology
Virginia Commonwealth University
http://www.psychology.vcu.edu/people/serpell.shtml


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Training for Transfer: Opportunities and Challenges for Application in Schools

Recent advances in cognitive science support the view that cognitive skills, such as executive functions, are malleable in childhood and through adolescence. This talk presents findings from a set of studies testing the efficacy of one-on-one and computer-based cognitive training programs with adolescents in lab and school settings. Findings suggest some success in improving cognitive skills, particularly working memory. Training modality matters, however, and there is little evidence of far transfer to academic skills. The talk goes on to describe our efforts to develop more ecologically valid and culturally responsive methods to train African American elementary school students by applying cognitive training principles within a school-based chess program. To conclude, I discuss the challenges associated with achieving and measuring transfer of cognitive training gains to academic and behavioral domains that are meaningful to schools.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Tim Mullen: Towards Pervasive and Real-World Neuroimaging and BCI (01/14/2016)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Director, Qusp Labs (formerly Syntrogi Labs)
Co-Founder & CEO, Qusp

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Towards Pervasive and Real-World Neuroimaging and BCI

I will discuss and demonstrate recent efforts by our group towards evolving a new generation of real-world and pervasive brain-computer interface (BCI) and neuroimaging technology. I will discuss some of our recent research in this domain, including a recent collaboration between Qusp, Cognionics and INC developing a high-resolution dry mobile BCI system supporting real-time artifact rejection, imaging of distributed cortical network dynamics, and inference of cognitive state with a 64-channel dry-electrode wireless EEG headset. I will also briefly outline Qusp's vision of enabling easy integration of advanced bio-signal processing methods into diverse everyday applications. I will discuss and demonstrate applications of NeuroScale - a cloud-based software platform, providing continuous real-time interpretation of brain and body signals through an Internet API - as well as Neuropype - a Python-based graphical software environment for rapid design and deployment of pipelines for (real time) bio-signal processing and machine learning.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

 

Fall 2015

Joe Snider: Prospective optimization with limited resources (11/19/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Institute for Neural Computation, UCSD
http://inc.ucsd.edu/~poizner/jsnider.html

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Prospective optimization with limited resources

The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. We designed a task in which humans select actions from an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touch screen at variable speeds. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching disks one at a time in a rapid sequence, forming an upward path across the grid. Every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of the natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. Comparisons of human behavior with the behavior of ideal actors revealed the strategies used by humans in terms of how far into the future they looked (their "depth of computation") and how often they attempted to incorporate new information about the future rewards (their "recalculation period"). For a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase their recalculation period rather than sacrifice the precision of computation.
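A toy version of the "brute-force exploration up to a resource-limited finite depth" idea can be written as a depth-limited exhaustive path search. The square grid, two-moves-per-step geometry, random rewards, and lookahead depth of 3 below are simplifying assumptions, not the actual stimulus or the ideal-actor models used in the study.

```python
import random
random.seed(0)

ROWS, COLS = 12, 12
# reward grid standing in for the disk sizes on the scrolling stimulus
reward = [[random.random() for _ in range(COLS)] for _ in range(ROWS)]

def best_path_value(row, col, depth):
    """Brute-force value of the best path of length `depth` that starts by
    touching disk (row, col); each step moves to one of the two adjacent
    disks in the next row, so every choice constrains future options."""
    if depth == 0 or row == ROWS - 1:
        return reward[row][col]
    children = [(row + 1, c) for c in (col, col + 1) if c < COLS]
    return reward[row][col] + max(best_path_value(r, c, depth - 1) for r, c in children)

# an "ideal actor" with depth-of-computation 3: pick each next disk by looking 3 steps ahead
row, col, total = 0, 0, 0.0
while row < ROWS - 1:
    candidates = [(row + 1, c) for c in (col, col + 1) if c < COLS]
    row, col = max(candidates, key=lambda rc: best_path_value(rc[0], rc[1], 3))
    total += reward[row][col]
print("cumulative reward:", round(total, 2))
```

Varying the lookahead depth and how often the search is re-run gives a crude analogue of the depth-of-computation versus recalculation-period tradeoff discussed above.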

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Lewis Chuang: Beyond Steering in Human-Centered Closed-Loop Control (11/05/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Max Planck Institute for Biological Cybernetics
http://www.lewischuang.com

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Beyond Steering in Human-Centered Closed-Loop Control

Machines provide us with the capacity to achieve goals beyond our physical limitations. For example, automobiles and aircraft extend our physical mobility, allowing us to travel vast distances in far less time than it would otherwise take. It is truly remarkable that our natural perceptual and motor capabilities are able to adapt, with sufficient training, to the unnatural demands posed by vehicle handling. While much progress has been achieved in formalizing the control relationship between the human operator and the controlled vehicle, considerably less is understood about how human cognition influences this control relationship. Such an understanding is particularly important given the prevalence of autonomous vehicle control, which stands to radically modify the responsibility of the human operator from one of control to supervision. In this talk, I will first explain how the limitations of a classical cybernetics approach can reveal the necessity of understanding high-level cognition during control, such as anticipation and expertise. Next, I will present our research that relies on unobtrusive measurement techniques (i.e., gaze-tracking, EEG/ERP) to understand how human operators seek out and process relevant information whilst steering. Examples from my lab will be used to demonstrate how such findings can effectively contribute to the development of human-centered technology in the steering domain, such as with the use of warning cues and shared control. Finally, I will briefly present some efforts in modeling an augmented aerial vehicle (e.g., civil helicopters), with the goal of making flying a rotorcraft as easy as driving (www.mycopter.eu).

 

Biography: Lewis Chuang received his Ph.D. in Neuroscience in 2011 from the University of Tübingen. He currently leads a research group in the Max Planck Institute for Biological Cybernetics that investigates information seeking and processing behavior during closed-loop steering. He is also a principal investigator in a recently established research center for Quantitative Methods for Visual Computing (www.trr161.de).

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Ryad Benosman: A Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony (11/03/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Vision and Natural Computation Group
Institut National de la Sante et de la Recherche Medicale, Paris, France

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: A Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony

There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks that can be trained to perform specific tasks, mainly to solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general-purpose computation architectures that can break through the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to build a complete, scalable, compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability by solving real use cases, from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Marcela Mendoza: Bayesian Inference in Distributed Architecture for Mobile Applications (10/29/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Bioengineering, and Neural Interaction Lab, UCSD
http://coleman.ucsd.edu/

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Bayesian Inference in Distributed Architecture for Mobile Applications

Emerging mobile applications necessitate wireless transmission of large datasets and generate the need for efficient energy consumption. Exactly digitizing and transmitting these data is energy costly and leaves devices vulnerable to security attacks. Most decisions made with these data are statistical. From a Bayesian point of view, an accurate way to represent uncertainty and minimize risk in decision-making is via the posterior distribution. However, a way of accurately calculating the posterior has traditionally been unavailable.

In this talk, I will present a distributed framework for finding the full posterior distribution and show its implementation in a suite of energy-efficient architectures. We focus on problems where the latent signal can be modeled as sparse (LASSO). We leverage our recent results formulating Bayesian inference as a KL divergence minimization problem. We show that drawing samples from the Bayesian LASSO posterior can be done by iteratively solving LASSO problems in parallel. We instantiate this result with an analog-implementable solver and with a Graphics Processing Unit (GPU) solution. These architectures are amenable to mobile applications and transmit only the minimal relevant information (e.g., the posterior) needed for optimal decision-making.
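One generic way to instantiate "sampling by repeatedly solving LASSO problems in parallel" is perturbation-based sampling: re-solve the LASSO on independently noise-perturbed data and treat each solution as an approximate posterior draw. The sketch below shows that generic idea with scikit-learn's Lasso; it is not necessarily the KL-minimization construction referred to in the abstract, and the problem sizes, noise level, and regularization are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# sparse regression problem: y = A x + noise, with only k nonzero coefficients
n, d, k, sigma = 80, 200, 5, 0.1
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + sigma * rng.standard_normal(n)

def perturbed_lasso_sample(lam=0.02):
    """Approximate posterior sample: re-solve the LASSO on data perturbed by
    fresh observation noise; independent solves can run in parallel."""
    y_pert = y + sigma * rng.standard_normal(n)
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    model.fit(A, y_pert)
    return model.coef_

samples = np.stack([perturbed_lasso_sample() for _ in range(200)])
post_mean, post_std = samples.mean(axis=0), samples.std(axis=0)
print("approximate posterior spread on the true support:",
      post_std[x_true != 0].round(3))
```

Only summaries such as the posterior mean and spread would then need to be transmitted, rather than the raw data.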

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Arnaud Delorme: "EEGLAB -- Recent Developments and Future Directions" (10/15/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

 

Affiliation:
Swartz Center for Computational Neuroscience, INC, UCSD
http://sccn.ucsd.edu/~arno/

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

EEGLAB is a software environment developed by the Swartz Center for Computational Neuroscience at the University of California, San Diego. Running on the very broadly established MATLAB platform, it is designed to be a processing environment that can be applied to all major EEG hardware configurations and that provides a broad palette of the most advanced analysis procedures for research in this increasingly exciting functional brain imaging modality. A survey of 687 research respondents reported EEGLAB to be the software environment most widely used for electrophysiological data analysis worldwide, by a wide margin (neuro.debian.net/survey/2011/results.html). In this presentation I will highlight recent developments in the EEGLAB software environment, such as how to perform statistics on collections of single trials across subjects, and future directions such as hierarchical statistical analysis using general linear models for group analysis.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

 

Spring 2015

Ruth Williams: "Slowly oscillating periodic solutions for stochastic DDEs with positivity constraints" (06/04/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: "Slowly oscillating periodic solutions for stochastic DDEs with positivity constraints"

Dynamical system models with delayed feedback, state constraints and small noise arise in a variety of applications in science and engineering. Under certain conditions oscillatory behavior has been observed. Here we consider a prototypical fluid model approximation for such a system --- a one-dimensional delay differential equation with non-negativity constraints. We explore conditions for the existence, uniqueness and stability of slowly oscillating periodic solutions of such equations. We illustrate our findings with simple examples from Internet rate control and gene regulation.

Based on joint work with David Lipshutz.
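A crude Euler-Maruyama sketch of the kind of one-dimensional constrained delay equation described above is given below, with a constant inflow, delayed negative feedback, small noise, and a naive projection at zero standing in for the reflection dynamics. All constants are illustrative assumptions, not parameters from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# dX(t) = (a - beta * X(t - tau)) dt + eps dW(t),   subject to X(t) >= 0
a, beta, tau, eps = 0.2, 2.0, 1.0, 0.02     # illustrative constants (beta*tau > pi/2 gives oscillation)
dt, T = 1e-3, 60.0
n, lag = int(T / dt), int(tau / dt)

x = np.empty(n)
x[:lag] = 0.5                                # constant positive history
for i in range(lag, n):
    drift = a - beta * x[i - lag]
    x[i] = x[i - 1] + drift * dt + eps * np.sqrt(dt) * rng.standard_normal()
    x[i] = max(x[i], 0.0)                    # crude projection enforcing the non-negativity constraint

# x spends stretches pinned at the boundary followed by excursions above it,
# i.e. a slowly oscillating pattern of the kind analyzed in the talk
```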

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Frank Fernandez: "Dealing with Uncertainty: DARPA's New Paradigm for the 21st Century" (05/28/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title: "Dealing with Uncertainty: DARPA's New Paradigm for the 21st Century"

Bio:

Dr. Frank Fernandez was Director of the Defense Advanced Research Projects Agency (DARPA), the central R&D organization of the Department of Defense, from 1998 to 2001. He was a member of the Chief of Naval Operations (CNO) Executive Panel from 1983 until his appointment at DARPA. In this capacity, he provided advice to the CNO on a variety of issues. Currently, Dr. Fernandez is Chairman of the Naval Research Advisory Committee (NRAC), a committee chartered by law to advise the Secretary of the Navy on critical R&D issues. He is also a member of the Department of Homeland Security Science and Technology Advisory Panel, reporting to the Undersecretary for Science and Technology.

Dr. Fernandez received his Bachelor of Science in Mechanical Engineering and Master of Science in Applied Mechanics from Stevens Institute of Technology (1960-1961), and his Ph.D. in Aeronautics from California Institute of Technology in 1969. He was a Distinguished Research Professor in Systems Engineering and Technology Management at Stevens Institute of Technology in Hoboken, New Jersey.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Stephen Robinson: Estimating Phasic and Sustained Dynamic Information Transfer in the Human Brain (05/21/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
MEG Core Facility, National Institute of Mental Health
http://kurage.nimh.nih.gov/meglab/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Estimating Phasic and Sustained Dynamic Information Transfer in the Human Brain

A bivariate, nonlinear, and nonparametric dynamical measure of directional information transfer is described that is suitable for analyzing electrophysiological signals such as magnetoencephalography (MEG), electroencephalography (EEG), and electrocorticography (ECoG). This analysis, "temporo-dynamic symbolic transfer entropy" (tdSTE), was applied to a representative MEG recording of a normal control subject performing a working memory (n-back) task. A linearly constrained minimum variance (LCMV) beamformer was used to simultaneously estimate the source waveforms at nine selected brain locations. The tdSTE analysis was then applied to pairs of source waveforms, estimating both forward and reverse directional information flow. The transfer entropy (TE) time series were then averaged relative to the event markers, either stimuli or responses, for each of the n-back tasks. The tdSTE analysis was evaluated for higher frequencies, above 50 Hz, avoiding the confound of lower-frequency rhythms and emphasizing multi-unit cortical activity (MUA). The experimental tdSTE results reveal the presence of both sustained and phasic (event-related) components. The magnitude of the sustained components was much larger than that of their associated phasic components. Furthermore, we observed that the degree of information exchange between regions in each of the n-back tasks was encoded in the relative magnitudes of their sustained components. This was observed under the condition that the TE for each n-back condition was based upon probability distribution functions (PDFs) computed a priori from the corresponding blocks of data for the 0-, 1-, and 2-back trials. When PDFs were derived from the cumulative data of all three n-back tasks, little or no difference between 0-, 1-, and 2-back was observed. These results were validated against sequence-shuffled "surrogate" data, showing that tdSTE can reliably estimate directional information flow from the MEG data of individual subjects.
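For readers unfamiliar with symbolic transfer entropy, the Python sketch below shows the basic recipe: ordinal-pattern symbolization followed by a plug-in transfer entropy estimate, applied to simulated data in which one signal drives the other. It is a generic illustration, not the tdSTE implementation described above, and omits the temporal averaging relative to event markers.

import numpy as np
from collections import Counter

def symbolize(x, m=3):
    """Map a time series to ordinal-pattern symbols of embedding dimension m."""
    patterns = np.array([np.argsort(x[i:i + m]) for i in range(len(x) - m + 1)])
    return (patterns * (m ** np.arange(m))).sum(axis=1)   # encode permutations

def transfer_entropy(sx, sy):
    """Plug-in estimate (bits) of lag-1 transfer entropy from symbols sy to sx."""
    triples = Counter(zip(sx[1:], sx[:-1], sy[:-1]))       # (x_next, x, y)
    pairs_xx = Counter(zip(sx[1:], sx[:-1]))
    pairs_xy = Counter(zip(sx[:-1], sy[:-1]))
    singles = Counter(sx[:-1])
    n, te = len(sx) - 1, 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

# Simulated pair of signals in which y drives x with a one-sample lag.
rng = np.random.default_rng(2)
y = rng.normal(size=5000)
x = np.zeros_like(y)
for t in range(1, len(y)):
    x[t] = 0.6 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.normal()
sx, sy = symbolize(x), symbolize(y)
print("TE y->x:", round(transfer_entropy(sx, sy), 3))
print("TE x->y:", round(transfer_entropy(sy, sx), 3))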

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Ying Wu: Insights Into Insight: What EEG Reveals about Problem Solving Across Multiple Domains (05/07/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Swartz Center for Computational Neuroscience, INC
http://sccn.ucsd.edu/~ywu/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Insights Into Insight: What EEG Reveals about Problem Solving Across Multiple Domains

Problems can be solved in a variety of ways. One might systematically evaluate a known space of possible solutions until the right one is found. Alternatively, it may prove necessary to enlarge or restructure the expected problem space – so-called "thinking outside the box." This approach can yield an experience of unexpected insight, or a feeling of "Aha!". Whereas the subjective suddenness of an "Aha!" moment may lead to the impression that insight must be precipitated by a set of discrete, short-lived neural events, I will present evidence that even before a problem is presented, scalp-recorded measures of resting or baseline brain states are linked with future performance and the likelihood of experiencing insight during the search for a solution. Additionally, I will show that compared to more systematic problem-solving approaches, insight is accompanied by differences in cortical and likely cognitive engagement that are detectable throughout much of the problem-solving phase, rather than being confined to a distinct interval immediately preceding the dawn of a solution.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Victor Minces: Role of Neuromodulators and Neural Correlations in Network Encoding (04/09/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
UCSD Cognitive Science
Temporal Dynamics of Learning Center
http://tdlc.ucsd.edu


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Role of Neuromodulators and Neural Correlations in Network Encoding

A fundamental variable in understanding the relationship between brain activity and sensory processing is coding efficiency: how much information about a set of stimuli a neuronal pool represents. Coding efficiency depends on the information represented by the individual neurons (associated with their signal-to-noise ratios), but also on the statistical dependencies among neurons (associated with their correlated activity); the influence of the latter becomes more important as the size of the neural pool under consideration grows. I present a novel, simple way to estimate the encoding efficiency of neuronal pools in terms of signal-to-noise ratios and pairwise correlations. This approach allows exploration of the role of neuronal correlations in shaping coding efficiency. I apply this formulation to experimental data gathered from the visual cortex of the awake mouse, and show that the neuromodulator acetylcholine shapes neural correlations in a manner that is compatible with enhanced encoding efficiency, learning, and attention.
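A standard textbook calculation (not necessarily the speaker's formulation) illustrates why pairwise correlations dominate the efficiency of large pools: for a pool of identical neurons with uniform positive correlation, the pooled signal-to-noise ratio saturates instead of growing linearly with pool size.

def pooled_snr(snr_single, n, rho):
    """Fisher-information-style SNR of a pool of n identical neurons with
    single-neuron SNR `snr_single` and uniform pairwise correlation rho."""
    return n * snr_single ** 2 / (1.0 + (n - 1) * rho)

for rho in (0.0, 0.05, 0.2):
    row = [round(pooled_snr(1.0, n, rho), 1) for n in (1, 10, 100, 1000)]
    print(f"rho = {rho}: pooled SNR for pools of 1, 10, 100, 1000 neurons ->", row)

For rho = 0 the pooled SNR grows linearly with pool size, while for any positive uniform correlation it saturates near 1/rho, which is why correlations become the dominant factor for large pools.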

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

 

Winter 2015

Douglas A. Nitz: Cell Assemblies of the Basal Forebrain (03/12/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Dept. of Cognitive Science, UCSD
http://dnitz.com/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Cell Assemblies of the Basal Forebrain

Cortically-projecting basal forebrain neurons play a critical role in learning and attention, and their degeneration accompanies age-related impairments in cognition. Despite the impressive anatomical and cell-type complexity of this system, currently available data suggest that basal forebrain neurons lack complexity in their response fields, with activity primarily reflecting only macro-level brain states such as sleep and wake, onset of relevant stimuli and/or reward obtainment. The current study examined spiking activity of basal forebrain neuron populations across multiple phases of a selective attention task. Clustering techniques applied to the full population revealed bursting and non-bursting subtypes as well as a number of distinct categories of task-phase-specific activity patterns. Distinct population firing-rate vectors defined each task phase and most categories of task-phase-specific firing had counterparts with opposing firing patterns. Finally, among all subtypes of simultaneously recorded basal forebrain neurons, co-activity patterns evidenced grouping of neurons into cell assemblies whose spiking activity was optimally synchronized at a beta frequency (~20 Hz). Thus, consistent with known anatomical complexity, basal forebrain population dynamics are capable of differentially modulating their cortical targets over beta-frequency time intervals and according to the unique sets of environmental stimuli, motor requirements, and cognitive processes associated with different task phases.

 

Biography: Douglas Nitz received his PhD from UCLA in 1995, working primarily on brainstem mechanisms of rapid-eye-movement sleep production. As a postdoctoral fellow at the University of Arizona, he turned his attention to the problem of determining how single neurons and the ensemble activity patterns they compose map spatial relationships between an organism and its environment. This work continued at the Neurosciences Institute in San Diego, where he worked from 1998 to 2008. Nitz joined UCSD's Department of Cognitive Science in 2008 and continues to work on neural mechanisms for spatial cognition and its translation into decisions and actions. The basal forebrain work to be presented is the outgrowth of a new research project undertaken with Andrea Chiba, also of the UCSD Cognitive Science Department.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Bradley Voytek: Cognitive Networks and the Noisy Brain (03/05/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
UCSD Cognitive Science, Neurosciences, and INC
http://darb.ketyov.com/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Cognitive Networks and the Noisy Brain

Perception, cognition, and social discourse depend upon coordinated neural activity. This coordination takes place within noisy, overlapping, and distributed neural networks operating at rapid timescales. These networks are built upon a structural scaffolding with intrinsic neuroplasticity that changes with development, aging, disease, and personal experience. While the exact mechanisms for interregional communication are unknown, there is increasing evidence that oscillatory local field synchronization between neuronal groups facilitates communication at specific phases of the preferred oscillatory frequency. Successful interregional communication may rely upon transient synchronization between distinct low-frequency (< 80 Hz) oscillations, allowing for brief windows of communication via phase-coordinated local neuronal spiking. However, such a communication scheme would be susceptible to small perturbations in spiking rate, probability, and/or synchronization. I will explore the consequences of this theory for understanding cognition and a variety of neurological and psychiatric disorders.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Zeynep Akalin Acar: High-Resolution EEG Source Imaging (02/26/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
UCSD INC Swartz Center for Computational Neuroscience
http://sccn.ucsd.edu/~zeynep/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

High-Resolution EEG Source Imaging

Accurate electroencephalographic (EEG) source localization requires a forward electrical head model incorporating accurate conductivity values for the major head tissues. While consistent values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both measurement method and inter-subject differences. In simulations, mis-estimation of skull conductivity produces source localization errors as large as 31 mm (Akalin Acar and Makeig, 2013). In this presentation, I will describe a gradient-based iterative source conductivity and localization estimation (SCALE) approach for simultaneously estimating head tissue conductivities and spatial brain source distributions in a magnetic resonance (MR) image-derived head model, based on scalp maps of near-dipolar sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. I will show validations using simulated data, and applications to real EEG data from two adults and from infants. The ability to accurately estimate skull conductivity non-invasively from recorded EEG data itself, in combination with an electrical head model derived from a subject's anatomic MR head image, could remove a barrier to using EEG as a cm-scale accurate 3-D functional cortical imaging modality.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Emre Neftci: Neuromorphic Cognition (02/19/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
INC and BCI, UCSD
http://isn.ucsd.edu/~emre/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Neuromorphic Cognition

Our ability to evoke intelligent processing on artificial neural systems goes hand in hand with a confluence of neuroscience, machine learning and engineering. I will describe recent advances in neuromimetic inference and learning algorithms that address this challenge from a neuromorphic systems perspective. These algorithms range from finite state machines synthesized with neural models of working memory, attention and action selection for solving cognitive tasks; to the learning of probabilistic generative models with models of stochastic sampling and plasticity in spiking neural networks. These advances form the groundwork for a domain-specific language for probabilistic models that can be compiled against neural substrates. Combined with state-of-the-art neuromorphic electronic hardware, this framework will provide a unique technology for studying the processes of the mind at multiple levels of investigation.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Massimiliano Di Ventra: Memcomputing: Computing with and in Memory Using Collective States (02/12/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Physics, UCSD
http://physics.ucsd.edu/~diventra/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Memcomputing: Computing with and in Memory Using Collective States

I will discuss a novel computing paradigm we named memcomputing [1], inspired by the operation of our own brain, which uses (passive) memory circuit elements, or memelements [2], as the main tools of operation. I will first introduce the notion of universal memcomputing machines (UMMs) as a class of general-purpose computing machines based on systems with memory. We have shown [3] that the memory properties of UMMs endow them with universal computing power (they are Turing-complete), intrinsic parallelism, functional polymorphism, and information overhead, namely that their collective states can support exponential data compression directly in memory. It is the presence of collective states in UMMs that allows them to solve NP-complete problems in polynomial time using polynomial resources. As an example I will show the polynomial-time solution of the subset-sum problem implemented in a simple hardware architecture that uses standard microelectronic components [4]. Even though we have not proved NP=P within the Turing paradigm, the practical implementation of these UMMs would represent a paradigm shift from present von Neumann architectures, bringing us closer to brain-like neural computation [5].

 

[1] M. Di Ventra and Y.V. Pershin, Computing: the Parallel Approach, Nature Physics, 9, 200 (2013).
[2] M. Di Ventra, Y.V. Pershin, and L.O. Chua, Circuit Elements with Memory: Memristors, Memcapacitors, and Meminductors, Proc. IEEE, 97, 1717 (2009).
[3] F. L. Traversa and M. Di Ventra, Universal Memcomputing Machines, IEEE Transactions on Neural Networks and Learning Systems, (in press), arXiv:1405.0931.
[4] F. L. Traversa, C. Ramella, F. Bonani, and M. Di Ventra, Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states, arXiv:1411.4798
[5] F. L. Traversa, F. Bonani, Y.V. Pershin and M. Di Ventra, Dynamic Computing Random Access Memory, Nanotechnology 25, 285201 (2014).
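For readers unfamiliar with the subset-sum problem used as the example in [4], the sketch below is a conventional dynamic-programming solver, offered only as a point of comparison and not as an illustration of the memcomputing architecture itself. It runs in time proportional to the number of items times the target value, i.e. pseudo-polynomial (exponential in the bit-length of the target).

def subset_sum(values, target):
    """Classic dynamic-programming subset-sum solver.
    Runs in O(len(values) * target) time: pseudo-polynomial in the numeric
    value of `target`, i.e. exponential in its bit-length."""
    parent = {0: None}                # reachable sum -> (previous sum, item used)
    for v in values:
        for s in list(parent):        # snapshot so each item is used at most once
            t = s + v
            if t <= target and t not in parent:
                parent[t] = (s, v)
    if target not in parent:
        return None
    subset, s = [], target
    while parent[s] is not None:
        prev, v = parent[s]
        subset.append(v)
        s = prev
    return subset

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # -> [5, 4]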

 

Bio: Massimiliano Di Ventra obtained his undergraduate degree in Physics summa cum laude from the University of Trieste (Italy) in 1991 and did his PhD studies at the Ecole Polytechnique Federale de Lausanne (Switzerland) in 1993-1997. He has been Research Assistant Professor at Vanderbilt University and Visiting Scientist at IBM T.J. Watson Research Center before joining the Physics Department of Virginia Tech in 2000 as Assistant Professor. He was promoted to Associate Professor in 2003 and moved to the Physics Department of the University of California, San Diego, in 2004 where he was promoted to Full Professor in 2006. Di Ventra's research interests are in the theory of electronic and transport properties of nanoscale systems, non-equilibrium statistical mechanics, DNA sequencing/polymer dynamics in nanopores, and memory effects in nanostructures for applications in unconventional computing and biophysics. He has been invited to deliver more than 200 talks worldwide on these topics (including 6 plenary/keynote presentations, 7 talks at the March Meeting of the American Physical Society, 5 at the Materials Research Society, 2 at the American Chemical Society, and 1 at the SPIE). He serves on the editorial board of several scientific journals and has won numerous awards and honors, including the NSF Early CAREER Award, the Ralph E. Powe Junior Faculty Enhancement Award, fellowship in the Institute of Physics and the American Physical Society. He has published more than 140 papers in refereed journals (13 of these are listed as ISI Essential Science Indicators highly-cited papers of the period 2003-2013), co-edited the textbook Introduction to Nanoscale Science and Technology (Springer, 2004) for undergraduate students, and he is single author of the graduate-level textbook Electrical Transport in Nanoscale Systems (Cambridge University Press, 2008).

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Ning Lan: Corticospinal Computation of Sensorimotor Control for Normal and Abnormal Movements (02/05/2015)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:

Institute of Rehabilitation Engineering
Med-X Research Institute
School of Biomedical Engineering
Shanghai Jiao Tong University

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Corticospinal Computation of Sensorimotor Control for Normal and Abnormal Movements

Evidence from human motor behavior suggests that the central nervous system (CNS) uses separate motor modules to control movement and posture. Each contains its own central programming and corticospinal pathway carrying motor commands to the spinal alpha and gamma motoneurons (MNs). Abnormal motor behaviors, such as tremor in patients with Parkinson's disease (PD), demonstrate a similar modularity. In this presentation, I will discuss a combined behavioral and computational approach to understanding the corticospinal computation of sensorimotor control for both normal and abnormal movements. A modular control model for movement and posture is proposed based on the dual spinal alpha-gamma sensorimotor system. In this study, we ask two fundamental questions: How can the alpha-gamma sensorimotor system implement modular control? And what is the computational role of propriospinal neurons (PNs) in modular control of movements (both normal and abnormal)? Simulated model behaviors capture kinematic and EMG features of reach-and-hold human movements. Furthermore, the modular control model is able to predict pathological behaviors of action tremor in essential tremor (ET) patients and resting (or postural) tremor in PD patients. These results suggest a computational gating function of the PN network in transmitting and processing descending motor commands (both normal and abnormal), and support the hypothesis that modular control of posture and movement can be achieved with the dual alpha-gamma sensorimotor system.

Bio: Professor Ning Lan obtained the B.S. degree in Precision Instruments from Shanghai Jiao Tong University (SJTU) in 1982, and the Ph.D. degree in Biomedical Engineering from Case Western Reserve University (CWRU) in 1989. Before joining SJTU, he was on the faculty in Biokinesiology and Physical Therapy at the University of Southern California. Currently, he serves as a guest associate editor of Frontiers in Computational Neuroscience of the Nature Publishing Group, and is on the editorial boards of ISRN Computational Biology and Physical Medicine and Rehabilitation - International. He also serves as the Founding Deputy Director of The Strategic Alliance for Research and Development of Rehabilitation and Assistive Technologies for Medical Industries in China. He was one of the founding members of the Neural Engineering Committee of the Chinese Society of Neuroscience, and served as its founding deputy director from 1995 to 1999. From 1997 to 2001, he served as Assistant Editor of IEEE Transactions on Rehabilitation Engineering (now IEEE Transactions on Neural Systems and Rehabilitation Engineering), and as Associate Editor of the Chinese Journal of Rehabilitation Theory and Practice from 1997 to 1999. He organized the 1st, 2nd, and 3rd International Conferences on Rehabilitation Medical Engineering (CRME) in Shanghai, China, in 2012, 2013, and 2014. His research interests are in neural electrical stimulation, neuromodulation for patients with Parkinson's disease, stroke, and spinal cord injury, and neural and computational modeling of movement control.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

 

Fall 2014

Ratnesh Lal: Nanoscale engineering mediating neural function and activity (12/04/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:

MAE, Bioengineering and CNME/IEM
http://lal.eng.ucsd.edu/

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Nanoscale engineering mediating neural function and activity

Coordinated activity of ion channels and receptors in brain cells controls electrical and chemical signal transduction and synaptic transmission, mediating normal brain activity and pathologies. The current emphasis of the BRAIN Initiative has been to design enabling technology to understand ensemble brain activity. Defining the nanoscale (< 10 nm) structural conformations of ion channels and receptors that mediate brain activity, though essential for controlling intricate brain connectivity, remains underappreciated; yet these nanostructures would ultimately drive any remedial paradigm(s) resulting from the functional mapping initiative. Unfortunately, few techniques can image 1-10 nm biological structures in liquid. We have been developing an array atomic force microscope (AFM) integrated with functional analytical tools (e.g., electrical conductance measurement, FRET, TIRF), with each individual AFM consisting of an array of conducting cantilevered probes with self-sensing and actuation capabilities. The new AFM array will enable 1) imaging the synaptic network at the scales of its organization, from nano to macro, 2) measuring localized electrical and chemical activity, and 3) interfacing with animal and human subjects. This novel technology will allow force-controlled imaging of live neural cells at multiple locations simultaneously with independent imaging feedback. Integration of an ion-sensing tip on the cantilevers will allow localized and highly parallel electrical recording of synaptic activity. This technology will enhance our understanding of how synaptic networks mediate global neural communication.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Conor Heneghan: Advances in measurement of sleep (11/13/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
University College Dublin, and ResMed
http://www.resmed.com/

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Advances in measurement of sleep

Despite the fact that we spend nearly one third of our lives asleep, surprisingly little was known about sleep until the 20th century. Now, sleep medicine is firmly established as a significant branch of medical practice, taking its roots strongly from the work of Nathaniel Kleitman and colleagues at the University of Chicago in the 1950s. The field progressed in the 1960s, with an increasing standardization of physiological signal recording that led to the current standard for sleep measurement—the polysomnogram (PSG). Recently, there has been continued interest in developing sleep measurement technologies that can provide useful information about sleep, over multiple nights, and with minimal interference to the subject. One technology that shows a lot of promise in this area is radio-frequency (RF) biomotion sensing of sleep. For the last several years, our research team has focused on producing a noncontact RF biomotion sensor, which is practical for use in home and lab-based sleep measurement. Our goal has been to simplify the process of sleep and respiration measurement, allowing continuous monitoring over multiple nights—permitting individuals to understand their own sleep patterns or enabling medical professionals to provide improved care and guidance to individuals suffering from a number of sleep and respiratory disorders. We have developed algorithms that can map the movement signal into useful information about sleep and respiration. In studies where the sensor and algorithm are compared with the gold-standard PSG measurements, the noncontact system agrees with the sleep/wake classification of the PSG more than 85% of the time. This is comparable with the best actigraphy systems. Moreover, since the system can measure respiratory effort, it can be used to identify apnea and hypopnea events with a good degree of accuracy. In a study of 74 subjects suspected of having sleep apnea, the noncontact sensor system was 90% sensitive and 92% specific in recognizing patients with and without sleep apnea, using the standard cutoff of an Apnea Hypopnea Index greater than 15 to define sleep apnea. The ongoing challenge is to further improve the accuracy and sensitivity of the technology and, ideally, to add in further information without compromising the convenience and noninvasiveness of the overall system from a user's point of view.
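The sensitivity and specificity figures quoted above follow the standard definitions; the short sketch below, with made-up AHI values rather than data from the study, shows how such numbers are computed against a reference PSG-derived AHI using the same cutoff of 15 events per hour.

def sensitivity_specificity(ahi_estimated, ahi_reference, cutoff=15.0):
    """Sensitivity and specificity of an apnea classification based on an
    estimated AHI, scored against a reference (e.g., PSG-derived) AHI."""
    pred = [a > cutoff for a in ahi_estimated]
    true = [a > cutoff for a in ahi_reference]
    tp = sum(p and t for p, t in zip(pred, true))
    tn = sum(not p and not t for p, t in zip(pred, true))
    fp = sum(p and not t for p, t in zip(pred, true))
    fn = sum(not p and t for p, t in zip(pred, true))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical AHI values for five subjects (sensor estimate vs. reference PSG).
sens, spec = sensitivity_specificity([22, 8, 31, 10, 18], [25, 5, 28, 16, 17])
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")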

 

Reference:
Conor Heneghan, "Wireless Sleep Measurement: Sensing Sleep and Breathing Patterns Using Radio-Frequency Sensors," IEEE EMBS Pulse Magazine, September 21, 2014.
http://pulse.embs.org/september-2014/wireless-sleep-measurement/

 

Biography:
Conor Heneghan, PhD, is Chief Engineer with ResMed's Strategy and Ventures Group, and Adjunct Associate Professor at University College Dublin School of Electrical, Electronic and Communications Engineering. He received his PhD in Electrical Engineering from Columbia University, New York in 1995, and was co-founder of BiancaMed, a pioneer in non-contact sleep measurement which was acquired by ResMed in 2011. His research interests are biomedical signal processing and analysis, particularly focused in the areas of sleep, cardiovascular and respiratory disorders.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Joaquin Rapela: Characterizing Neural Ensembles from High-Resolution Physiological Recordings (10/30/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Swartz Center for Computational Neuroscience, UCSD
http://sccn.ucsd.edu/~rapela


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Characterizing Neural Ensembles from High-Resolution Physiological Recordings

If we observe a fluid at the molecular level we see random motions, but if we look at it macroscopically we may see a smooth flow. An intriguing possibility is that by analyzing brain activity at a macroscopic level, i.e., at the level of neural ensembles, we may discover patterns not apparent at the single-neuron level that are as useful as velocity or temperature are for understanding, and predicting, the motion of fluids. Several models have been developed to simulate the activity of ensembles of neurons, but only now, with the availability of high-resolution neural recordings, has it become possible to accurately estimate the parameters of these models from physiological data and to learn from those parameters how ensembles represent information in the brain. In this talk I will describe methods that we are developing to characterize neural ensembles from electrophysiological recordings, and comment on two applications of these methods that we are currently pursuing.

I will show how, starting from a model of a single neuron of a given type (e.g., Hodgkin-Huxley), it is possible to derive accurate dynamical models of ensembles of homogeneous neurons of that type. We call these models ensemble density models, or EDMs. EDMs are high-dimensional nonlinear dynamical models. To facilitate the estimation of state variables and parameters in large networks of EDMs from physiological data, we derived a method that significantly reduces the dimensionality of EDMs with minor degradation of approximation power. We are using a faster maximum-likelihood method for the estimation of connectivity parameters in networks of EDMs, and an MCMC algorithm that approximates the expected value, as well as higher moments, of both states and connectivity parameters, conditioned on observed data. I will outline two applications of these methods: 1) the study of the role of connectivity among neural ensembles in the control of vocal articulators during speech production, using high-resolution ECoG recordings in humans; and 2) the estimation of ensemble receptive fields in sensory cortices.

We want to apply these tools to characterize diverse ensemble electrophysiological recordings. If you have this type of recording and would like to analyze it at the ensemble level, please contact the speaker at rapela@ucsd.edu.

 

Reference: J. Rapela, M. Kostuk, P. Rowat, T. Mullen, K. Bouchard, and E. Chang, "Characterizing Neural Activity at the Ensemble Level," IEEE EMBS BRAIN Grand Challenges Conference, Washington DC, Nov. 13-14, 2014. Available at http://sccn.ucsd.edu/~rapela/cbam/brainGrandChallenges14.pdf

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

H.-S. Philip Wong: Nanoscale Electronic Synapses for Brain-Inspired Computing (10/16/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Electrical Engineering and Stanford SystemX Alliance
Stanford University
http://nano.stanford.edu/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Nanoscale Electronic Synapses For Brain-Inspired Computing

Unlike classical enterprise computing that operates on structured, digital data, 21st-century information technology (IT) must process, understand, classify, and organize vast amounts of data in real time. Such applications will be dominated by machine-learning kernels operating on terabytes of active data with little data locality. At the same time, massively redundant sensor arrays sampling the world around us will give humans the perception of additional "senses," blurring the boundary between the biological, physical, and cyber worlds. The challenge is to manage the resulting data deluge; e.g., processing 10^14 floating-point operations per second using 1 W between the retina and the brain, or a neural map yielding data at 1 Tbit/sec. Processing such data in wearable devices clearly demands computation well beyond the state of the art.

As information technology becomes pervasive in society and ubiquitous in our lives, the desire for always-on, always-available, embedded-everywhere, and human-centric information systems calls for a different computation paradigm.

In this talk, I will describe the use of nanoscale electronic devices that emulate the functions of the biological synapse. The goal is to develop hardware technologies for brain-inspired computing and electronic emulation of the brain. Phase-change memory is employed to demonstrate the spike-timing-dependent plasticity (STDP) behavior of the biological synapse. A small array of such devices is connected in a recurrent Hopfield network to perform pattern recognition tasks and the tradeoff between variation tolerance and the speed/energy performance of the network is studied. The use of metal-oxide resistive switching memory (RRAM) presents another exciting opportunity. The stochastic nature of the physics of resistive switching enables RRAM to serve as analog weights in a neural network. It is possible to tune the RRAM to introduce randomness for hyper-dimensional computation for robust processing of perceptual data. I will describe on-going collaborative efforts to demonstrate in hardware small and medium-scale system applications using electronic synapses integrated with CMOS neurons.
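For readers unfamiliar with the Hopfield-network demonstration mentioned above, the following software sketch implements a conventional Hopfield associative memory with Hebbian weights and recovers a stored pattern from a corrupted cue. It is a plain NumPy illustration of the network idea only, not a model of the phase-change synaptic devices.

import numpy as np

def train_hopfield(patterns):
    """Hebbian (outer-product) weights for +/-1 patterns, zero diagonal."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps=20):
    """Synchronous updates until the network state stops changing."""
    for _ in range(steps):
        new = np.sign(w @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(3)
patterns = rng.choice([-1, 1], size=(3, 64))      # three stored 64-"pixel" patterns
w = train_hopfield(patterns)

cue = patterns[0].copy()
cue[rng.choice(64, size=8, replace=False)] *= -1  # corrupt 8 of 64 entries
restored = recall(w, cue)
print("overlap with the stored pattern:", int(restored @ patterns[0]), "/ 64")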

 

References:

S. B. Eryilmaz, D. Kuzum, R. Jeyasingh, S. Kim, M. Brightsky, C. Lam, H.-S. P. Wong, "Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array," Frontiers in Neuroscience, 8:205 (2014). doi: 10.3389/fnins.2014.00205


D. Kuzum, S. Yu, H.-S. P. Wong, "Synaptic Electronics: Materials, Devices and Applications," Nanotechnology, 24. 382001, 2013. doi:10.1088/0957-4484/24/38/382001


S. Yu, B. Gao, Z. Fang, H. Yu, J. Kang, H.-S. P. Wong, "Stochastic Learning in Oxide Binary Synaptic Device for Neuromorphic Computing," Frontiers in Neuroscience, vol. 7, article 186, pp. 1–9, October 31, 2013. doi: 10.3389/fnins.2013.00186


S. Yu, B. Gao, Z. Fang, H. Yu, J. Kang, H.-S. P. Wong, "Stochastic Learning in Oxide Binary Synaptic Device for Neuromorphic Computing," Advanced Materials, Volume 25, Issue 12, pages 1774–1779, March 25, 2013.


D. Kuzum, R. G. D. Jeyasingh, S. Yu, H.-S. P. Wong, "Low-Energy Robust Neuromorphic Computation Using Synaptic Devices," IEEE Trans. Electron Devices, vol. 59, issue 12, pp. 3849–3894 (2012). DOI: 10.1109/TED.2012.2217146

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Jeffrey L. Krichmar: CARL-SJR: A Socially Assistive Robot With Rich Tactile Sensory Interaction (10/02/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Cognitive Sciences and Department of Computer Science
University of California, Irvine
http://www.socsci.uci.edu/~jkrichma/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

CARL-SJR: A SOCIALLY ASSISTIVE ROBOT WITH RICH TACTILE SENSORY INTERACTION

Research studies show that children with Autism Spectrum Disorders (ASD) or Attention Deficit Hyperactivity Disorders (ADHD) respond well to robotic artifacts, and suggest that robots fitting into the goals of Sensory Integration Theory (SIT) might serve as a form of therapy for children with ASD or ADHD. SIT focuses directly on the neurological processing of sensory information as a foundation for learning higher-level (motor or academic) skills. Treatment goals center on improving sensory processing to either (a) develop better sensory modulation as related to attention and behavioral control, or (b) integrate sensory information to form better perceptual schemas and practical abilities as a precursor to academic skills, social interactions, or more independent functioning. To address these goals, we present a novel neuromorphic robot that interacts with users through touch sensing and visual signaling over its whole surface. Our robot, called the Cognitive Anteater Robotics Laboratory – Spiking Judgment Robot (CARL-SJR), has a convex, hemispheric shell containing a matrix of trackballs for sensing touch and LEDs for communicating with users. CARL-SJR is currently in the prototype stage: it rides on a Roomba for mobility and incorporates a spiking neural network (SNN) model of somatosensory cortex. We explore tactile sensory decoding through rate coding and temporal coding, and compare the performance of the two coding schemes for classifying different tactile inputs from hand movements. Our evaluation of the network's ability to categorize hand movements shows that both rate and temporal coding performed well. The results could guide us in building a sophisticated spiking neural network to achieve treatment goals through learning, adapting to, and shaping users' behaviors.

Joint work with Liam D. Bucci and Ting-Shuo Chou

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Spring 2014

Mark McDonnell: Impact of Stochastic Vesicle Variability on Spiking in the Peripheral Auditory System (06/19/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Computational and Theoretical Neuroscience Laboratory
Institute for Telecommunications Research
University of South Australia
http://www.itr.unisa.edu.au/ctnl


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Impact of Stochastic Vesicle Variability on Spiking in the Peripheral Auditory System

Synaptic vesicle release is known to be governed by stochastic biophysical processes. This manifests as random 'noisy' variations in post-synaptic current, and stochastic post-synaptic spiking patterns. The probability of vesicle release can change over time, resulting in short-term plasticity effects such as depression and facilitation. Well-known phenomenological models characterise these effects in, for example, cortical pyramidal neurons.

However, the influence of stochastic synaptic dynamics on neuronal spiking is nowhere more stark than in the peripheral auditory system. For example, many auditory nerve fibers spike 'spontaneously' at high rates (100 spikes per second) in the absence of acoustical stimulation. Unlike cortical neurons, these nerve fibers receive synaptic input from ribbon synapses in inner-hair cells, which exhibit time-continuous graded responses to sounds rather than discrete spiking. Intra-cellular calcium dynamics in inner-hair cells is likely to strongly influence vesicle-release.

In this talk I will describe preliminary work on introducing short-term depression and calcium channel noise into models of inner hair-cell synaptic dynamics. The objective of this work is to extend existing models so that they accurately capture both long-term and short-term spike correlations observed in experimental recordings from auditory nerve fibers.

Auditory nerve fibers send their spikes to cells in the cochlear nucleus, some of which also exhibit stochastic short-term plasticity. I will also briefly describe how, for such cells, the number of parallel incoming synapses interacts with short-term depression to cause varying phase shifts in post-synaptic spiking in response to periodically modulated pre-synaptic spiking.
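As background for the "well-known phenomenological models" of short-term depression mentioned above, here is a minimal Tsodyks-Markram-style sketch in Python: each presynaptic spike releases a fixed fraction of a resource variable that recovers exponentially between spikes. The parameters are illustrative only, and the stochastic vesicle release and calcium-channel noise discussed in the talk are not included.

import numpy as np

def depressing_synapse(spike_times, U=0.5, tau_rec=0.2):
    """Tsodyks-Markram-style depression: each presynaptic spike releases a
    fraction U of the available resource x, which recovers exponentially
    with time constant tau_rec (s). Returns the relative amplitude of each
    release event (deterministic; no stochastic vesicle release)."""
    amplitudes, x, t_last = [], 1.0, None
    for t in spike_times:
        if t_last is not None:
            x = 1.0 - (1.0 - x) * np.exp(-(t - t_last) / tau_rec)   # recovery
        amplitudes.append(U * x)      # released fraction -> response amplitude
        x -= U * x                    # depletion caused by this spike
        t_last = t
    return np.array(amplitudes)

# Regular 100-Hz presynaptic drive: amplitudes depress toward a steady state.
print(np.round(depressing_synapse(np.arange(0.0, 0.2, 0.01)), 3))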

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Osonde Osoba: "Noise-benefits in Backpropagation Training." (06/05/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Ming Hsieh Department of Electrical Engineering
University of Southern California
osondeos@usc.edu


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

"Noise-benefits in Backpropagation Training."

The talk will present recent work that shows how careful noise injection can speed up the convergence of the popular backpropagation training algorithm for feedforward neural networks. This result is based on prior work that showed how careful noise injection speeds up the convergence of Expectation-Maximization (EM) algorithms for maximum-likelihood estimation with missing or corrupted data. The crucial link is the new fact that the backpropagation algorithm is a special case of a generalized EM algorithm. Other special cases of noise-boosted EM include the popular k-means clustering algorithm used in big-data processing and the Baum-Welch algorithm used to train hidden Markov models. The noise boosting also extends to speeding up the extensive training involved in using convolutional neural networks (CNNs) for image classification. The following link provides an implementation of the noise-boosted backpropagation training algorithm for CNNs: http://sail.usc.edu/~audhkhas/software/NCNN.zip
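The sketch below illustrates only the mechanics of noise injection during backpropagation on a toy problem: small zero-mean Gaussian noise is added to the output-layer error signal of a tiny two-layer network. It does not implement the noisy-EM sufficient conditions of the cited work, which specify when such noise provably speeds convergence; all parameters are made up for illustration.

import numpy as np

rng = np.random.default_rng(4)

# Toy data: the XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, noise_std = 0.5, 0.05
for epoch in range(10000):
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2)
    # Noise injection: perturb the output-layer error with small zero-mean
    # Gaussian noise (illustrative only; not the paper's noise condition).
    err = (out - y) + noise_std * rng.normal(size=out.shape)
    d_out = err * out * (1 - out)                  # backward pass, squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Trained outputs should approach the XOR targets 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))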

 

Bio: Osonde Osoba is a postdoctoral researcher at the Signal and Image Processing Institute at the University of Southern California (USC). He is also an instructor at USC's Viterbi School of Engineering. He received his PhD in Electrical Engineering from USC in August 2013 under the advisement of Prof. Bart Kosko. His dissertation was on "Noise Benefits in Expectation-Maximization Algorithms." He has interned at RAND and Intel where he worked on stochastic optimization algorithms and machine learning. He was a Ming Hsieh Institute Ph.D. scholar, a National GEM fellow, and an Annenberg fellow.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Manuel Hernandez: Towards an Understanding of the Neural Mechanisms Underlying Human Postural Control (05/29/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Poizner Laboratory, Institute for Neural Computation
http://inc.ucsd.edu/poizner/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Towards an Understanding of the Neural Mechanisms Underlying Human Postural Control

Falls are a significant cause of mortality and serious injury in older adults, particularly in people with neurological disorders such as Parkinson's disease. The ability to maintain balance and postural control is commonly evaluated using center of pressure (COP) data. Methods such as Stabilogram Diffusion Analysis have examined the stochastic characteristics of the COP but require numerous, long-duration trials for reliable measures. To further our understanding of the underlying dynamical processes in postural control, a new conceptual framework for studying human postural control using the COP velocity autocorrelation function is proposed, and its results are compared to Stabilogram Diffusion Analysis. This work suggests a concise and reliable measure of postural control that may further our understanding of the mechanisms behind balance dysfunction in neurological populations and provide a tool for quantifying future neurorehabilitative interventions aimed at improving balance.
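As a rough illustration of the proposed measure (not the authors' implementation), the following Python sketch computes a normalized autocorrelation of the COP velocity from a simulated quiet-standing trace; in practice the COP would come from force-plate recordings.

import numpy as np

def velocity_autocorrelation(cop, fs, max_lag_s=2.0):
    """Normalized autocorrelation of the COP velocity.
    cop : 1-D array of center-of-pressure positions (e.g., AP direction, mm)
    fs  : sampling rate in Hz."""
    v = np.diff(cop) * fs                          # finite-difference velocity
    v = v - v.mean()
    max_lag = int(max_lag_s * fs)
    acf = np.array([np.dot(v[:len(v) - k], v[k:]) for k in range(max_lag)])
    return acf / acf[0]

# Synthetic stand-in for a 60-s quiet-standing trial sampled at 100 Hz.
rng = np.random.default_rng(5)
fs = 100
t = np.arange(0, 60, 1 / fs)
cop = np.cumsum(rng.normal(scale=0.05, size=t.size)) + 2.0 * np.sin(2 * np.pi * 0.3 * t)
acf = velocity_autocorrelation(cop, fs)
print("first zero crossing of the velocity ACF (s):", np.argmax(acf < 0) / fs)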

 

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Stephen E. Robinson: MEG Comparisons of Shared Information Among Schizophrenic Patients, Their Unaffected Siblings and Normal Controls (05/22/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Core MEG Facility, NIMH Bethesda, MD
http://kurage.nimh.nih.gov/meglab/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

MEG Comparisons of Shared Information Among Schizophrenic Patients, Their Unaffected Siblings and Normal Controls

A brief introduction to nonlinear dynamical measures such as entropy and mutual information will be given, followed by how these can be applied to MEG. Previous MEG studies using a working memory (n-back) task have shown differences among schizophrenic patients, their unaffected siblings, and normal subjects in beta-band event-related desynchronization in dorsolateral prefrontal cortex and parietal cortex. This agrees closely with findings in fMRI. Symbolic mutual information (SMI) is a pairwise measure of shared information between brain regions. Applying SMI to the same datasets in a 50-300 Hz bandpass shows that the most significant differences in shared information among the groups are found in rostral prefrontal cortex. Furthermore, these results appear to be independent of task or memory workload. Further studies are needed to determine the sensitivity and specificity of this measure, and to investigate cofactors such as medication and gender differences.

 

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Sadique Sheik: Role of Mismatch in Neuromorphic Engineering (05/15/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
BioCircuits Institute, UCSD
http://biocircuits.ucsd.edu/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Role of Mismatch in Neuromorphic Engineering

Neuromorphic analog integrated circuits built to mimic biological spiking neurons and synapses involve large numbers of transistors, capacitors, and other components. Inaccuracies in the fabrication lead to variability in the sizing of these integrated components and their electrical properties, resulting in mismatch, e.g. no two identically designed transistors are truly identical. Transistor mismatch directly impacts the collective dynamics of multiple identically designed neural elements integrated on neuromorphic chips. In this chalk talk I will discuss some of the implications of transistor mismatch and other fabrication induced component variability on neuromorphic engineering, and some of the strategies adopted to tackle such variability. I will further show that some computational models can actually exploit variability to enhance their performance. I will discuss one such model that I have been working on - unsupervised learning of spatiotemporal spike patterns. I will conclude by sharing my thoughts on the kind of computational models that we, as a community, should be working towards, in order to build robust cognitive systems.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Stuart Anstis: I Thought I Saw it Move: Illusions of Movement (05/01/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:

UC San Diego, Department of Psychology
http://anstislab.ucsd.edu/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

I Thought I Saw it Move: Illusions of Movement

Motion perception has been called one of the most ancient and primitive forms of vision (Walls, 1942). Animals depend upon it to catch their next meal, or to avoid being another animal's next meal. We use it every day when we drive; yet although cars move ten times faster than we can run, our motion perception and reaction times have not sped up to match, and this can lead to death. So it is important to know our perception's cans and can'ts.

I have developed a number of new motion illusions. These produce perceptual errors that throw light on the normal processes of motion perception. The illusions vary the Contrast, Context, Size, Object-Parsing, Ambiguity and Retinal Eccentricity of moving objects. In the Footsteps illusion, static background stripes alter the contrast of moving colored squares, which makes their apparent speed vary (think: driving in the fog). In the Flying Bugs illusion, a moving background alters the perceived direction in which circling bugs fly (think: moon appears to sail behind moving clouds). In the Zigzag illusion, drifting random dots appear to move in new directions when we walk toward the screen, showing that Size matters. In the Chopsticks illusion, sliding intersections that are circling clockwise appear to move counterclockwise; and our eyes are quite unable to track this circling movement. I shall also show ambiguous patterns of regularly spaced moving spots, which appear to re-group in real time even though the stimulus remains the same, so we can watch our own visual computations in action. Also, certain moving striped patterns are correctly seen in central vision, but dramatically change their perceived directions when seen eccentrically (out of the corner of your eye). This reveals that the fovea and peripheral retina handle visual motion quite differently. Finally, moving patterns can shift the perceived position of flashed targets, showing interactions in how we see position and motion.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Thorsten O. Zander: Passive Brain-Computer Interfaces for Automated Adaptation and Implicit Control in Human-Computer Interaction (04/17/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Technische Universität Berlin
http://www.phypa.org/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Passive Brain-Computer Interfaces for Automated Adaptation and Implicit Control in Human-Computer Interaction

Over the last three decades, several means of interaction with Brain-Computer Interfaces (BCIs) have been extensively investigated. While most research has aimed at the design of supportive systems for severely disabled persons, over the last decade a new trend has emerged towards applications for the general population. For users without disabilities, a specific type of BCI, the passive Brain-Computer Interface (pBCI), has shown high potential for improving Human-Machine and Human-Computer Interaction (HCI).

With pBCIs, a new type of interaction has emerged, based on implicit control. Implicit Interaction aims at controlling a computer system through behavioral or psychophysiological aspects of user state, independently of any intentionally communicated commands. This introduces a new type of HCI, which in contrast to most currently implemented forms of interaction does not require the user to explicitly communicate with the machine. Users can focus on understanding the current state of the system and developing strategies for optimally reaching the goal of the given interaction. Based on the information extracted by a pBCI and the given context, the system can adapt automatically to the current strategies of the user. Principles of Implicit Interaction in pBCIs and their applications to HCI are illustrated with results of an EEG-based study in which simple cursor movements on a 2D grid were guided to a target.

 

Biography:

Thorsten Zander is trained in mathematics with a focus on mathematical logic, and studied Brain-Computer Interfaces (BCI) in the group of Klaus-Robert Mueller at the Fraunhofer FIRST in Berlin. He currently leads Team PhyPA at the Department for Biological Psychology and Neuroergonomics at the Technical University of Berlin, introducing passive BCI and investigating applications of its means of interaction for healthy users. Among several research collaborations he worked extensively with Scott Makeig at the Swartz Center for Computational Neuroscience investigating cognitive processes underlying passive BCI, and more recently with Bernhard Schoelkopf on new methodologies for passive BCIs.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Winter 2014

Paul Sajda: "Your Eyes Give You Away: Pupillary responses, EEG Dynamics and Applications for BCI" (03/13/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Laboratory for Intelligent Imaging and Neural Computing
Columbia University
http://liinc.bme.columbia.edu


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Your Eyes Give You Away: Pupillary responses, EEG Dynamics and Applications for BCI

As we move through an environment, we are constantly making assessments, judgments, and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions -- our implicit "labeling" of the world. In this talk I will describe our work using physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3-D environment.

Specifically, we record electroencephalographic (EEG), saccadic, and pupillary data from subjects as they move through a small part of a 3-D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to those that are labelled. Finally, the system plots an efficient route so that subjects visit similar objects of interest.

We show that by exploiting the subjects' implicit labeling, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers' inference of subjects' implicit labeling. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3-D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user's interests.
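The graph-based step can be illustrated with a minimal label-propagation sketch on a made-up similarity graph: interest scores inferred for a few "seen" objects are diffused to visually similar, unseen ones. This is a generic Zhou-style propagation scheme, not the hBCI system's actual computer-vision graph or classifier.

import numpy as np

def propagate_labels(W, seed_scores, alpha=0.85, iters=50):
    """Graph label propagation: diffuse seed interest scores over a symmetric
    non-negative affinity matrix W (symmetric normalization, Zhou-style)."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    f = seed_scores.astype(float)
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * seed_scores
    return f

# Tiny made-up graph of six "objects": 0-2 are visually similar, 3-5 are similar.
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
seeds = np.array([1.0, 0, 0, 0, 0, 0])   # the EEG/pupil classifier flagged object 0
print(np.round(propagate_labels(W, seeds), 3))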

 

Biography:

Paul Sajda is Professor of Biomedical Engineering and Radiology at Columbia University and Director of the Laboratory for Intelligent Imaging and Neural Computing (LIINC). His research focuses on neural engineering, neuroimaging, computational neural modeling and machine learning applied to image understanding. Prior to Columbia he was Head of the Adaptive Image and Signal Processing Group at the David Sarnoff Research Center in Princeton, NJ. He received his B.S. in Electrical Engineering from MIT and his M.S. and Ph.D. in Bioengineering from the University of Pennsylvania. He is a recipient of the NSF CAREER Award and the Sarnoff Technical Achievement Award, and is a Fellow of the IEEE and the American Institute for Medical and Biological Engineering (AIMBE). He is also the Editor-in-Chief of the IEEE Transactions on Neural Systems and Rehabilitation Engineering and a member of the IEEE Technical Committee on Neuroengineering. He has been involved in several technology start-ups and is a co-founder and Chairman of the Board of Neuromatters, LLC, a neurotechnology research and development company.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Nima Bigdely Shamlo: Integration of EEG Source Dynamics in and Across Studies (02/27/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Swartz Center for Computational Neuroscience
http://sccn.ucsd.edu


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Integration of EEG Source Dynamics in and Across Studies

In this talk I present a set of methods that enable the calculation of EEG source dynamics at the subject level and analyses of this information within and across studies. I explore different methods to extract better EEG measures from individual subjects: regression to reduce confounds originating from the temporal proximity of cognitive events, optimal low-pass filtering to calculate better ERPs, and collaborative averaging to obtain better measures from small numbers of trials. I also introduce two methods for combining source-based EEG information, calculated with ICA and equivalent-dipole localization, across subjects in a study: Measure Projection Analysis (MPA) allows study-level analysis of measures, such as ERP and ERSP, that are associated with single brain areas, while Network Projection Analysis enables combining network measures, such as effective connectivity, that are associated with ordered pairs of brain areas.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Alexander Terekhov: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies (02/20/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Laboratory of Psychology of Perception, Paris Descartes University (Paris 5)
http://lpp.psycho.univ-paris5.fr


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies

The brain sitting inside its bony cavity sends and receives myriads of sensory inputs and outputs. A problem that must be solved either in ontogeny or phylogeny is how to extract the particular characteristics within this "blooming buzzing confusion" that signal the existence and nature of physical space, with structured objects immersed in it, among them the agent's body. The idea that spatial knowledge must be extracted from the sensorimotor flow in order to underlie perception has been considered by a number of thinkers, including Helmholtz, Poincare, Nicod, Gibson, etc. However, little work has considered how this could actually be done by organisms without a priori knowledge of the nature of their sensors and effectors. Here we show how an agent with arbitrary sensors will naturally discover spatial knowledge from the undifferentiated sensorimotor flow. The method first involves tabulating sensorimotor contingencies, that is, the laws linking sensory and motor variables. Second, further laws are created linking these sensorimotor contingencies together. The method works without any prior knowledge about the structure of the agent's sensors, body, or of the world. We show that the extracted laws endow the agent with basic spatial knowledge, manifesting itself through perceptual shape constancy and the ability to do path integration. We further show that the ability of the agent to learn all spatial dimensions depends on the ability to move in all these dimensions, rather than on possessing a sensor that has that dimensionality. This latter result suggests, for example, that three dimensional space can be learned in spite of the fact that the retinas are two-dimensional. We conclude by showing how the acquired spatial knowledge paves the way to building the notion of object.

 

Joint work with J. Kevin O'Regan ERC FEEL Project:

http://lpp.psycho.univ-paris5.fr/feel/

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Katja Lindenberg: Non-Equilibrium Thermodynamics (01/30/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Chemistry and Biochemistry
http://hypatia.ucsd.edu/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

A variety of simple model systems provide a theoretical testbed for a thorough characterization of the efficiency of thermodynamic systems operating at maximum power (i.e., away from equilibrium), and also for the characterization of fluctuations in small thermodynamic systems in non-equilibrium steady states. These models are particularly attractive because they can be explored analytically. Starting with idealized single quantum dot devices, we will present several such systems in a variety of operational modes. Our goal is to understand universal properties beyond the linear response regime.
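
For context, a standard benchmark in this literature (stated as general background, not as one of the speaker's results) is the Curzon-Ahlborn efficiency at maximum power of an engine operating between reservoirs at temperatures T_h > T_c; its expansion exhibits the universal linear-response term eta_C/2:

    \eta_{\mathrm{CA}} \;=\; 1 - \sqrt{\frac{T_c}{T_h}}
                       \;=\; \frac{\eta_C}{2} + \frac{\eta_C^2}{8} + O(\eta_C^3),
    \qquad \eta_C = 1 - \frac{T_c}{T_h}.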

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Tony Bell: Learning And Energetics In Dynamical Systems (01/16/2014)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Redwood Center for Theoretical Neuroscience, UC Berkeley
http://redwood.berkeley.edu/wiki/Tony_Bell


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

In this "real" chalk talk I will present new results and work in progress on (1) likelihood-based machine learning in dynamical systems; (2) entropy production in dynamical systems; and (3) possible connections between the three hitherto separate domains of machine learning, dynamical systems and non-equilibrium statistical mechanics. I will also present a survey of the concepts that we need to integrate to create an ambitious synthesis of these fields.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

 

Fall 2013

Vikash Gilja: Towards Clinically Viable Neural Prosthetic Systems (11/21/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Electrical and Computer Engineering, UCSD
http://www.ece.ucsd.edu


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Brain-machine interfaces (BMIs) translate neural activity into control signals for guiding prosthetic devices, such as computer cursors and robotic limbs, offering disabled patients greater interaction with the world. BMIs have recently demonstrated considerable promise in proof-of-concept animal experiments and in human clinical trials. However, a number of challenges for successful clinical translation remain, including system performance and robustness across time and behavioral contexts.

In this talk I will address these challenges by describing two classes of BMI experiments. For the first class of experiments, I will describe a study with rhesus monkeys and the recent translation of study results to a human participant. In these experiments we record from neurons in motor cortex using chronically implanted electrode arrays and focus on control algorithm design. Through real-time closed-loop BMI experiments we demonstrate methods that increase performance and improve robustness. In the second class of experiments, we develop and verify a set of novel wireless neural recording systems, enabling the study of neural activity for longer time periods and across more complex behaviors.
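
Decoders in this line of work are often built around a linear state-space model of cursor kinematics driven by binned spike counts. The sketch below is a generic Kalman-filter update step for such a decoder, written with generic matrix names; it is illustrative only and not the specific algorithm used in these studies:

    import numpy as np

    def kalman_decode_step(x, P, z, A, W, C, Q):
        """One update of a generic Kalman-filter cursor decoder.

        x : (k,) current kinematic state estimate (e.g. position and velocity)
        P : (k, k) state covariance
        z : (m,) neural feature vector for this time bin (e.g. binned spike counts)
        A, W : state-transition matrix and process-noise covariance
        C, Q : observation (tuning) matrix and observation-noise covariance
        """
        x_pred = A @ x                          # predict the kinematics forward one bin
        P_pred = A @ P @ A.T + W
        S = C @ P_pred @ C.T + Q                # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
        x_new = x_pred + K @ (z - C @ x_pred)   # correct with the neural observation
        P_new = (np.eye(len(x)) - K @ C) @ P_pred
        return x_new, P_new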

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Geert Schmid-Schönbein: Autodigestion: A Basis for Inflammation and Disease (11/14/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Bioengineering, UCSD
http://microcirculation.ucsd.edu/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Autodigestion: A Basis for Inflammation and Disease

There is increasing evidence that markers for inflammation accompany virtually all diseases, including stroke and chronic neuronal and retinal degenerative diseases. Inflammation is fundamentally a tissue repair mechanism, and thus the question arises as to what causes the tissue injury in conditions that lead to inflammation. I will discuss this fundamental question in the case of shock, sepsis and multi-organ failure. Shock kills hundreds of thousands of people each year in the US alone, and there is no treatment other than alleviation of symptoms. The markers for inflammation in shock are severe and in short order lead to cell and organ failure. The cause is currently unknown in spite of many ideas put forward, e.g. involvement of intestinal bacteria and their toxins, the secondary products they generate (e.g. cytokines, complement), or depletion of metabolites.

Even early observations and studies indicated that in critically ill patients the intestine plays a central role. Hippocrates stated: "Disease begins in the gut." Ask yourself: how is it possible that you can digest, for example, a sausage whose casing is made of intestine, yet not digest your own intestine? How did nature solve this problem?

The powerful digestive enzymes synthesized by the pancreas are transported to and fully activated in the intestine as part of normal food digestion. They need to be compartmentalized inside the lumen of the intestine as a requirement for normal digestion. Containment of digestive enzymes in the lumen of the intestine is provided by the mucosal barrier. This barrier is made up of a layer of mucin and the intestinal epithelium, and usually has low permeability for digestive enzymes. But should the mucosal barrier break down in shock, the digestive enzymes leak into the wall of the intestine and start an autodigestion process, causing extensive tissue damage. The digestive enzymes also generate small-molecular-weight cytotoxic mediators, which together with the digestive enzymes are transported into the systemic circulation via the portal venous system, the intestinal lymphatics and even through the peritoneum. The mixture of digestive enzymes and their fragments causes cell and organ dysfunction, even in remote organs, to the point of complete cell death and organ failure. We have demonstrated that blockade of digestive enzymes in the lumen of the intestine in experimental forms of shock reduces breakdown of the mucosal barrier, autodigestion of the intestine, organ dysfunction, and mortality.

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Tim Mullen: Real-Time Modeling, Classification, and 3D Visualization of Neuronal Source Dynamics and Connectivity using High-Density Wearable EEG (10/31/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Swartz Center for Computational Neuroscience, INC, UCSD
http://sccn.ucsd.edu/wiki/SIFT


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Real-Time Modeling, Classification, and 3D Visualization of Neuronal Source Dynamics and Connectivity using High-Density Wearable EEG

Dynamic cortico-cortical interactions are central to neuronal information processing. The ability to monitor these interactions in real time may prove useful for Brain-Computer Interface (BCI) and other applications, providing information not obtainable from univariate measures such as band power and evoked potentials. Wearable (mobile, unobtrusive) EEG systems likewise play an important role in BCI applications, affording data collection in a wider range of environments. However, reliable real-time modeling of neuronal source dynamics in mobile settings faces challenges, including mitigating artifacts and maintaining fast computation and good modeling performance with limited amounts of data. Furthermore, prediction of mental and behavioral states from high-dimensional spatio-spectro-temporal connectivity parameters poses additional challenges. Here we describe recent efforts to address these challenges using novel developments in wearable hardware, signal processing, and machine learning. We hope this will ultimately contribute to the development of EEG as a mobile neuroimaging modality.
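
For concreteness, the modeling step can be illustrated offline by fitting a short vector autoregressive (VAR) model to a window of source signals by least squares and reading coupling strengths off the coefficient matrices. This is a toy sketch, not the adaptive estimators developed in this work:

    import numpy as np

    def fit_var(X, p=2):
        """Ordinary least-squares fit of a VAR(p) model.  X is (n_samples, n_channels)."""
        n, k = X.shape
        Y = X[p:]                                                   # targets
        Z = np.hstack([X[p - j - 1:n - j - 1] for j in range(p)])   # lagged regressors
        B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        resid = Y - Z @ B
        # A[j][i, m] is the lag-(j+1) influence of channel m on channel i
        A = [B[j * k:(j + 1) * k].T for j in range(p)]
        return A, resid

    # toy usage: channel 1 is driven by the previous sample of channel 0
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))
    X[1:, 1] += 0.7 * X[:-1, 0]
    A, _ = fit_var(X, p=2)
    print(np.round(A[0], 2))        # entry [1, 0] should be close to 0.7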

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Roozbeh Jafari: Brain Computer Interface: An Embedded Signal Processing Perspective (10/17/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
University of Texas at Dallas
http://www.essp.utdallas.edu


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract: Brain Computer Interface: An Embedded Signal Processing Perspective

Most clinical, wellness, and entertainment applications of BCI require wearable and portable devices. The enhanced wearability of the BCI system, along with the user's comfort and quality of experience, plays an important role in the adoption of this new technology for various applications. The next generation of BCI systems will benefit from cross-layered optimization techniques spanning electrode and analog front-end (AFE) optimization, hardware architecture exploration, signal processing, and BCI paradigm development, all targeted towards enhancing system usability. In this talk, we will highlight several techniques developed for electrode optimization and noise reduction using AFE-assisted feedback. We will discuss BCIBench, a benchmarking suite which includes a wide range of algorithms used for pre-processing, feature extraction and classification in BCI applications. We will provide insights into architectural components that can enhance the performance and reduce the power consumption of BCI systems. We will discuss several BCI signal processing techniques that can benefit from tight coupling with the AFE. We will present a number of novel BCI paradigms that enhance the transfer rate over classic paradigms. We will conclude the talk by highlighting the need for system-level and holistic approaches to enhancing the performance and usability of next-generation BCI systems.

 

Biography:

Roozbeh Jafari is an associate professor at UT-Dallas. He received his PhD in Computer Science (UCLA) and completed a postdoctoral fellowship at UC-Berkeley. His research interest lies in the area of wearable computer design and signal processing. His research has been funded by the NSF, NIH, DoD (TATRC), AFRL, AFOSR, DARPA, SRC and industry (Texas Instruments, Tektronix, Samsung & Telecom Italia). He has published over 100 papers in refereed journals and conferences. He has served as technical program committee chair for several flagship conferences in the area of Wireless Health and Wearable Computers, including ACM Wireless Health 2012, the International Conference on Body Sensor Networks 2011 and the International Conference on Body Area Networks 2011. He is an associate editor for the IEEE Sensors Journal and the IEEE Internet of Things Journal. He is the recipient of the NSF CAREER award (2012) and the RTAS 2011 best paper award.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Terry Sejnowski: Connecting the dots on the brain initiative (10/03/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Salk Institute for Biological Studies
Institute for Neural Computation, UCSD
http://cnl.salk.edu/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Connecting the dots on the brain initiative

 

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

 

Spring 2013

Shaya Fainman: nanophotonics technology and applications (06/06/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
UCSD Department of Electrical and Computer Engineering
http://emerald.ucsd.edu/

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

nanophotonics technology and applications

Various future system applications that involve photonic technology rely on our ability to integrate it on a chip to augment and/or interact with other signals (e.g., electrical, chemical, biomedical, etc.). For example, future computing and communication systems will need integration of photonic circuits with electronics, and thus require miniaturization of photonic materials, devices and subsystems. Another example involves integration of microfluidics with nanophotonics, where the former is used for particle manipulation, preparation and delivery, and the latter, in a large-scale array form, for parallel detection of numerous biomedical reactions useful for healthcare applications. To advance nanophotonics technology we established design, fabrication and testing tools. The design tools need to incorporate not only the electromagnetic equations, but also the material and quantum physics equations to include near-field interactions. These designs are integrated with device fabrication and characterization to validate the device concepts and optimize their performance. Our research emphasizes the construction of passive (e.g., engineered composite metamaterials, filters, etc.) and active (e.g., nanolasers) components on-chip, with the same lithographic tools as electronics. In this talk, we discuss some of the passive metamaterials and devices that have recently been demonstrated in our lab. These include our most recent results on a monolithically integrated short-pulse compressor realized on an SOI material platform, and the design, fabrication and testing of nanolasers constructed using metal-dielectric-semiconductor resonators confined in all three dimensions.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Henry Abarbanel: Nervous Systems From The Bottom Up (05/23/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Physics, UCSD and
Scripps Institution of Oceanography

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Nervous Systems From The Bottom Up

Methods for transferring information from experiments to models have been given an exact statistical physics setting. Using this framework we analyzed data from experiments on individual neurons. We will discuss ideas for extending this to experiments on networks, now being designed for execution in the Margoliash laboratory at the University of Chicago.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Miroslav Krstic: Extremum Seeking and Learning in Adversarial Networks (05/09/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Associate Vice Chancellor for Research
Director, Cymer Center for Control Systems and Dynamics
Daniel L. Alspach Endowed Chair in Dynamic Systems and Control
http://flyingv.ucsd.edu/

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Extremum Seeking and Learning in Adversarial Networks

Extremum seeking (ES) is a method for real-time non-model-based optimization, though it can also be viewed as a form of data-based (black box) learning. ES was invented in 1922, but the past decade has been its golden age, both in terms of the development of theory and in terms of penetration into industry and into fields outside of control engineering. An extremum seeker is a dynamical system whose state is the parameter vector with which the optimization is being conducted. ES researchers work on designing such dynamical systems and on studying their convergence (typically in continuous time, using averaging theory). An extremum seeker uses only the measurement of the performance index (without knowing the functional dependence of the performance index on the parameter vector) and employs perturbation signals - either periodic or stochastic - in the process of learning (similar to "mutations" in genetic algorithms). After a historical overview, I will present recent ES designs that provably converge to Nash equilibria in noncooperative games. As I will illustrate, extremum seeking is a natural way to explain how E. coli or fish seek food - the former using stochastic perturbations and the latter using deterministic perturbations. In other words, ES reverse engineers the feedback algorithms used by such organisms.
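
The basic loop behind a perturbation-based extremum seeker can be written in a few lines: perturb the parameter sinusoidally, measure the unknown performance index, demodulate, and integrate. This is a minimal single-parameter, discrete-time sketch with hypothetical gains; the washout (high-pass) filter used in standard designs is omitted for brevity:

    import numpy as np

    def extremum_seek(J, theta0=0.0, a=0.3, omega=5.0, k=2.0, dt=0.01, steps=20000):
        """Gradient-free maximization of a scalar performance index J(theta)."""
        theta = theta0
        for n in range(steps):
            dither = a * np.sin(omega * n * dt)
            y = J(theta + dither)              # only the measured index value is used
            theta += dt * k * dither * y       # demodulate and integrate the gradient estimate
        return theta

    # toy usage: the (unknown) index has its maximum at theta = 2
    print(extremum_seek(lambda th: -(th - 2.0) ** 2))   # should settle near 2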

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

David Kleinfeld: Coupled Brainstem Sensorimotor Oscillators (04/25/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Section of Neurobiology and Department of Physics, UCSD
http://physics.ucsd.edu/neurophysics/

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

*** NOTE SPECIAL TIME ***

Time: 2:30pm-3:30pm

 

Title/Abstract:

Coupled Brainstem Sensorimotor Oscillators

Whisking and sniffing are predominant aspects of exploratory behaviour in rodents. Yet the neural mechanisms that generate and coordinate these and other orofacial motor patterns remain largely uncharacterized. We use anatomical, behavioural, electrophysiological and pharmacological tools to show that whisking and sniffing are coordinated by respiratory centres in the ventral medulla. We delineate a distinct region in the ventral medulla that provides rhythmic input to the facial motor neurons that drive protraction of the vibrissae. Neuronal output from this region is reset at each inspiration by direct input from the pre-Bötzinger complex, such that high-frequency sniffing has a one-to-one relationship with whisking, whereas basal respiration is accompanied by intervening whisks that occur between breaths. We conjecture that the respiratory nuclei, which project to other premotor regions for oral and facial control, function as a master clock for behaviours that coordinate with breathing. Work with Martin Deschenes and Jeffrey Moore.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Eric Halgren: Cortical Dynamics Of Word Understanding (04/11/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Neurosciences, and
Multimodal Imaging Laboratory, UCSD
http://mmil.ucsd.edu/


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Cortical Dynamics Of Word Understanding

Despite 150 years of scientific investigation, fundamental issues in word understanding remain unresolved. For example: Is acousto-phonetic processing affected by the lexico-semantic context (i.e., does expecting a particular word bias how we transform a sound into phonemes)? Do written words have to be re-coded phonologically before lexical access (i.e., do we have to mentally sound-out a word before we can understand it)? Does lexical access precede semantic encoding (i.e., do we first have to know what word it is before we can access its meaning)? These questions critically concern the dynamics of neural information processing, which can be observed non-invasively with magnetoencephalography (MEG), as well as invasively with local field potential and single unit recordings in patients. I will argue that these data indicate that the answers to the questions posed above are: No, Maybe, and No.

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

 

Winter 2013

Sascha du Lac: Cerebellar Prediction and Learning Mechanisms and Implications (03/28/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Systems Neurobiology, Salk Institute
http://www.snl-d.salk.edu


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Cerebellar Prediction and Learning Mechanisms and Implications

Success in a complex world requires learning, prediction, and action. The cerebellum of humans and other vertebrate animals contains over half of the brain's neurons, which are devoted to optimizing prediction and action over rapid timescales (< 500 msec). Remarkably, this vast computational power influences the rest of the brain solely via convergence of cerebellar Purkinje cell inhibitory synapses onto a relatively tiny number of neurons in downstream cerebellar and vestibular nuclei. In this seminar, I will discuss surprising new findings from our laboratory and others about microcircuits and mechanisms responsible for dynamically adaptive cerebellar control of cognition, physiological regulation, and movement.

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Tatyana Sharpee: Characterizing Neural Feature Selectivity And Invariance Using Natural Stimuli (03/14/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Computational Neurobiology Laboratory
Helen McLoraine Developmental Chair in Neurobiology
Salk Institute for Biological Studies
http://cnl-t.salk.edu


Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 1:30pm-2:30pm

 

Title/Abstract:

Characterizing Neural Feature Selectivity And Invariance Using Natural Stimuli

In this talk I will describe a set of computational tools for characterizing the responses of high-level sensory neurons. The goal is to describe, in as simple a way as possible, how the responses of these neurons signal the appearance of conjunctions of different features in the environment. The focus will be on computational methods that are designed to work with stimuli derived from the natural sensory environment. Some of the new methods that I will discuss characterize neural feature selectivity while assuming that the neural responses exhibit a certain type of invariance, such as position invariance for visual neurons. Other methods do not require one to assume invariance, and instead can determine the type of invariance by analyzing the relationships between the multiple stimulus features that affect the neural responses. I will discuss the relative advantages and limitations of these computational tools and illustrate their performance using model neurons as well as recordings from the visual system.
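
The simplest baseline estimator in this setting (mentioned as background, not as one of the methods in the talk) is the spike-triggered average, which must be decorrelated when the stimuli have natural, correlated statistics; a minimal sketch:

    import numpy as np

    def whitened_sta(stimuli, spike_counts, ridge=1e-3):
        """Estimate a single linear filter from responses to correlated stimuli.

        stimuli      : (n_frames, n_dims) stimulus frames (e.g. image patches)
        spike_counts : (n_frames,) response of the neuron to each frame
        ridge        : regularizer; natural-stimulus covariances are ill-conditioned
        """
        X = stimuli - stimuli.mean(axis=0)
        sta = X.T @ spike_counts / spike_counts.sum()       # raw spike-triggered average
        C = np.cov(X, rowvar=False) + ridge * np.eye(X.shape[1])
        return np.linalg.solve(C, sta)                      # decorrelated filter estimate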

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Anthony Lewis: Locomotion, Perception, and Neurorobotic Models (02/28/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Qualcomm Inc.
http://www.qualcomm.com


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Behavior is an expression of the interaction between the body, the brain and the environment. Neurorobotics provides a tool that can be used to model this interaction. In the neurorobotic paradigm, a biologically plausible model acts through a robotic body to interact with the world.

In this talk I will explore several themes centered on locomotion: generation of locomotion using spiking neurons, learning to walk using global and local cost functions, and incorporation of vision, including stereopsis and optic flow, to guide locomotion. I will end with a presentation of a physical model of the lower limbs of a human, including both mono-articular and bi-articular (acting on two joints) muscles, as well as load and position sensory feedback. This robot demonstrated how a relatively small network of spiking neurons and biologically realistic dynamics could yield a remarkably human-like gait.

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Joaquin Rapela: Predictive Modeling of Physiological Systems: From Single Cells To Whole
Brains and Back (02/14/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Swartz Center for Computational Neuroscience, INC, UCSD
http://sccn.ucsd.edu/~rapela/


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Predictive Modeling of Physiological Systems: From Single Cells To Whole Brains and Back

Most existing techniques for characterizing physiological systems from input/output data use simplistic models and estimate their parameters from mathematically convenient, but behaviorally not very relevant, inputs. However, recent increases in computational power and advances in statistics now make possible new techniques that use more complex models and estimate their parameters from richer stimuli. In this talk I will describe two such techniques. I will first introduce the Extended Projection Pursuit Regression algorithm (ePPR, Rapela et al. 2010) for the nonlinear characterization of response properties of single cells from high-dimensional stimuli with naturalistic (and correlated) statistics. I will present new results showing that ePPR reveals, for the first time, quadrature pairs of inhibitory filters in the responses of complex cells in cat primary visual cortex to natural images. Most techniques for characterizing single cells are predictive (i.e., the quality of the estimated models is determined by how well they predict cell responses). However, the majority of methods for characterizing EEG data are NOT predictive, in spite of the rich behavioral data that could be predicted in EEG experiments. In the second part of this talk I will present the results of using a predictive technique, similar to ePPR, to characterize the brain dynamics of humans performing an audio-visual target-detection task. These results show a very high correlation between subjects' behavior (both error rates and reaction times) and the modulation of alpha activity (both amplitude and phase) accounted for by the predictive model.

This finding has interesting implications. Scientifically, it adds new supportive evidence to recent research on the link between alpha rhythms and behavior [Mathewson et al. 2011], and to recent theories relating alpha synchronization to top-down inhibitory control [Klimesch et al. 2007]. Methodologically, the non-linear, multivariate and predictive model used in this work opens a new way to analyze EEG data and contributes a strong example to recent applications of multivariate predictive models to EEG analysis [Pernet et al. 2011]. In addition, this finding has translational applications, since alpha power could be modulated as predicted by the model to improve subjects' behavior (using SSVEP or rTMS, as shown by [Mathewson et al. 2010] and [Hamidi et al. 2009], respectively).
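
For reference, the generic projection pursuit regression model that ePPR extends approximates the response as a sum of one-dimensional nonlinearities applied to learned linear projections of the stimulus (shown here only as background; both the filters w_m and the functions g_m are estimated from data):

    r(\mathbf{s}) \;\approx\; g_0 \;+\; \sum_{m=1}^{M} g_m\!\left(\mathbf{w}_m^{\top}\mathbf{s}\right)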

Biography: Joaquin Rapela completed his undergraduate degree in Computer Science at the University of Buenos Aires, Argentina. After working at the IBM Almaden Research Center, San Jose, CA, as a Staff Software Engineer, he completed his PhD in Electrical Engineering at the University of Southern California, where he developed and applied signal processing tools to characterize responses of visual cells. He was jointly advised by Prof. Norberto Grzywacz (Neuroscience) and Prof. Jerry Mendel (Engineering). Since November 2010 Joaquin has been working at the Swartz Center for Computational Neuroscience, characterizing the brain dynamics of attention with EEG, and those related to eye movements with EEG and eye tracking.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Zeynep Akalin Acar: Solving The Forward And Inverse Problem In EEG Source Analysis (01/17/2013)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Swartz Center for Computational Neuroscience, INC, UCSD
http://sccn.ucsd.edu/~zeynep/


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Solving The Forward And Inverse Problem In EEG Source Analysis

Localization of brain activity using EEG measurements is called electric source imaging (ESI). ESI is important in both clinical and basic brain research. The calculation of the scalp potentials for a specific dipole configuration is the forward problem of ESI. Complementarily, the inverse problem is the localization of the sources based on the measured potentials and the forward calculations. The three most important components of a successful source localization approach are: (a) an electric forward head model for the subject, (b) a ('source space') model of possible source locations, and (c) an inverse source localization method. In this talk, I will give brief definitions of forward and inverse EEG problem solutions and present our simulation studies, based on realistic individual-subject forward head models, investigating source localization errors produced by inaccuracies introduced by the use of template head models, inaccurate skull conductivity estimates, imprecise electrode co-registration, and low electrode numbers. Results show that when individual subject MR head images are not available to construct subject-specific head models, accurate EEG source localization should employ a four- or five-layer BEM template head model incorporating an accurate skull conductivity estimate and warped to 64 or more accurately 3-D measured and co-registered electrode positions.
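
In compact form (generic notation, not the specific BEM formulation used here), the forward problem maps source currents to scalp potentials through a lead-field matrix computed from the head model, and one common inverse, the regularized minimum-norm estimate, maps measurements back to sources:

    \mathbf{v} \;=\; \mathbf{L}\,\mathbf{j} + \mathbf{n},
    \qquad
    \hat{\mathbf{j}} \;=\; \mathbf{L}^{\top}\!\left(\mathbf{L}\mathbf{L}^{\top} + \lambda\mathbf{I}\right)^{-1}\mathbf{v},

where v are the scalp potentials, L the lead field, j the source currents, n the noise, and lambda a regularization parameter; errors in the head model enter through L and therefore bias any inverse method built on it.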

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Fall 2012

Angela Yu: Decisions, Decisions, Decisions! (11/29/2012)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Cognitive Science, UCSD
http://www.cogsci.ucsd.edu/~ajyu


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Decisions, Decisions, Decisions!

Decision theory is a powerful formal framework for understanding how noisy inputs can be translated into concrete actions. Using tools from Bayesian statistical inference and stochastic control theory, my work has shown that many behavioral and neural phenomena in perception, action, and cognition can be understood as rational decision-making by the brain at different timescales and levels of abstraction. In this talk, I will give an overview of my modeling and experimental work that uses decision-theoretic concepts to understand the formal link between neurophysiology and behavior in perception, attention, inhibitory control, and action planning.
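
A minimal instance of this framework is sequential accumulation of evidence to a decision threshold (a generic SPRT-style sketch, not a model from the talk; the likelihoods and threshold below are hypothetical):

    import numpy as np

    def sequential_decision(observations, p1, p0, threshold=0.95, prior=0.5):
        """Accumulate noisy binary evidence until the posterior favors one hypothesis.

        observations : sequence of 0/1 samples
        p1, p0       : P(obs = 1) under hypotheses H1 and H0
        threshold    : posterior probability required to commit to a choice
        """
        post = prior                              # P(H1 | data so far)
        for t, x in enumerate(observations, 1):
            like1 = p1 if x else 1 - p1
            like0 = p0 if x else 1 - p0
            post = like1 * post / (like1 * post + like0 * (1 - post))
            if post >= threshold:
                return "H1", t, post
            if post <= 1 - threshold:
                return "H0", t, post
        return "undecided", len(observations), post

    # toy usage: samples drawn with P(1) = 0.7; test H1: p = 0.7 against H0: p = 0.4
    rng = np.random.default_rng(0)
    print(sequential_decision(rng.random(50) < 0.7, p1=0.7, p0=0.4))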

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

John R. Iversen: Neural Dynamics of BEAT Perception (11/15/2012)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
The Neurosciences Institute
http://www.nsi.edu/~iversen/


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Neural Dynamics of BEAT Perception

Our perceptions are jointly shaped by external stimuli and internal interpretation. The perceptual experience of a simple rhythm, for example, strongly depends upon its metrical interpretation (how one hears the basic beat or 'pulse' of the rhythm). This pulse is endogenously generated and has important consequences for perception, underlying a fundamental mode of temporal perception. The beat sets the origin and timescale for the perception of rhythm, and has strong cognitive advantages for recognition and recall of patterns. Interestingly, the internal pulse is not uniquely determined by the input stimulus, and instead can be altered at will, providing a model of the voluntary cognitive organization of perception. Where in the brain do the bottom-up and top-down influences in rhythm perception converge? Is it purely auditory, or does it involve other systems? I will present ongoing work aimed at understanding the neural mechanisms responsible for beat perception and metrical interpretation. In one experiment, we measured brain responses as participants listened to a repeating rhythmic phrase, using magnetoencephalography. In separate trials, listeners were instructed to mentally impose different metrical organizations on the rhythm by hearing the downbeat at one of three different phases in the rhythm. The imagined beat could coincide with a note, or with a silent position (yielding a syncopated rhythm). Since the stimulus was unchanged, observed differences in brain activity between the conditions should relate to active rhythm interpretation. Two effects related to endogenous processes were observed: First, sound-evoked responses were increased when a note coincided with the imagined beat. This effect was observed in the beta range (20-30 Hz), consistent with earlier studies. Second, and in contrast, induced beta responses were decoupled from the stimulus and instead tracked the time of the imagined beat. The results demonstrate temporally precise rhythmic modulation of brain responses that reflect the active interpretation of a rhythm. In the discussion we will consider our work in light of 'motor theories' of perception that posit a kind of analysis by synthesis. In the case of rhythm there is converging evidence for premotor activity when listening to rhythms with a beat in the absence of overt movement, suggesting a role for 'covert action' in shaping our perceptions of timing in sound.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Bertram Shi: Joint Development of Perception and Active Eye Movements (11/08/2012)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Department of Electronic and Computer Engineering and Division of
Biomedical Engineering Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
http://www.ee.ust.hk/~eebert/


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

Joint Development of Perception and Active Eye Movements

Rather than explicitly programming a robot, might it be possible to seed a robot with a minimal structure and allow it to learn how to behave intelligently in the environment, much in the same way a baby develops? As a first step towards such a system, we must have models of the development of perception, the robot's internal representation of the environment based on its sensory input, and of the development of behavior, the generation of intelligent actions based upon the perceived environment. Past work has studied these two problems in isolation. For example, it has been shown that a developmental algorithm based on sparse coding can account for the shape of the receptive fields of visual neurons in the mammalian brain. Reinforcement learning has been used to model the development of behavior. However, this isolated viewpoint ignores the fact that behavior and sensory perception are mutually dependent. Sensory perception drives behavior, but behavior can also influence the development of sensory perception by altering the statistics of the sensory input. Thus, there is a "chicken-and-egg" problem as to which arises first. Indeed, it is likely that they develop simultaneously. But how should these two learning processes interact? What constraints do we need to put into place to ensure that the learning succeeds in generating intelligent behavior? I will describe joint work with Jochen Triesch at the Frankfurt Institute for Advanced Studies, which addresses these problems by modeling the joint development of visual perception and the control of eye movements. In particular, I describe our work in modeling the interaction between the development of the neural representation of binocular disparity and the development of a binocular vergence eye-movement control policy to maintain fixation.
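
One way to make the coupling concrete is sketched below. It rests on an assumption of mine rather than on the talk itself: the reinforcement signal for the eye-movement policy is taken to be the efficiency of the learned sensory code (its reconstruction error), and all dimensions, commands, and constants are hypothetical. A toy dictionary learner and a toy vergence policy then adapt together around that shared objective:

    import numpy as np

    rng = np.random.default_rng(0)

    n_dim, n_atoms, n_actions = 16, 8, 5          # input size, dictionary size, vergence commands
    D = rng.normal(size=(n_dim, n_atoms))         # sensory dictionary, adapted online
    D /= np.linalg.norm(D, axis=0)
    Q = np.zeros(n_actions)                       # value of each vergence command

    # toy world: only the 'correct' command (index 2) yields structured, fusable input
    patterns = rng.normal(size=(n_dim, 4))
    patterns /= np.linalg.norm(patterns, axis=0)

    def observe(action):
        if action == 2:
            return patterns[:, rng.integers(4)] + 0.05 * rng.normal(size=n_dim)
        return rng.normal(size=n_dim)             # misaligned eyes -> unstructured input

    for step in range(5000):
        a = rng.integers(n_actions) if rng.random() < 0.1 else int(np.argmax(Q))
        x = observe(a)
        k = int(np.argmax(np.abs(D.T @ x)))       # best-matching dictionary atom
        c = D[:, k] @ x
        residual = x - c * D[:, k]
        reward = -residual @ residual             # coding efficiency as the reward
        Q[a] += 0.05 * (reward - Q[a])            # update the vergence policy
        D[:, k] += 0.01 * c * residual            # Hebbian-style dictionary update
        D[:, k] /= np.linalg.norm(D[:, k])

    print("preferred vergence command:", int(np.argmax(Q)))   # should be 2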

 

Bio: Bertram E. Shi received the B.S. and M.S. degrees in electrical engineering from Stanford University, Stanford, CA, USA in 1987 and 1988. He received the Ph.D. degree in electrical engineering from the University of California, Berkeley, CA, USA in 1994. He then joined the faculty of the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology, Kowloon, Hong Kong. He is currently a Professor in the ECE department and the Division of Biomedical Engineering. His research interests are in bio-inspired signal processing and robotics, neuromorphic engineering, computational neuroscience, machine vision, image processing, and hardware implementations of neural networks. Prof. Shi is an IEEE Fellow and has twice served as Distinguished Lecturer for the IEEE Circuits and Systems Society. He is an Associate Editor for the IEEE Transactions on Biomedical Circuits and Systems, as well as for Frontiers in Neuromorphic Engineering.

 

Sponsored by:
Brain Corporation, http://www.braincorporation.com
Qualcomm Corporation, http://www.qualcomm.com

 

Christian Kothe: BCILAB and applications to EEG cognitive interfaces (11/01/2012)

Sponsor: Institute for Neural Computation Chalk Talk Series

Affiliation:
Swartz Center for Computational Neuroscience
Institute for Neural Computation, UCSD
http://sccn.ucsd.edu/wiki/BCILAB


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 12:30pm-1:30pm

 

Title/Abstract:

BCILAB and applications to EEG cognitive interfaces

Chalk Talk Video #1 https://www.youtube.com/watch?v=w8Z3b_aftco
Chalk Talk Video #2 https://www.youtube.com/watch?v=YUB0vxNmm2w

With an increasingly deep understanding of neuroscience, and of disciplines such as statistical inference and optimization, developing in parallel with rapid progress in sensor engineering and high-performance yet low-cost computation, comes the ability to interface the human nervous system to the world of machines. In this chalk talk I will discuss the BCILAB toolbox, a MATLAB toolbox for the rapid design, prototyping and evaluation of EEG-based brain-computer interfaces and other types of cognitive interfaces, which at present is one of the most comprehensive such systems in terms of the number of methods implemented. Some of its key design choices and features will be explained in detail, as will a small selection of state-of-the-art algorithms and applications enabled by those algorithms under favorable conditions. I conclude with a brief overview of the larger ecosystem in which BCILAB exists, including our new multi-modal data acquisition platform known as the lab streaming layer, and with an outlook on future directions, such as expansion into online connectivity measures and motion analysis via the SIFT and MoBILAB toolboxes, respectively.

 

Host: Scott Makeig, smakeig@ucsd.edu

Steve Grossberg: Laminar Cortical Dynamics Of Visual Perception, Attention, Recognition, And Consciousness (10/25/2012)

Sponsor: Institute for Neural Computation Chalk Talk Series, and Temporal Dynamics of Learning Center Seminar Series

Affiliation:
Wang Professor of Cognitive and Neural Systems
Center for Adaptive Systems, Center for Computational Neuroscience and
Neural Technology, and Departments of Mathematics, Psychology, and
Biomedical Engineering
Boston University, Boston, MA 02215
steve@bu.edu
http://cns.bu.edu/~steve


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time:
Seminar: 12:30pm-1:30pm
Chalk talk/Q&A session: 1:30pm-2pm

 

Title/Abstract:

Laminar Cortical Dynamics Of Visual Perception, Attention, Recognition, And Consciousness

There has been a great deal of theoretical progress in clarifying how brains give rise to minds. This progress is illustrated by two new computational paradigms: Complementary Computing clarifies the nature of global brain specialization, whereas Laminar Computing clarifies why all neocortical circuits use variants of a shared layered architecture. Recent models of 3D vision and figure-ground separation, speech perception, and cognitive working memory and unitization all use variants of this laminar design. The talk will outline functional roles of identified cells in visual cortex that help the brain to see. It will propose functional links that occur during category learning between brain processes of consciousness, learning, expectation, attention, resonance, and synchrony, along with supportive behavioral and neurobiological data. The talk will suggest how a hierarchy of laminar cortical regions interacts with specific and nonspecific thalamic regions during category learning using spiking dynamics, STDP, local field potentials, and synchronous oscillations. It will then propose how the brain learns to bind multiple views of an object into a view-invariant object category while scanning a scene with eye movements. In particular, how does the brain avoid erroneously binding views of different objects together during unsupervised learning, and how do the eyes scan multiple views of an object even before we know what the object is? This analysis predicts how processes of spatial attention, object attention, category learning, figure-ground separation, and predictive remapping in cortical areas V1, V2, V3A, V4, ITp, ITa, PPC, LIP, and PFC interact during invariant object category learning.

 

Hosts: Gary Cottrell, gcottrell@cs.ucsd.edu and Gert Cauwenberghs, gert@ucsd.edu

 

Emre Neftci: Synthesizing Cognition In Neuromorphic VLSI Systems (10/18/2012)

Sponsor: Institute for Neural Computation

Affiliation:
Integrated Systems Neuroengineering Laboratory, and
Institute for Neural Computation
UCSD

Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 1230 -1330

 

Title/Abstract:

Synthesizing Cognition In Neuromorphic VLSI Systems

The hallmark of cognitive behavior is the ability to make economically advantageous choices based not only on immediately available data, but also on the longer time-scale context in which the choice is embedded. In this chalk talk, I will present a method for specifying such behaviors on a physical substrate of inherently imprecise and noisy neuromorphic VLSI circuits. The method casts the target behavior as a "soft" state-machine that is configured on an abstract, computational layer, composed of subnets of spiking neurons. The neuronal subnets are recurrently connected and thereby able to support reliable processing through active gain, signal restoration, and multistability. The desired states and transitions of the high-level behavior can be easily programmed into the computational layer by introducing only sparse connections between some neurons of the various subnets. This abstract layer is realized on the hardware substrate of silicon neuron circuits using a mapping between the parameters of the layer's model neurons, and the bias voltages of the underlying analog-digital electronic circuits. The configuration method is applied to a real-time CMOS VLSI neuromorphic system that performs task-dependent classification of motion patterns contained in the spike-event data generated by a silicon retina.

 

Spring 2012

Ken Kreutz-Delgado: deep belief networks for reinforcement learning (06/21/2012)

Sponsor: Institute for Neural Computation

Affiliation:

Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 1230 -1330

 

Title/Abstract:

deep belief networks for reinforcement learning

 

 

Tarik S Bel-Bahar: affective neuroscience: meta-analytic findings (06/07/2012)

Sponsor: Institute for Neural Computation

Affiliation:
Swartz Center for Computational Neuroscience
http://sccn.ucsd.edu


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 1230 -1330

 

Title/Abstract:

Affective Neuroscience: Meta-analytic Findings

We will begin with a brief review of major psychological models of emotion and their implications for cognitive-affective neuroscience. We will then move on to the primary findings from multiple recent brain imaging meta-analyses related to emotion, reward, emotional faces, and pain/empathy. The last half of the talk will consist of an open-ended discussion of the implications of the meta-analytic findings for research and theory.

 

 

Tom Bartol: How To Build A Synapse From Molecules, Membranes, And Monte Carlo Methods (05/24/2012)

Sponsor: Institute for Neural Computation

Affiliation:
Computational Neurobiology Laboratory
The Salk Institute for Biological Studies
http://cnl.salk.edu


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 1230 -1330

 

Title/Abstract:

How To Build A Synapse From Molecules, Membranes, And Monte Carlo Methods

Biochemical signaling pathways are integral to the information storage, transmission, and transformation roles played by neurons in the nervous system. Far from behaving as well-mixed bags of biochemical soup, the intra- and inter-cellular environments in and around neurons are highly organized reaction-diffusion systems, with some subcellular specializations consisting of just a few copies each of the various molecular species they contain. For example, glutamatergic synapses at dendritic spines of area CA1 hippocampal pyramidal cells contain perhaps 100 AMPA receptors, 20 NMDA receptors, 10 CaMKII complexes, and 5 free Ca++ ions in the spine head. Much experimental data has been gathered about the neuronal signaling pathways involved in processes such as synaptic plasticity, especially recently, thanks to new molecular probes and advanced imaging techniques. Yet, fitting these observations into a clear and consistent picture that is more than just a cartoon, but rather can provide biophysically accurate predictions of function, has proven difficult due to the complexity of the interacting pieces and their relationships. Gone are the days when one could do a simple thought experiment based on the known quantities and imagine the possibilities with any degree of accuracy. This is especially true of biological reaction-diffusion systems where the number of discrete interacting particles is small, the spatial relationships are highly organized, and the reaction pathways are non-linear and stochastic. Here I will present how biophysically accurate computational experiments performed on cell signaling pathways can be a powerful way to study such systems and can help formulate and test new hypotheses in conjunction with bench experiments. MCell is a Monte Carlo simulator designed for the purpose of simulating exactly these sorts of cell signaling systems. I will introduce fundamental concepts of cell signaling processes in the organized and compact spaces of synapses, and the insights that can be gained through building realistic models of neurotransmission.
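
As a cartoon of the particle-based approach (a toy illustration with arbitrary units and made-up parameters, not MCell's actual algorithm), one can track ligand molecules as Brownian walkers and let them bind with some probability whenever they reach a receptor-bearing patch of membrane:

    import numpy as np

    rng = np.random.default_rng(1)

    n_ligand, n_steps, dt, D_coef = 200, 5000, 1e-6, 1.0   # arbitrary units
    sigma = np.sqrt(2 * D_coef * dt)     # rms displacement per step, per axis
    box = 1.0                            # side length of a cubic volume
    p_bind = 0.2                         # binding probability per receptor encounter

    pos = rng.random((n_ligand, 3)) * box
    bound = np.zeros(n_ligand, dtype=bool)

    for _ in range(n_steps):
        free = ~bound
        pos[free] += rng.normal(scale=sigma, size=(free.sum(), 3))   # Brownian step
        pos = np.clip(pos, 0.0, box)                                 # clamp at the walls
        # a thin slab at the bottom face stands in for a receptor-covered membrane
        hit = free & (pos[:, 2] < 0.01)
        bound |= hit & (rng.random(n_ligand) < p_bind)

    print(f"{bound.sum()} of {n_ligand} molecules bound")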

 

Brendan Allison: The "B" of BCIs: How Cognitive Neuroscience Matters With P300 and Other
BCIs (05/10/2012)

Sponsor: Institute for Neural Computation

Affiliation:
Laboratory of Brain-Computer Interfaces
TU Graz, Austria
http://bci.tugraz.at/


Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 1230 -1330

 

Title/Abstract:

The "B" of BCIs: How Cognitive Neuroscience Matters With P300 and Other BCIs

In a (classically defined) brain-computer interface (BCI), users must perform voluntary mental tasks that each entail distinct patterns of brain activity. Hence, for cognitive neuroscientists, ongoing challenges include identifying, modifying, and testing these mental tasks. This talk will discuss this general challenge and then describe specific examples with one common type of BCIs called the P300 BCI.

 

Stefan Leutgeb: Spatial Processing and Map Learning in the Entorhino-Hippocampal Circuit (04/26/2012)

Sponsor: Institute for Neural Computation

Affiliation:
Section of Neurobiology
Division of Biological Sciences
http://biology.ucsd.edu/faculty/sleutgeb.html

Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 1230 -1330

Title/Abstract:

Spatial Processing and Map Learning in the Entorhino-Hippocampal Circuit

My laboratory is interested in identifying neuronal mechanisms for long-term memory storage at the systems level. Because specialized hippocampal circuitry is necessary for many forms of memory, we investigate the computations performed in a local circuit consisting of entorhinal inputs to the hippocampus and hippocampal outputs to the entorhinal cortex. In particular, our research asks which mechanisms generate hippocampal spatial firing patterns and how spatial firing patterns contribute to spatial memory. The input layers of the medial entorhinal cortex to the hippocampus contain many cell types with precise spatial firing patterns, including cells with grid-like spatial firing patterns (i.e., grid cells). We found that silencing the neuronal activity in the medial septal area abolishes theta oscillations and grid-like firing patterns in entorhinal cortex. Even though precise spatial and temporal firing patterns in entorhinal cortex and hippocampus are disrupted, we found that the spatial firing patterns of hippocampal cells are partially retained after septal inactivation. We therefore asked whether septal input to entorhinal cortex is particularly important for generating new spatial maps of environments. We find that the formation of new spatial maps is disrupted to a substantially larger extent than the retention of familiar maps. These findings have important implications for understanding how neurodegenerative processes in the entorhinal cortex can result in a failure to appropriately organize neuronal activity and synaptic plasticity, and thus in the memory problems that are characteristic of Alzheimer's disease.

 

Gary Cottrell: sparse PCA image encoding learns ganglion cell responses (04/12/2012)

Sponsor: Institute for Neural Computation

Affiliation: CSE, UC San Diego

Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 

Time: 1230 -1330

Title/Abstract:

sparse PCA image encoding learns ganglion cell responses


 

 

Winter 2012

Tim Mullen: Spatiotemporal Modeling Of Cortical Source Dynamics And Interactions During Epileptic Seizure (03/29/2012)

Sponsor: Institute for Neural Computation

Affiliation:
Swartz Center for Computational Neuroscience
Institute for Neural Computation, UCSD
http://www.antillipsi.net/

 

Time: 1230 -1330

Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 


Title/Abstract:

"Spatiotemporal Modeling Of Cortical Source Dynamics And Interactions During Epileptic Seizure"

Mapping the dynamics and spatial topography of the brain source processes critically involved in initiating and propagating seizure activity is essential for effective epilepsy diagnosis, intervention, and treatment. In this work we analyze neuronal dynamics before and during epileptic seizures using adaptive multivariate autoregressive (VAR) models applied to maximally independent (ICA) sources of intracranial EEG (iEEG, ECoG) data recorded from subdural electrodes implanted in a human patient for evaluation of surgery for epilepsy. We examine the spatial distribution on the cortical surface of causal sources and sinks of ictal activity using a novel combination of multivariate Granger causality and graph-theoretic metrics, and distributed multi-scale source localization using Sparse Bayesian Learning. Evidence from this analysis reveals multiple distinct ictal stages corresponding to shifts in inter-component spatiotemporal dynamics and connectivity structure in or near clinically identified epileptic foci before, during, and following seizures.
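
The connectivity measure can be illustrated in its simplest time-domain form for two signals: a source y 'Granger-causes' x if the past of y reduces the error of predicting x beyond what the past of x alone achieves. This is a generic sketch; the adaptive multivariate estimators and graph-theoretic metrics used in this work go well beyond it:

    import numpy as np

    def granger_2ch(x, y, p=5):
        """Log-ratio Granger causality from y to x, using AR models of order p."""
        def ar_resid_var(target, regressors):
            n = len(target)
            Z = np.column_stack([r[p - j - 1:n - j - 1] for r in regressors for j in range(p)])
            T = target[p:]
            b, *_ = np.linalg.lstsq(Z, T, rcond=None)
            return np.var(T - Z @ b)
        var_restricted = ar_resid_var(x, [x])       # predict x from its own past only
        var_full = ar_resid_var(x, [x, y])          # ...and from the past of y as well
        return np.log(var_restricted / var_full)    # > 0 means y helps predict x

    # toy usage: y drives x with a one-sample delay
    rng = np.random.default_rng(0)
    y = rng.normal(size=2000)
    x = np.zeros(2000)
    x[1:] = 0.8 * y[:-1] + 0.2 * rng.normal(size=1999)
    print(granger_2ch(x, y), granger_2ch(y, x))     # the first value should be much larger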

Jessie Peissig: Show Me Your Poker Face: Recognizing Emotional Expressions (03/15/2012)

Sponsor: Institute for Neural Computation

Affiliation:
Department of Psychology, California State University, Fullerton
http://psych.fullerton.edu/jpeissig/

Time: 1230 -1330

Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 


Title/Abstract:

"Show Me Your Poker Face: Recognizing Emotional Expressions"

I will discuss a database of faces showing genuine emotional expressions that we have collected and are working on validating. I will also present two studies that have used those faces, including one study comparing males and females and a second study looking at poker players.

 

Host: Gary Cottrell, gary@eng.ucsd.edu

 

Leanne Chukoskie: Movement Matters: Investigating Eye Movements And Dyspraxia In Autism (03/01/2012)

Sponsor: Institute for Neural Computation

Affiliation:
Computational Neurobiology Laboratory, Salk Institute
http://www.snl.salk.edu/~leanne/

Time: 1230 -1330

Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 


Title/Abstract:

"Movement Matters: Investigating Eye Movements And Dyspraxia In Autism"

The literature on looking behavior of individuals with autism is extensive, as is the literature on spatial attention differences in autism. Yet, we lack an understanding of the way in which lower level visual, motor and attentional mechanisms contribute to the biases in looking behavior often observed in individuals with autism. Similarly, although there is evidence for deficits in overall motor coordination in autism, this work has not been extended to include eye movement. To our knowledge, there have been no attempts to compare motor control of eye movement with gross motor coordination and ability to perform skilled gestures (praxis). These functions are of particular developmental importance, as early sensory and motor abilities provide a scaffold for higher level skills such as social communication. If eye movements are inaccurate or slow, social information is lost along with the opportunity to learn from that particular social situation.

Using a battery of tasks, we studied the interactions among eye movement, visual motor integration, visual perception and both fine and gross motor skills. We examined associations between various aspects of the tasks to test whether atypical looking behavior observed in natural settings might be affected by fundamental visual motor deficits. We tested children with autism spectrum disorders (ASD) and typically developing age- and performance IQ-matched school-aged children who were recruited from an existing sample of children enrolled in studies of neural and cognitive development.

I will describe the significant group differences we found in several tasks as well as correlations in performance across eye and body movement, as well as in perceptual tasks. Taken together, these results suggest a fresh perspective that may explain some of the difficulties observed with eye contact and visual search often found in individuals with ASD.

 

 

Jianxia Cui: Control of dynamics of excitable networks (02/16/2012)

Sponsor: Institute for Neural Computation

Affiliation:
BioCircuits Institute, UCSD
http://biocircuits.ucsd.edu/

Time: 1230 -1330

Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 


Title/Abstract:

"Control of dynamics of excitable networks"

The spatiotemporal dynamics of neuronal systems remains a challenging and important topic in theoretical neuroscience. To understand complex dynamics, it is necessary to start from controllable systems, such as excitable chemical systems and small neuronal networks. In my talk, I will begin with the control of spatiotemporal dynamics of photosensitive Belousov-Zhabotinsky (BZ) systems. Due to their amenability to experimental control and theoretical analyses, photosensitive BZ systems have served as ideal model systems in advancing our understanding of complex networks. I will then cover the dynamics of two-neuron networks consisting of one spiking biological neuron and one computational model neuron coupled via dynamic clamp. I will introduce phase response curves (PRCs) that are used to analyze and predict dynamics of these small networks. Finally, I will propose an experimental design to map the synaptic connections among different types of neurons in real neuronal networks in neocortical slices, based on measured spatiotemporal dynamics. In the proposed study, two-photon laser scanning microscopy will be used to record cellular calcium dynamics of the networks, which will be controlled by two-photon photostimulation (uncaging) techniques.
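As a toy illustration of how a phase response curve can be used to predict the dynamics of such small networks (a generic textbook construction, not the speaker's dynamic-clamp experiments), one can iterate the firing-phase map of a periodically perturbed oscillator; convergence of the map to a fixed point predicts phase locking. The PRC shape and input period below are invented for illustration.

import numpy as np

def prc(phi):
    # Illustrative phase response curve: phase advance produced by an input
    # arriving at phase phi (phase measured in cycles, i.e. in [0, 1)).
    return 0.15 * np.sin(2 * np.pi * phi)

def phase_map(phi, t_input=0.9):
    # Firing-phase map for a unit-period oscillator receiving one input per
    # input period t_input: new phase = old phase + elapsed time + PRC shift.
    return (phi + t_input + prc(phi)) % 1.0

phi = 0.2
for _ in range(200):
    phi = phase_map(phi)
print("predicted 1:1 locking at phase offset ~", round(phi, 3))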

 

Host: Gabriel Silva, gsilva@ucsd.edu

 

Claudia Lainscsek: Probing Epilepsy In Human Cortex With Delayed Differential Equations (01/26/2012)

Sponsor: Institute for Neural Computation

Affiliation:
CNL, Salk Institute, and
Institute for Neural Computation, UCSD
http://cnl.salk.edu/

Time: 1230 -1330

Location:

San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html

 


Title/Abstract:

"Probing Epilepsy In Human Cortex With Delayed Differential Equations"

Time series analysis with nonlinear delay differential equations (DDEs) is a very powerful tool since it reveals spectral as well as topological properties of the underlying dynamical system. Here DDEs are used to identify different regimes in ECoG (Electrocorticography) data. Electrocorticography is the practice of using electrodes placed directly on the exposed surface of the brain to record electrical activity from the cerebral cortex. ECoG is currently considered to be the "gold standard" for defining epileptogenic zones in clinical practice. A general form for the DDEs relates the derivative at a data point to previous data points of the signal. The linear terms of such a DDE correspond to the main frequencies in the signal. For n independent frequencies in the signal, 2n − 1 linear terms are needed. The nonlinear terms in the DDE are related to nonlinear couplings between the harmonic signal parts. DDEs can also be re-written as functions of dynamical higher order data correlations. These dynamical higher order data correlations can be seen as generalizations of Nth-order data moment functions such as the auto-correlation (2nd-order moment) and the bi-correlation (3rd-order moment). Comparing both versions of higher order data correlations can reveal useful information when analyzing non-linear data. The DDE framework can therefore be seen as a time-domain analysis tool akin to Fourier analysis that is highly robust against noise contamination and computationally fast.

In multichannel epilepsy ECoG data the nonlinear parts of the signal are of special interest. A simple nonlinear two-term DDE can be used to reliably flag both artifacts and seizures through a large model error, and the two can be clearly distinguished by applying ICA to the DDE outputs. Such an analysis can also reveal the seizure onset channels of each seizure. The DDE outputs further reveal three distinct stages within each seizure, as well as post-seizure states.
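A schematic example of the approach, using an illustrative (not the talk's) two-term DDE of the form dx/dt ≈ a1 x(t − τ1) + a2 x(t − τ1) x(t − τ2): the coefficients are fitted by least squares and the normalized residual serves as the "model error" feature that flags atypical epochs. Delays, signals, and parameter values are all placeholders.

import numpy as np

def dde_model_error(x, tau1=5, tau2=12, dt=1.0):
    # Fit dx/dt ~ a1*x(t-tau1) + a2*x(t-tau1)*x(t-tau2) by least squares and
    # return the normalized residual ("model error") for this data window.
    dxdt = np.gradient(x, dt)
    start = max(tau1, tau2)
    y = dxdt[start:]
    f1 = x[start - tau1:len(x) - tau1]            # linear delayed term
    f2 = f1 * x[start - tau2:len(x) - tau2]       # nonlinear product term
    A = np.column_stack([f1, f2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((y - A @ coeffs) ** 2)) / np.std(x)

# Toy comparison: a smooth oscillation vs. the same signal with a noisy burst
# (standing in for an artifact); the burst yields a much larger model error.
t = np.arange(5000)
clean = np.sin(2 * np.pi * t / 80)
burst = clean.copy()
burst[2000:2200] += 3 * np.random.default_rng(1).standard_normal(200)
print(dde_model_error(clean), dde_model_error(burst))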

 

Host: Peter Rowat, peter@pelican.ucsd.edu

 

Lars Kai Hansen: Sparse Non-Linear Denoising Of fMRI Data (01/19/2012)

Sponsor: Institute for Neural Computation

Affiliation:
Director, THOR Center for Neuroinformatics
Head of Section Cognitive Systems
DTU Informatics, Technical University of Denmark
http://www.imm.dtu.dk/~lkh

Time: 1230 -1330

Location:

Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD


Title/Abstract:

"Sparse Non-Linear Denoising Of fMRI Data"

We investigate non-linear denoising of functional brain images by kernel principal component analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as "the pre-image problem". Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed. In many applications, including functional magnetic resonance imaging (fMRI), it is of interest to denoise a sparse signal. To meet this objective we investigate sparse pre-image reconstruction by a Lasso-type regularization. We find that sparse estimation provides better brain-state decoding accuracy and a more reproducible pre-image. These two important metrics are combined in an evaluation framework which allows us to optimize both the degree of sparsity and the non-linearity of the kernel embedding. The latter result provides evidence of signal manifold non-linearity in the specific fMRI case study.

TJ Abrahamsen, LK Hansen. Sparse non-linear denoising: Generalization performance and pattern reproducibility in functional MRI. Pattern Recognition Letters 32(15):2080–2085 (2011).

PM Rasmussen, TJ Abrahamsen, KH Madsen, LK Hansen. Nonlinear denoising and analysis of neuroimages with kernel principal component analysis and pre-image estimation. NeuroImage, in minor revision (2012).
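For orientation, the sketch below shows kernel PCA denoising on synthetic data using scikit-learn; its built-in (kernel-ridge) pre-image map stands in for the sparse, Lasso-regularized pre-image studied in the papers above, and the toy "images", kernel width, and component count are arbitrary.

import numpy as np
from sklearn.decomposition import KernelPCA

# Toy data: noisy samples of a 1-D nonlinear manifold embedded in 50 dimensions.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
clean = np.column_stack([np.cos(t), np.sin(2 * t)]) @ rng.standard_normal((2, 50))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# RBF kernel PCA; fit_inverse_transform=True learns a pre-image map by kernel
# ridge regression, so denoised feature-space points can be mapped back to
# input space (this replaces the sparse pre-image of the talk).
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True, alpha=0.1)
denoised = kpca.inverse_transform(kpca.fit_transform(noisy))

print("mean squared error to clean data, before:", np.mean((noisy - clean) ** 2))
print("mean squared error to clean data, after: ", np.mean((denoised - clean) ** 2))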

 

Host: Scott Makeig, smakeig@ucsd.edu

 

Fall 2011

Mikhail Rabinovich: Cognitive information dynamics (12/08/2011)

Sponsor: Institute for Neural Computation

Affiliation:
UCSD BioCircuits Institute
http://biocircuits.ucsd.edu/rabin/

Time: 1230 -1330

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html


Title/Abstract: (Download talk via PDF file here...)

"Cognitive information dynamics"

The analysis of the temporal evolution of brain information is crucially important for the understanding of higher cognitive mechanisms in normal and pathological states. From the perspective of information dynamics, we will discuss working memory capacity, binding phenomena and some other functions of brain activity. In contrast with the classical description of information theory, brain information dynamics deals with problems such as the stability/instability of information flows, their quality, the timing of sequential processing, the top-down cognitive control of perceptual information, and information creation. In this framework, different types of information flow instabilities correspond to different cognitive disorders. On the other hand, the robustness of cognitive activity is related to the control of the information flow stability. We discuss these problems using experimental, computational and theoretical approaches, and we argue that cognitive activity is better understood considering information flows in the phase space (in contrast to physical–brain space) of the corresponding dynamical model. In conclusion we will consider some engineering applications.

 

Tim Gentner: Learning-Dependent Modification Of Auditory Responses Across Forebrain
Networks (11/17/2011)

Sponsor: Institute for Neural Computation

Affiliation:
UCSD Dept. of Psychology, and Neurosciences Graduate Program
http://gentnerlab.ucsd.edu/


Time: 1230 -1330

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html


Title/Abstract:

"Learning-Dependent Modification Of Auditory Responses Across Forebrain Networks"

Sensory systems are preferentially biased to process natural signals that are most likely to carry relevant information. These biases are achieved through the hierarchical representation of increasingly high-dimensional stimulus features, and the learning-dependent association of specific features with specific behavioral goals. Surprisingly little is known about these processes at either the circuit or cellular level in the auditory system. I will discuss the coding of natural vocalizations across multiple auditory forebrain regions in a species of songbird, the European starling. I will propose a canonical cortical circuit, modified by learning, that combines behaviorally relevant and irrelevant signals to produce behaviorally informative representations in single neurons. At higher levels in the auditory system, acoustic features of natural signals that inform learned behavioral goals are coded with increased fidelity in the population correlation structure.

 

Samat Moldakarimov: Feedback Model Of Visual Perceptual Learning (11/03/2011)

Sponsor: Institute for Neural Computation

Affiliation:
Computational Neurobiology Laboratory
Salk Institute for Biological Studies
http://cnl.salk.edu


Time: 1230 -1330

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html


Title/Abstract:

"Feedback Model Of Visual Perceptual Learning"

Perception of visual stimuli improves with practice. Specificity of the improvements for stimulus features suggested an early cortical site of neural adjustments, where receptive fields are small. However, neural changes in the primary visual cortex (V1) that may underlie visual perceptual learning are still unclear. Unlike perceptual learning in other sensory modalities, stimulus preferences of V1 neurons did not alter due to perceptual learning: V1 neurons responded preferably to the same stimuli as before learning. Reports on size changes of receptive fields in V1 neurons were also controversial: One study reported that the receptive fields of V1 neurons did not alter due to learning but another study found smaller receptive fields after learning.

Previously suggested models of visual perceptual learning based on plasticity in recurrent connections among V1 neurons failed to explain the observed stability of stimulus preference in V1 neurons, and also could not resolve contradictions between two studies. Here we present a model of visual perceptual learning, in which interaction between V1 and higher cortical areas is a critical feature. We show in the model that learning results in changes in V1 neurons due to stronger feedback inputs from higher cortical areas. Perceptual learning in our model occurs without altering stimulus preferences of V1 neurons, as was observed in experiments. The model also resolves controversies observed in visual perceptual learning experiments and makes testable predictions.

 

 

Joe Snider: EEG In An Immersive Virtual Environment With Free Movement: Object
Recognition And Theta Auto-Correlation (10/20/2011)

Sponsor: Institute for Neural Computation

Affiliation: Poizner Laboratory, Temporal Dynamics of Learning Center, Institute for Neural Computation

Time: 1230 -1330

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html


Title/Abstract:

"EEG In An Immersive Virtual Environment With Free Movement: Object Recognition And Theta Auto-Correlation"

People navigate novel, complex environments on a daily basis, and they are able to quickly and efficiently form representations that allow for accurate navigation and interaction. In this study we are particularly interested in the full behavior of subjects when navigating a novel environment. To perform the experiment, the subjects donned virtual reality gear (a headset, an inertial orientation monitor, and real-time optical tracking) and a 64-channel EEG cap. They entered a virtual room containing a rich set of objects, which matched the dimensions of the real room (~15'x20') in which they were freely moving about. The experiment was done in two sessions over two consecutive days. On day one, after entering the virtual environment, subjects first freely explored the environment unsupervised for 10 minutes with no instruction except "explore the environment." Then, for five subsequent trials, opaque bubbles were placed around 39 objects in the room and the subjects walked up to one bubble at a time (indicated by turning green) and popped it by touching it with their hand to see the object hidden underneath. In part as a cover task, they indicated on a variable slider the "interest" they had in the object. On day two, there was no free exploration, but the subjects were presented with bubbles to pop, and after popping each bubble and seeing the object, the subjects indicated how certain they were the object was the same one that had been there on day one by adjusting a slider. Of the 39 total objects, a random 13 were changed by rearranging their positions.

Behaviorally, subjects correctly identified 70%-96% of the object changes. Strikingly, during the walking itself, we observed correlations of the theta wave recorded over midline frontal, central and parietal areas with the allocentric position of the subject in the room. These EEG signals may represent a high-level combination of hippocampal navigation-related cells with parietal cortex-related signals. These navigation-related correlations from the first day were then related to the behavior on the second day: stronger spatial correlations on day one corresponded to better memory performance on day two.
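As a rough sketch of the kind of analysis implied here (assuming a single EEG channel and one position coordinate, already time-aligned and sampled at the same rate; band edges and the synthetic data are illustrative, not taken from the study):

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_position_corr(eeg, position, fs=256.0):
    # Band-pass one EEG channel to the theta band (4-8 Hz), take the Hilbert
    # amplitude envelope, and correlate it with one coordinate of the tracked
    # allocentric position of the subject.
    b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
    theta_env = np.abs(hilbert(filtfilt(b, a, eeg)))
    return np.corrcoef(theta_env, position)[0, 1]

# Synthetic one-minute example: theta amplitude grows with the x coordinate.
fs, n = 256.0, 256 * 60
rng = np.random.default_rng(0)
pos_x = np.cumsum(rng.standard_normal(n)) / 100.0
eeg = rng.standard_normal(n) + (3 + pos_x) * np.sin(2 * np.pi * 6 * np.arange(n) / fs)
print(theta_position_corr(eeg, pos_x, fs))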

 

Host: Howard Poizner

Christopher Rozell: Sparse Coding Networks And Compressed Sensing In Neural Systems (10/06/2011)

Sponsor: Institute for Neural Computation

Affiliation: Georgia Institute of Technology
http://users.ece.gatech.edu/~crozell/

Time: 1230 -1330

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html


Title/Abstract:

"Sparse Coding Networks And Compressed Sensing In Neural Systems"

Many recent results in the signal processing community have shown that signal models based on low-dimensional geometric structure such as sparsity (or manifolds) can be very powerful for many applications. For example, it is clear now that a whole host of inverse problems can be solved more effectively by taking advantage of this structure, with the recent example of compressed sensing (i.e., recovering signals from highly undersampled incoherent measurements) gaining significant attention. Interestingly, neural coding hypotheses based on these same sparse signal models have demonstrated an ability to account for observations such as receptive field properties in sensory systems. In this talk I will discuss our previous work on implementing sparse coding models in biophysically plausible architectures. We will show that beyond simply accounting for receptive field structure, these networks can account for observed response properties of V1 cells. Specifically, I will highlight our recent results showing that these models can account for a wide variety of non-classical receptive field effects reported in V1. I will also highlight our preliminary results and ongoing work drawing connections between neural computation and the results of compressed sensing. In particular, we will briefly discuss our contributions to the compressed sensing literature that can be used in conjunction with sparse coding networks to model two distinct systems: communication bottlenecks in sensory pathways (e.g., the optic nerve) and recurrent networks for high-capacity sequence memory.
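A minimal sketch of the general idea of a dynamical sparse coding network, in the spirit of (but much simpler than) the biophysically plausible architectures discussed above: leaky integrators driven by the stimulus, competing through lateral inhibition, with thresholded outputs that converge to a sparse code. The dictionary, constants, and thresholding choices are illustrative.

import numpy as np

def sparse_code(stimulus, D, lam=0.05, tau=1.0, dt=0.05, steps=400):
    # Leaky-integrator dynamics with lateral inhibition: each unit integrates
    # its feedforward drive D^T s, is inhibited by other active units through
    # D^T D - I, and emits a soft-thresholded activation.
    G = D.T @ D - np.eye(D.shape[1])
    b = D.T @ stimulus
    u = np.zeros(D.shape[1])
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(steps):
        a = soft(u)
        u += (dt / tau) * (b - u - G @ a)
    return soft(u)

# Toy example: recover a 3-sparse code through a random overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(128)
a_true[[5, 40, 99]] = [1.5, -2.0, 1.0]
a_hat = sparse_code(D @ a_true, D)
print("largest recovered coefficients at indices:", np.argsort(-np.abs(a_hat))[:3])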

 

Biography: Dr. Christopher Rozell is an Assistant Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. Dr. Rozell received a B.S.E. in Computer Engineering and a B.F.A. in Performing Arts Technology (Music Technology) in 2000 from the University of Michigan. He attended graduate school at Rice University where he was a Texas Instruments Distinguished Graduate Fellow, receiving the M.S. and Ph.D. in Electrical Engineering in 2002 and 2007, respectively. He spent the summer of 2002 as a researcher at MIT Lincoln Laboratory, and following graduate school was a postdoctoral research fellow in the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley. Dr. Rozell joined the Georgia Tech faculty in July 2008, where he is affiliated with the Laboratory for Neuroengineering and the Center for Signal and Image Processing. His current research interests include constrained sensing systems, sparse representations, statistical signal processing, and computational neuroscience.


Host: Todd Coleman, tpcoleman@ucsd.edu

 

Summer 2011

Todd Coleman: A Team Decision Theory Approach To The Design Of Brain-Machine Interfaces (09/22/2011)

Sponsor: Institute for Neural Computation

Affiliation: Department of Bioengineering, UCSD
http://coleman.ucsd.edu/

Time: 1230 -1330

Location:
San Diego Supercomputer Center, East Annex
South Wing, Level B1, EB-129
http://inc.ucsd.edu/contactus.html


Title/Abstract:

"A Team Decision Theory Approach To The Design Of Brain-Machine Interfaces"

In this presentation, we espouse an interpretation of brain-machine interfaces as two agents cooperating to achieve a common goal: a bi-directionally noisy coupling between the user and the external device. With this viewpoint, we address three key questions that are of crucial importance to elicit superior performance:

- what feedback should be delivered to the user;
- how the user should react to the feedback and its intended objective to imagine the subsequent desired control command;
- how the external device should sequentially map its recorded neural signals to a control action.

We discuss designing the protocol of interaction between the human and the external device through the lens of team decision theory, decentralized control theory, and feedback information theory. As exemplar applications, we consider three brain-machine interfaces. We formulate, solve, and implement team decision problems pertaining to (i) neural control of a robotic arm, (ii) exact neural specification of a smooth path in two dimensions, and (iii) transfer of expertise in game strategy from a brain to an artificial intelligence algorithm without the subject volitionally performing - or imagining - motor outputs. Throughout the talk, we emphasize the need for not only a solid theoretical foundation but also a solution that has form-factor properties that allow it to be easily implemented by a human. Lastly, we remark on how this viewpoint is applicable to general human-machine interface systems and to more general networks beyond simply one human and one computer.
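As a toy illustration of just the third question above, and emphatically not the team-decision formulation of the talk, here is a generic sequential Bayesian rule by which a device could map a stream of noisy single-trial decodes of a binary intent into an action once it is sufficiently confident. The reliability and threshold values are made up.

import numpy as np

def sequential_decode(decodes, p_correct=0.7, threshold=0.95):
    # Each element of `decodes` is a noisy single-trial classification (0 or 1)
    # of the user's intended binary command, correct with probability p_correct.
    # Update the posterior over the intent and act once it crosses the threshold.
    post = 0.5                                  # uniform prior on intent = 1
    for k, z in enumerate(decodes, start=1):
        like1 = p_correct if z == 1 else 1 - p_correct
        like0 = 1 - p_correct if z == 1 else p_correct
        post = like1 * post / (like1 * post + like0 * (1 - post))
        if post > threshold or post < 1 - threshold:
            return int(post > 0.5), k, post     # action, trials used, confidence
    return int(post > 0.5), len(decodes), post

rng = np.random.default_rng(3)
z = (rng.random(50) < 0.7).astype(int)          # user intends "1", 70% reliable
print(sequential_decode(z))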

 

Biography: Todd P. Coleman received the B.S. degrees in electrical engineering (summa cum laude), as well as computer engineering (summa cum laude) from the University of Michigan, Ann Arbor, in 2000, along with the M.S. and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology (MIT), Cambridge, in 2002, and 2005. During the 2005-2006 academic year, he was a postdoctoral scholar at MIT and Massachusetts General Hospital in computational neuroscience. From July 2006 - June 2011, he was an Assistant Professor in the ECE Department and Neuroscience Program at UIUC. As of July 1, 2011, he has been an Associate Professor of Bioengineering in the Jacobs School of Engineering and affiliated with the Institute for Neural Computation at UCSD.

His core research interests include applied probability (within information theory, control theory, and statistics) as well as neuroscience. He applies these methods to understanding causal influences in (neuronal/communication/social) networks, designing brain-machine interfaces from a team decision theory viewpoint, and designing novel non-invasive and invasive flexible electronics systems to probe and interrogate brain function.

In Fall 2008, he was a co-recipient of the University of Illinois College of Engineering's Grainger Award in Emerging Technologies for development of a novel, practical timing-based technology. Beginning Fall 2009, Coleman has served as a co-Principal Investigator on a 5-year NSF IGERT interdisciplinary training grant for graduate students, titled "Neuro-engineering: A Unified Educational Program for Systems Engineering and Neuroscience". Coleman also has been serving on the DARPA ISAT study group for a 3-year term, beginning Fall 2009.


Tony Bell: Even closer towards a theory of learning and levels, and why we need such a theory (08/05/2011)

Sponsor: Institute for Neural Computation

Affiliation: Redwood Center for Theoretical Neuroscience, UC Berkeley; Temporal Dynamics of Learning Center, UC San Diego

Time: 1230 -1330

Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD


Title/Abstract:

"Even closer towards a theory of learning and levels, and why we need such a theory"

Host: Terry Sejnowski

 

Spring 2011

Jamie Lukos: On-Line Visuomotor Control In Parkinson's Disease (06/09/2011)

Sponsor: Institute for Neural Computation

Affiliation: Poizner Lab

Time: 1230 -1330

Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD


Title/Abstract:

"On-Line Visuomotor Control In Parkinson's Disease"

Posterior parietal cortices are known to be critical for online visuomotor control, but the role of basal ganglia-cortical loops is poorly understood. To investigate this issue, we are studying patients with Parkinson's disease (PD), on and off dopaminergic therapy, while they reach for and grasp a virtual rectangular object whose orientation occasionally rapidly changes during the reach. Our previous studies have led us to hypothesize that PD subjects will be most impaired in making corrective responses to the object perturbation when they cannot see their hand early in the movement, and that increasing tonic levels of dopamine will not reverse these impairments. Subjects grasped the virtual objects using two three-degree-of-freedom force-feedback robots (PHANToMs) that provided haptic interaction and feedback. Hand movements, eye movements and EEG were simultaneously recorded. In 25% of trials, the object was rotated during the reach and subjects had to adjust the size of their hand opening (aperture) online (perturbed trials). Moreover, on half of the trials, visual feedback of the hand was blocked from movement onset to two-thirds of the reach. Preliminary results from 3 PD patients and 3 controls indicate that controls successfully adapted their grasp much more often than either PD off-meds or PD on-meds (71.7% vs. 48.4% and 45.1% correct grasps, respectively). For successful grasps, Fig. 1A shows individual trajectories of the thumb and index finger for one control and one PD patient. In the unperturbed trials, the control showed clear modulation of grip aperture over the course of the reach, but aperture modulation was nearly absent for the PD patient on or off medications. During the perturbed trials, the control subject generated a smooth correction throughout the reach. In contrast, the PD patient generated a segmented corrective response, as if the adaptation was a separate event from hand transport. As expected, the control subject's reach velocity was higher than the PD patient's in all conditions. Fig. 1B indicates that blocking visual feedback of the hand greatly impaired the PD patient when off medication. The patient's corrective response to the perturbation occurred more often after vision of the hand was restored. Patterns of eye-hand coordination indicate that, unlike controls, PD subjects look at their hands throughout the reach, thus operating in a mode of visual guidance rather than predictive control. These initial data indicate that PD patients show marked motor control deficits in adapting to sudden environmental perturbations; that these deficits become particularly pronounced when PD patients cannot see their moving limb; and that dopamine repletion may partially remediate corrective response control to environmental perturbations when vision of the limb is removed. The association of the cortical EEG with these eye and hand dynamics is currently being analyzed.

 

 

 

Host: Gert Cauwenberghs

 

Ruey-Song Huang: Mapping multisensory representations of peripersonal space (05/19/2011)

Sponsor: Institute for Neural Computation

Affiliation:

Time: 1230 -1330

Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD


Title/Abstract:

"Mapping multisensory representations of peripersonal space"

This talk will present our recent progress in mapping multisensory representations of peripersonal space using fMRI, with topics covering both technical developments and scientific findings. Recently, we have developed wearable techniques for high-density and/or wide-range tactile stimulation in the MRI scanner. Sixty-four channels (expandable to 128) of computer-controlled air puffs can be delivered via plastic tubes/nozzles embedded in an air suit, including a face mask, turtleneck, gloves, and pants. These wearable techniques open up the possibility of presenting more complex tactile stimuli with programmable spatial-temporal patterns on the body surface, e.g., a 2-D tactile display or tactile apparent motion. Multiple two-condition block-design scans revealed a high-level somatotopic homunculus consisting of the parietal face, lip, finger, and shoulder areas in the superior parietal lobe. Retinotopic mapping using a phase-encoded design and wide-field visual stimuli (masked videos or looming objects) further revealed aligned visual-tactile maps in the same areas. A region of lower visual field representation in the post-central sulcus partially overlaps with the parietal finger area, which is anterior and lateral to the parietal face/lip areas. Another region of lower visual field representation, superior and medial to the parietal face area, partially overlaps with the parietal shoulder area. However, regions of upper visual field representation were restricted to the parietal face area. We suggest that aligned multisensory homunculi may play important roles in combining visual and tactile information to facilitate movements in peripersonal space (e.g., eating involves hand-to-mouth coordination in the lower visual field).

Host: Gert Cauwenberghs

 

Yu Mike Chi: Wireless non-contact EEG (05/05/2011)

Sponsor: Institute for Neural Computation

Affiliation:

Time: 1230 -1330

Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD


Title/Abstract:

"Wireless non-contact EEG"


EEG technology has remained an indispensable tool for brain research as a result of its simplicity and low cost. Despite continual advancements towards decoding EEG signals for a multitude of applications, including brain-computer interfaces, medical diagnostics and consumer applications, widespread adoption of EEG technology has yet to occur. Conventional EEG sensors have always necessitated extensive preparation with wet electrodes and even scalp preparation, which limits their use outside of laboratory conditions. In light of this limitation, dry electrodes, which do not require conductive gels, and non-contact electrodes, which can operate through hair, have been studied as enablers of practical, mobile EEG platforms. This talk will focus on a review of dry electrodes and the development of a new type of non-contact electrode. Previous attempts at building non-contact electrodes have been hampered by the limitations of the standard amplifiers available on the market. In this work, we have designed a fully custom integrated sensor front-end specifically to bypass many of the noise and accuracy problems encountered thus far.

http://www.isn.ucsd.edu/pubs/rbme10.pdf

 

Host: Gert Cauwenberghs

 

Marni Stewart Bartlett: Modeling natural facial behavior (04/07/2011)

Sponsor: Institute for Neural Computation

Affiliation:
Marian Stewart Bartlett
Computational Face Group
Machine Perception Lab
INC and Calit2, UCSD
http://mplab.ucsd.edu/~marni/


Time: 1230 -1330

Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD


Title/Abstract:

"Modeling natural facial behavior"

This talk reviews recent research in my lab modeling natural facial expression with automated systems. Automated systems enable new research into expression dynamics that was previously infeasible with manual coding, or which would have required application of electrodes to the face, which can influence facial behavior. The talk first describes projects on measurement of dynamic coupling of facial behavior to measure spontaneous mimicry, as well as detection of deception. We show that facial mimicry correlates with the ability to detect when a person is lying. This had long been hypothesized by embodied theories of cognition but never previously shown. These findings were made possible by the use of novel computer vision techniques that allowed us to obtain rich quantitative information about facial dynamics. The talk next describes development of interventions for children with autism. The interventions employ computer vision systems to train facial expression production, provide practice in facial mimicry, and immediate feedback on the child's facial expressions. Finally, if time permits, I will review our work on children's facial behavior during problem solving. Clustering techniques are employed to demonstrate differences in expression dynamics between older and younger children during problem solving.

 

Host: Gert Cauwenberghs

 

 

Winter 2011

Tom Liu: Multimodal Imaging Of Resting-State Functional Connectivity (03/24/11)

Sponsor: Institute for Neural Computation

Affiliation:
Center for Functional MRI
Department of Radiology, UCSD
http://fmri.ucsd.edu/people/ttliu.html

Time: 1230-1330

Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD


Title/Abstract:

"Determining Functional Connections Among Neurons Based Upon Their Activity Patterns"

In the absence of an explicit task, the "resting" brain exhibits large spontaneous fluctuations that exhibit coherence within multiple functional networks. To date, our knowledge of these resting-state networks has come primarily from measurements of fluctuations in the blood oxygenation level dependent (BOLD) signal used in functional magnetic resonance imaging (fMRI). However, because the BOLD signal is a complex function of both neural and vascular factors, the interpretation of changes in resting-state BOLD connectivity is not always straightforward. For example, we have found that caffeine significantly reduces resting-state BOLD connectivity in multiple networks, but it is not clear whether this reduction reflects a true decrease in neural connectivity versus a secondary effect of the caffeine-related decrease in cerebral blood flow. To resolve this question, we are using simultaneously acquired EEG and fMRI measures, as well as a linked set of MEG measures, to determine the extent to which fMRI measures reflect underlying changes in neuroelectric connectivity. In this talk, I will describe approaches for assessing connectivity using the various modalities and present preliminary results comparing the multimodal measures.
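For context, the simplest fMRI-side connectivity measure alluded to here is seed-based correlation; a minimal sketch follows (region indices and the synthetic data are placeholders, and this is not the multimodal EEG/fMRI/MEG analysis of the talk):

import numpy as np

def seed_connectivity(bold, seed):
    # Seed-based resting-state connectivity: Pearson correlation between the
    # seed region's BOLD time course and every other region's time course.
    # bold: (n_timepoints, n_regions) array of preprocessed ROI signals.
    z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
    return (z.T @ z[:, seed]) / bold.shape[0]

# Toy usage: region 10 is constructed to covary with the seed region 3.
rng = np.random.default_rng(0)
bold = rng.standard_normal((240, 90))
bold[:, 10] = 0.7 * bold[:, 3] + 0.3 * rng.standard_normal(240)
print(seed_connectivity(bold, seed=3)[10])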

 

Host: Gert Cauwenberghs

 

Bill Kristan: Determining Functional Connections Among Neurons Based Upon Their Activity Patterns (03/10/11)

Sponsor: Institute for Neural Computation

Affiliation: Section of Neurobiology Division of Biological Sciences, UCSD
http://www.biology.ucsd.edu/labs/kristan/

Time: 1230-1330

Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD


Title/Abstract:

"Determining Functional Connections Among Neurons Based Upon Their Activity Patterns"

My major research interest is finding neuronal circuits that underlie behavior, using the nervous system of the medicinal leech. We use electrophysiological recordings and voltage-sensitive dye imaging to determine which leech neurons are active during several leech behaviors: swimming, crawling, shortening, and local bending. We are now using these recordings to identify all the neurons and to predict the connectivity among them. We use a variety of correlation techniques to predict connections between pairs of neurons, and intracellular recordings to test our predictions. These techniques should be useful for all kinds of multi-unit recordings, including calcium imaging and multi-unit arrays.
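One of the simplest such correlation techniques is the pairwise cross-correlogram of binned spike trains; a sharp, short-latency peak is consistent with (though not proof of) a functional connection. The sketch below is generic, and its bin counts, firing rates, and lag are invented.

import numpy as np

def cross_correlogram(a, b, max_lag=50):
    # Coincidence counts sum_t a[t]*b[t+lag] for binary spike trains a and b.
    # A peak at a short positive lag suggests an A -> B functional connection.
    lags = np.arange(-max_lag, max_lag + 1)
    counts = []
    for l in lags:
        if l >= 0:
            counts.append(np.sum(a[:len(a) - l] * b[l:]))
        else:
            counts.append(np.sum(a[-l:] * b[:len(b) + l]))
    return lags, np.array(counts)

# Toy example: neuron B tends to fire 5 bins after neuron A.
rng = np.random.default_rng(0)
a = (rng.random(20000) < 0.02).astype(int)
b = ((np.roll(a, 5) + (rng.random(20000) < 0.01)) > 0).astype(int)
lags, counts = cross_correlogram(a, b)
print("peak at lag:", lags[np.argmax(counts)])   # expected near +5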

 

Host: Gert Cauwenberghs

 

Marius Buibas: Engineering advances in mapping functional connectivity in cellular networks (02/24/2011)

Sponsor: Institute for Neural Computation

Affiliation: Silva Laboratory, Department of Bioengineering, UCSD

Time: 1230 -1330

Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD

 


Title/Abstract:

"Engineering advances in mapping functional connectivity in cellular networks"

We have developed a theoretical framework for estimating causal functional connectivity in neuronal cellular networks from experimental data that employs both parametric and non-parametric approaches and is implemented on parallel graphics processing units (GPUs). This talk will discuss the theoretical methods, experimental requirements, and performance of this framework. Additionally, I will present control-theoretic tools to measure network-level stability, observability, and controllability, with implications for understanding disease and the actions of remedies on network dynamics. Finally, I will discuss the problem of uniqueness of, or degeneracy in, functional connectivity estimates, with implications for the interpretability of experimental data.
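To give a flavor of the control-theoretic side (a standard linear-systems calculation, not the speaker's framework): for a linearized network model x[t+1] = A x[t] + B u[t], controllability can be checked from the rank of the Kalman controllability matrix. The toy network and drive below are invented.

import numpy as np

def controllability_rank(A, B):
    # Rank of [B, AB, A^2 B, ..., A^(n-1) B]; full rank (= number of nodes)
    # means the driven inputs can steer the network to any state.
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(C)

# Toy 4-node chain network driven only at the first node.
A = np.diag([0.5, 0.5, 0.5, 0.5]) + np.diag([0.3, 0.3, 0.3], k=-1)
B = np.array([[1.0], [0.0], [0.0], [0.0]])
print(controllability_rank(A, B), "of", A.shape[0], "state dimensions reachable")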

 

Host: Gert Cauwenberghs

 

Muhammad Tahir Akhtar: Active Noise Control and Biomedical Signal Processing (02/03/11)

Sponsor: Institute for Neural Computation

Affiliation: The University of Electro-Communications, Chofu-shi, Tokyo, Japan
Sabbatical Visiting Scholar, INC

Time: 1230-1330

Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD


Title/Abstract:

"Active Noise Control and Biomedical Signal Processing"

Abstract: Dr. Akhtar will present an overview of recent research in adaptive filtering for single-channel and multi-channel active noise control (ANC), and extensions to biomedical signal processing. We consider the following problems in ANC: 1) effect of measurement noise in single-channel ANC systems, 2) online secondary path modeling, 3) online acoustic feedback path modeling and neutralization, 4) ANC for impulse-like noise sources, and 5) effect of uncorrelated disturbance at the error microphone. The talk will focus on our recent results mitigating uncorrelated disturbance in ANC systems.

He will also present recent results extending these signal processing techniques to electroencephalography (EEG), mainly for artifact removal using independent component analysis (ICA) and blind source separation (BSS). Our focus is on ICA and wavelet based approaches for de-noising EEG signals, and on maintaining continuity in BSS for long EEG recordings. Our current research at INC is directed to further extending the effectiveness and efficiency of these algorithms for EEG and biomedical signal processing.
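For readers new to the area, the sketch below shows the textbook single-channel LMS adaptive canceller that underlies both the ANC and the EEG denoising settings above; it omits the secondary-path (FxLMS) modeling and the ICA/BSS machinery that are the actual subject of the talk, and its filter length, step size, and toy signals are arbitrary.

import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    # Adapt an FIR filter so the filtered reference noise matches the noise
    # component of the primary input; the running error is the cleaned output.
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]   # current and past reference
        e = primary[n] - w @ x                      # error = desired - estimate
        w += mu * e * x                             # LMS weight update
        out[n] = e
    return out

# Toy usage: a slow tone corrupted by noise that reaches the primary sensor
# through a short (unknown) FIR path, with the raw noise as the reference.
rng = np.random.default_rng(0)
N = 5000
noise = rng.standard_normal(N)
sine = np.sin(2 * np.pi * 0.01 * np.arange(N))
primary = sine + np.convolve(noise, [0.6, -0.3, 0.1])[:N]
cleaned = lms_cancel(primary, noise)
print("noise power before:", np.var(primary - sine),
      "after:", np.var(cleaned[1000:] - sine[1000:]))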

Biography: Muhammad Tahir Akhtar received the B.S. degree in electrical engineering from the University of Engineering and Technology Taxila, Pakistan, in 1997, the M.S. degree in systems engineering from Quaid-i-Azam University, Islamabad, Pakistan, in 1999, and the Ph.D. in electronic engineering from Tohoku University, Sendai, Japan, in 2004. From 2004 to 2005, he was a COE postdoctoral fellow at the Department of Electronic Engineering, Tohoku University.
Currently he is working as an Assistant Professor at the Center for Frontier Science and Engineering (CFSE), The University of Electro-Communications, Tokyo, Japan, a Special Visiting Researcher at The Center for Research and Development of Educational Technology (CRADLE), Tokyo Institute of Technology, Tokyo, Japan, and a Sabbatical Visiting Scholar at INC. His research interests include active noise control, adaptive signal processing, blind source separation and biomedical signal processing. Dr. Akhtar won the Best Student Paper Award at the IEEE 2004 Midwest Symposium on Circuits and Systems, Hiroshima, Japan.

 

Host: Gert Cauwenberghs

 

 

Spring 2010

Carol Lynne Krumhansl: the musical brain (06/17/2010)

Sponsor: Institute for Neural Computation

Affiliation: Cornell University

Time: 1100 -1200


Title/Abstract:

"The Musical Brain: What's There?"

The talk presents research showing that the musical brain contains information from the very abstract to the very concrete. An empirical test of a recent music-theoretic proposal concerning musical tension demonstrates that the cognitive representation of musical structure includes hierarchical trees similar to those proposed for language and that deeply theorized properties of music link to cognitive processes. At the other extreme, studies on music recognition suggest a great deal of surface information is encoded in memory. Very short excerpts of popular music can be identified with artist, title, and release date. Even when an excerpt is not identified, emotion and style judgments are consistent. This suggests that musical memory is extremely detailed and has an extraordinarily large capacity and also contains schematic information for identifying emotional content and style.

Bio: Carol Lynne Krumhansl received a B.A. and an M.A. in mathematics from Wellesley College and Brown University, respectively. In 1978, she received a Ph.D. in mathematical psychology from Stanford University, primarily under the supervision of Roger Shepard. Since 1980, she has been on the faculty of Cornell University where her research has focused on music cognition. The major strand of her research is the cognition of tonality, the primary organizing principle of Western music. She is author of Cognitive Foundations of Musical Pitch. Other research has included studies of musical rhythm and timbre, dance, musical performance, emotion, contemporary proposals in music theory, and the neuroscience of music. She is on sabbatical leave in San Diego for the academic year 2010 - 2011.

 

Host: Howard Poizner

 

Ralph J. Greenspan: from sleep to consciousness in Drosophila (06/10/2010)

Sponsor: Institute for Neural Computation

Affiliation: Kavli Institute for Brain and Mind, UCSD

Time: 1100 -1200


Title/Abstract:

"From Sleep To Consciousness In Drosophila: The Sublime To The Ridiculuous"

The cognitive potential of the fruit fly Drosophila melanogaster has been extensively probed in recent years and, as a result, our estimation of its sophistication has grown considerably. How do they do it? Do these invertebrates accomplish such feats by an altogether different mechanism than we do? Our research addresses these questions from the standpoint of probing brain states in the fruit fly from the deepest sleep to the highest state of alertness, using a combination of genetic, physiological, and behavioral approaches.

At the molecular level, the fruit fly shares many features of sleep regulation with mammals, of which the dopaminergic and EGFR signal transduction systems are prominent. In the realm of higher arousal, the fruit fly displays many of the key elements of attention: orientation, expectancy, stimulus discrimination and suppression, and sustainability. Finally, they share a critical physiological feature with attention and consciousness states in humans: an increased degree of coherence (phase-locking) among multiple brain regions during the attention-related task.

While it is not productive to spend too much time worrying about whether fruit flies are conscious, they may possess some of the same requisite, underlying mechanisms, and thus are worthy of further study in this direction.

 

Host:

 

Darren Schreiber: this is your brain on politics (05/27/2010)

Sponsor: Institute for Neural Computation

Affiliation: Political Science, UC San Diego

Time: 1100 -1200


Title/Abstract:

"This Is Your Brain On Politics"

In political science, we have long had low levels of explanatory power with conventional models. Accounting for just a quarter of the variance is usually a tremendous accomplishment and often requires many independent variables and sophisticated statistical techniques. Two dogmas of the discipline, the behaviorist approach and rational choice theory, preclude biological explanations. In this talk, however, I will review a variety of results that show how some of the central phenomena of interest in the field can be accounted for using work based in genetics and neuroscience. I'll discuss work on race, political sophistication, voter turnout, and partisanship. And, I will show how we can use fMRI to predict your political party affiliation with shocking accuracy and evidence of the biological basis of egalitarianism.

 

Host:

 

Steve Furber: building brains (05/13/2010)

Sponsor: Qualcomm and Brain Corporation

Affiliation: Computer Science Department, University of Manchester

Time: 1100 -1200

Location:

Irwin M. Jacobs Qualcomm Hall
5775 Morehouse Drive
San Diego, CA 92121


Title/Abstract:

"Building Brains"

Computer technology has advanced spectacularly since the first program was executed by the Manchester 'Baby' machine on June 21, 1948, but if this progress is to be sustained there are major challenges ahead in the areas of transistor predictability and reliability, and in the exploitation of massively-parallel computing resources. Biology has solved both of these problems, but we don't understand how those solutions function at the level of information processing. Two questions arise from this line of thinking:

* Can massively-parallel computers be used to accelerate our understanding of brain function?
* Can our growing understanding of brain function point the way to more efficient, fault-tolerant computation?

While these questions remain so far unanswered, they suggest a line of investigation that has been recognized under the Grand Challenge of 'Building Brains'.

Bio: Dr. Furber received his B.A. degree in Mathematics in 1974 and his Ph.D. in Aerodynamics in 1980 from the University of Cambridge, England. From 1980 to 1990 he worked in the hardware development group within the R&D department at Acorn Computers Ltd, and was a principal designer of the BBC Microcomputer and the ARM 32-bit RISC microprocessor, both of which earned Acorn Computers a Queen's Award for Technology. Upon moving to the University of Manchester in 1990 he established the Amulet research group which has interests in asynchronous logic design, power-efficient computing, and neural systems engineering where the major activity is the SpiNNaker project. This project's focus is on building a massively-parallel chip multiprocessor system for modeling large systems of spiking neurons in real time. The ultimate goal is to build a machine that incorporates a million ARM processors linked together by a communications system that can achieve the very high levels of connectivity observed in biological neural systems. Such a machine would be capable of modeling a billion neurons in real time (which is still only around 1% of the human brain).

 

Host: Qualcomm CTO Dr. Roberto Padovani

 

Joe Snider, Dongpyo Lee, Deborah Harrington, Howard Poizner: virtual grasping in Parkinson's disease (04/29/2010)

Sponsor: Institute for Neural Computation

Affiliation: Institute for Neural Computation, UCSD

Time: 1100 -1200


Title/Abstract:

"Virtual Grasping In Parkinson's Disease"

We will present data from an ongoing study into the nature of the neural and behavioral deficits of patients with Parkinson's disease (PD). We have hypothesized that PD motor deficits are of two distinct types, one due to loss of gain resulting in small and slow movements, and the other due to loss of precise, differentiated basal ganglia function resulting in poorly coordinated movement. We further hypothesized that dopamine replacement therapy may remediate the former but not the latter type of deficit. We tested this hypothesis using a novel paradigm in which subjects used two haptic robotic devices to reach to and grasp virtual objects. The objects had different dynamic properties and spatial orientations relative to gravity. Twenty-one PD patients, on and off dopamine medication, and 24 age-matched controls have been tested. PD patients off medication showed significantly reduced peak velocities during the reach. In addition, they inappropriately timed and coordinated the opening of their fingers during the reach with the transport and changes in orientation of their arm. After touching the object, subjects had to switch their action from translating the hand to lifting the object, and that switch was significantly delayed in PD patients. During the lift, PD patients were unable to maintain the specified lift trajectory, a task requiring coordination of the entire hand-arm system. Dopamine replacement therapy significantly increased patients' peak reach velocities and the squeeze forces used, but minimally ameliorated their coordination deficits. Thus, repletion of dopamine in the degenerated basal ganglia is not sufficient to restore patterns of neuronal firing required to support coordinated sensorimotor processing.

In a second phase of the study, these same subjects on and off dopamine medication performed a finger sequencing task during fMRI. In collaboration with Deborah Harrington's group, we will be correlating disease-related patterns of brain activity with the behavioral deficits shown in the task described above.

 

Host:

 

Thorsten Zander: brain-computer interaction (04/15/2010)

Sponsor: Institute for Neural Computation

Affiliation: Technische Universitaet Berlin, and INC, SCCN

Time: 1100 -1200


Title/Abstract:

"Perspectives For BCI Technology In The Fields Of Human-Machine Systems And Neuroscience"

The introduction of modern methods from machine learning to the field of brain-computer interfaces (BCIs) has reduced the typically high level of effort required to use a BCI-based system, thereby increasing its range of usability, efficiency, and joy of use. I will present our work on the first hybrid BCI, combining gaze control with BCI, and the first passive BCI, which removes the need for focused volitional control, incorporated into a game-based human-machine system. The results show that BCI-based technology is capable of detecting covert aspects of user state, i.e., aspects not detectable from external measures of the user's behavior, for the optimization of human-machine systems. In particular, our work on passive BCI with SCCN investigated a covert aspect of user state by detecting bluffing in a game context. These results and their impact on cognitive neuroscience research and human-machine interactive systems demonstrate that BCI technology can be used beneficially beyond applications for neural prostheses, inspiring a broadening of the initially restricted definition and purposes of BCI.

Baernreuther B., Zander, Reissland, Kothe, Jatzev, Gaertner, Makeig S.: Access to covert aspects of user intentions: Detecting bluffing in a game context with a passive BCI. Fourth International BCI Meeting, Carmel, CA, June 2010.

Pfurtscheller, Allison, Bauernfeind, Brunner, Solis-Escalantes, Scherer Zander: The Hybrid BCI. Frontiers in Neuroprosthetics, 2010.

Zander T.O., Gaertner M., Kothe C., Vilimek R.: Combining Eye Gaze Input with a Brain-Computer Interface for Touchless Human-Computer Interaction. International Journal of Human-Computer Interaction, in press.

Zander T.O., Kothe C., Jatzev S., Gaertner M.: Enhancing Human-Computer Interaction with input from active and passive Brain-Computer Interfaces. In Tan, Nijholt (Eds.): Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction, in press.

 

Host:

 

Gabriel Silva: ERG and electrophysiology of the retina (04/01/2010)

Sponsor: Institute for Neural Computation

Affiliation: Jacobs Faculty Fellows Professor of Bioengineering, Departments of Bioengineering and Ophthalmology, UCSD

Time: 1100 -1200


Title/Abstract:

"The Electroretinogram And Electrophysiology Of The Retina: Theory And Practice"

Electroretinography (ERG) is a non-invasive method that allows measuring the global electrophysiological response of the neural sensory retina. It can be used both for studying neurophysiology and for characterizing and diagnosing diseases associated with neural retinal dysfunctions. Depending on the specific method used, the ERG can provide information on different cell types in the retina as population averages or more restricted geometric localizations. This talk will introduce some of the methods involved, focusing on neurobiological and engineering considerations, and will discuss the use of the ERG to computationally isolate the full time course of the pure photoreceptor neuron population response from the full field ERG.

 

Host:

 

 

Winter 2010

Yijun Wang, Yu-Te Wang and Tzyy-Ping Jung: wireless EEG BCI (03/25/2010)

Sponsor: Institute for Neural Computation

Affiliation: Swartz Center for Computational Neuroscience, INC, UCSD

Time: 1100 -1200


Title/Abstract:

"A Mobile, Wireless And Online EEG Brain-Computer Interface"

Transitioning brain-computer interfaces (BCI) from laboratory demonstration to real-life applications poses severe challenges to the BCI community [1][2]. With advances in biomedical sciences and electronic technologies, the development of mobile and online BCI has received increasing attention in the past decade. To implement a mobile BCI with online processing, a mobile terminal such as a mobile phone or a PDA presents an ideal platform for data transmission, signal processing, and feedback presentation. In this chalk talk we present an online BCI based on a mobile and wireless EEG acquisition module and a cell phone, and discuss implications of this BCI platform technology as an enabling technology for interactive cognitive neuroscience and clinical applications in neuroengineering.

[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, no. 2, pp. 767-791, 2002.

[2] Y. Wang, X. Gao, B. Hong, and S. Gao, "Practical designs of brain-computer interfaces based on the modulation of EEG rhythms", in B. Graimann, G. Pfurtscheller (Eds.) Invasive and Non-Invasive Brain-Computer Interfaces, Springer, The Frontiers Collection, 2009.

 

Host:

 

Gert Cauwenberghs: Towards Neocortical Vision In Silicon (02/18/2010)

Sponsor: Institute for Neural Computation

Affiliation: Institute for Neural Computation, UCSD

Time: 1230 -1330


Title/Abstract:

"Towards Neocortical Vision In Silicon"

We are embarking on an exciting journey in our continued and renewed efforts, with the DARPA Neovision2 program, towards reverse engineering the visual system in silicon. I will share the visions and plans of our team that spans the two coasts and the spectrum between neuroscience and neuroengineering. I will also briefly present a scalable approach to realizing locally dense and globally sparse connectivity in large-scale reconfigurable neuromorphic systems, towards a real-time and low-power silicon model of neocortical vision with over a million neurons and a billion synapses.

 

Host:

 

Terry Sejnowski: motor cortex dynamics (02/04/2010)

Sponsor: Institute for Neural Computation

Affiliation: Institute for Neural Computation, UCSD

Time: 1100 -1200


Title/Abstract:

"Motor Cortex Computes Dynamics In Spatial Reference Frames"

Although many neurons in the primary motor cortex (M1) project directly to the spinal cord, how they control movements is not yet understood. Some M1 neurons represent intrinsic dynamical variables such as muscle tensions, whereas other neurons code for extrinsic kinematic variables such as movement trajectories. Hiro Tanaka and I have reconciled these observations by showing that the equations of motion governing reaching simplify in spatial coordinates. The performance of human-machine interfaces might be improved by computing joint torques from neural activity in M1 using a spatial reference frame.
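To make the claim about reference frames concrete, the standard textbook relation between joint-space and endpoint (spatial) limb dynamics can be written as follows; this is a generic robotics formulation included only for orientation, not an equation taken from the talk. In joint coordinates $q$ the rigid-body dynamics are
$$\tau = M(q)\,\ddot q + C(q,\dot q)\,\dot q + g(q),$$
while in spatial (endpoint) coordinates $x = f(q)$, with Jacobian $J(q) = \partial f/\partial q$, the same dynamics take the operational-space form
$$F = \Lambda(x)\,\ddot x + \mu(x,\dot x)\,\dot x + p(x), \qquad \tau = J(q)^{\top} F, \qquad \Lambda = \bigl(J M^{-1} J^{\top}\bigr)^{-1},$$
so endpoint forces computed in a spatial reference frame map to joint torques through the single factor $J^{\top}$.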

 

Host:

 
