Chalk Talks
The INC chalk talk series meets as a forum for interactive exchange on all aspects of neural computation. The purpose of these meetings is to foster collaborative interactions among INC members and with colleagues across campus, and to stimulate new ideas and research initiatives. Each meeting features one of the core or affiliated INC faculty labs/groups, with informal presentation of late-breaking research and new research directions. The meetings are open to the community, and we encourage broad participation across campus. For further information, or to schedule a presentation, contact us.
When: Fall through Spring
Time: Usually 12:30 p.m. – 1:30 p.m., but may change
Location: San Diego Supercomputer Center, East Expansion; South Wing, Level B1, E129E; http://inc.ucsd.edu/contactus.html
Sponsored by: Brain Corporation (http://www.braincorporation.com) and Qualcomm Corporation (http://www.qualcomm.com)
Coming Up
Understanding tinnitus – in the brain and in the community
Anusha Yasoda-Mohan
Trinity College Institute for Neuroscience and School of Psychology at Trinity College Dublin
When: November 21, 2024; 12:30 pm - 1:30 pm
Where: Hybrid
In-person Viewing with Tea After: San Diego Supercomputer Center, East Expansion; South Wing, Level B1, E129E
Over Zoom: https://ucsd.zoom.us/j/2888083696
Abstract:
Tinnitus is the perception of a continuous sound without an external sound source. About 8-15% of the population of the UK and Ireland is affected, and 2-5% of those are severely disabled by it. Tinnitus is not simply an auditory percept but a complex interaction between sensory, cognitive, behavioural and emotional components. In this talk, Dr. Anusha Yasoda-Mohan will walk us through the exploration of tinnitus from a network perspective and how different brain regions interact to encode the chronic tinnitus percept. She will also talk about the reality of tinnitus in the community and her work in bringing this community together in Ireland.
Bio:
Dr. Anusha Yasoda-Mohan is a Global Atlantic Fellow for Equity in Brain Health and a Senior Postdoctoral Research Fellow at the Trinity College Institute for Neuroscience and School of Psychology at Trinity College Dublin. She is also a trained Indian classical and Bollywood dancer. Dr. Yasoda-Mohan is a multidisciplinary researcher with undergraduate and master's degrees in Biomedical Engineering, a PhD in Communication Sciences and Disorders, and postdoctoral training in Psychology. She primarily works with people with tinnitus (continuous ringing in the ears), investigating how brain networks communicate with one another to generate tinnitus using resting-state and task-based EEG. Currently, she is working on expanding the cognitive aspects of tinnitus and investigating whether there is a relationship between tinnitus and cognitive decline. In addition to studying phantom auditory perception, she is the Director of the International Tinnitus Research Initiative Foundation's dissemination and communication wing (TRI Academy), which strives to take tinnitus research and clinical practices to the wider tinnitus research community. She also leads a community for people living with tinnitus in Ireland called Tinnitus Éire, through which she strives to bring a sense of community and belonging to tinnitus sufferers. Additionally, she is the co-developer of Brain For Movement (BrainFM) – an education and awareness workshop aimed at communicating complex neuroscience topics through dance to primary school-age children. These efforts tie together with her vision to leverage the arts as a medium to both comprehend and communicate the workings of the brain.
Past Talks
2024-11-7
Representational drift: a peek under the hood of a continual learning machine
Timothy O’Leary
University of Cambridge
Abstract:
During learning, populations of neurons alter their connectivity and activity patterns, enabling the brain to construct a model of the external world. Conventional wisdom holds that the durability of such a model is reflected in the stability of neural responses and the stability of synaptic connections that form memory engrams. However, recent experimental findings have challenged this idea, revealing that neural population activity in circuits involved in sensory perception, motor planning and spatial memory continually changes over time during familiar behavioural tasks, with no overt task-related learning or behavioural change. This continual change implies significant redundancy in neural representations and is possibly a hallmark of continual learning. I will discuss work from our group and others on the causes and consequences of drift. We find that redundancy in circuit connectivity can make a task easier to learn, or even compensate for deficiencies in biological learning rules. If neuronal connections are subject to an unavoidable level of turnover, the level of plasticity required to optimally maintain a memory is generally lower than the total change due to turnover itself, predicting continual reconfiguration of an engram. In empirical data, we find cortical responses in sensorimotor tasks admit a relatively stable readout at the population level despite large changes in neural activity. We also find that drift is structured, and that its statistics may be well suited to an error-correcting readout. Our general hunch is that drift is not simply a result of biology being unreliable, but an inevitable result of continual learning, structured to preserve important memories.
Bio
Timothy O’Leary is Professor of Information Engineering and Neuroscience at the University of Cambridge. Originally trained as a pure mathematician, Timothy dropped out of a PhD on hyperbolic geometry to study the brain. After retraining as an experimental physiologist, he obtained his doctorate from the University of Edinburgh in experimental and computational neuroscience, subsequently joining Eve Marder’s laboratory as a research fellow.
Timothy’s research lies at the intersection between physiology, computation and control engineering. His goal is to understand how nervous systems self-organise, adapt and fail, and to connect these processes to diversity and variability in nervous system properties. He has worked as both an experimentalist and theoretician, on systems that span the scale from single ion channel dynamics to whole brain and behavior, and across invertebrate and vertebrate species. His group works closely with experimentalists to study neuromodulation, neural dynamics, and how sensorimotor information is represented in the brain, more recently focussing on how neural representations evolve over time. He approaches these problems from an unusual perspective, citing engineering principles as key to understanding the brain - and biology more widely.
2024-1-25
Uncovering continual learning mechanisms from the olfactory circuit in Drosophila
Yang Shen
Simons Center for Quantitative Biology in Cold Spring Harbor Laboratory
Abstract:
A key feature of intelligence is the ability to continuously adapt to environmental changes and acquire new skills while preserving previously acquired knowledge. This adaptive ability, however, remains a challenge for current artificial intelligence (AI) algorithms. For example, changing the training regime of artificial neural networks from a mixed to a sequential data input format markedly deteriorates their performance. This decrease in performance is attributed to a phenomenon known as “catastrophic forgetting”, where artificial neural networks “forget” previously learned information upon learning new content. In contrast, biological neural systems, exemplified by even simple organisms like fruit flies, exhibit robust adaptability and sustained learning abilities. These systems efficiently retain and build upon existing memories while integrating new information, an attribute critical for continual learning. Given the simplicity of the fruit fly’s neural circuitry, which has been characterized through extensive research, and the fly’s ability to perform complex tasks, it presents an ideal model for exploring neural information processing mechanisms. In this talk, I will delve into the intrinsic features of the olfactory circuit in the fly and discuss how insights from the study of fruit fly neurobiology can inform and enhance the development of more adaptable AI systems.
Bio:
Yang got her PhD in Chemical Physics from the University of Maryland, College Park. She is now a postdoctoral fellow at the Simons Center for Quantitative Biology in Cold Spring Harbor Laboratory. She was a recipient of the Swartz Foundation Postdoctoral Fellowship. Her research aims to understand how features of neural circuits, such as circuit architecture and synaptic plasticity rules, enable the brain to adapt and learn continuously and translate the biological insights into effective machine learning algorithms with better performance in continual learning, paving the way for more adaptable, efficient, and robust artificial agents.
2023-12-7
Phase-encoded fMRI tracks down brainstorms of natural language processing with sub-second precision
Ruey-Song Huang
University of Macau
Natural language processing unfolds information over time as spatially separated, multimodal, and interconnected neural processes. Existing noninvasive subtraction-based neuroimaging techniques cannot simultaneously achieve the spatial and temporal resolutions required to visualize ongoing information flows across the whole brain. In particular, it has long been held that the low sampling rate of functional magnetic resonance imaging (fMRI), loud scanner noise, and speaking-induced head motion make it nearly impossible to study overt language production with fMRI. We have developed rapid phase-encoded designs to fully exploit the temporal information latent in fMRI data, as well as to overcome scanner noise and head-motion challenges during continuous overt language tasks without sparse sampling. We captured real-time information flows as coherent hemodynamic (BOLD) waves traveling over the cortical surface during listening, reading-aloud, reciting, and oral cross-language interpreting tasks. We were able to observe the timing, location, direction, and surge of traveling waves in all language tasks, which were visualized as “brainstorms on brain weather maps.” The paths of hemodynamic traveling waves provide direct evidence for dual-stream models of visual and auditory systems as well as “logistics models” for crossmodal and cross-language processing. Specifically, we have tracked down the step-by-step processing of written or spoken sentences being first received by visual or auditory streams, carried across the language and domain-general cognitive regions, and finally delivered as overt speech monitored through the auditory cortex. This approach gives a complete picture of information flows across the brain during natural language functioning and real-time cognitive processing.
Bio:
Ruey-Song Huang received his PhD in Cognitive Science from UCSD and then worked as a postdoc and assistant/associate project scientist at INC. He is currently an Assistant Professor at the Centre for Cognitive and Brain Sciences and Department of Electrical and Computer Engineering at the University of Macau. In 2020, he set up the first research-dedicated 3T MRI facility in Macau. He then started two university-level initiatives, known in brief as the Addiction Project and the Language Project. In addition to the main talk on language mapping, he will present his ongoing research on several topics including language and music processing, risky decision making, multisensory integration, and sensorimotor mapping. Lastly, he will introduce the University of Macau Brain Atlas (UMBA), a surface-based functional brain atlas with layered multimodal and multi-language maps.
2023-11-30
Using Brain Waves to Read Neural Circuits
Gautam Agarwal
Claremont McKenna College
CLICK HERE to view the recorded talk
Abstract:
Action potentials isolated from single neurons (“spikes”) are thought to provide the most accurate readout of brain activity. In contrast, local field potentials (LFPs, or “brain waves”), which reflect pooled electrical activity of thousands of nearby neurons, are easier to measure than spikes but seem to offer only coarse-grained readouts of behavior. In this talk, I will describe how patterns in brain waves can be used to gain an astonishingly precise view of the contents of a neural circuit, using multi-electrode recordings from the hippocampus of rats navigating a maze. First, I will describe a simple algorithm that we have designed to identify informative spatial patterns in the LFP, allowing us to track the animal’s changing position within its environment. Next, I will describe the work of a student who used our algorithm to demonstrate that rather than being fixed, a rat’s internal map of space is constantly shifting in a coordinated manner. Together, these studies show how brain waves grant efficient and complete access to a neural code.
Bio:
Starting as an experimentalist studying the olfactory neuropil of flies, Gautam turned to theory, believing that it offered a unique language to express the staggering complexity of brains. Dr. Agarwal studied oscillations as a postdoc with Dr. Friedrich Sommer at the Redwood Center at UC Berkeley (2011-2014). Gautam then lived in Lisbon, where he worked with Dr. Zach Mainen to develop a behavioral task for deconstructing human intelligence. Returning to Berkeley right before the pandemic to continue his oscillatory studies, Gautam recently started as faculty at the Claremont Colleges, where he is thinking about ways of communicating theoretical neuroscience in the context of a liberal arts education.
2023-11-2
How do parallel visual pathways vie for control of visual behavior?
Pamela Reinagel
UC San Diego
CLICK HERE to view the recorded talk
In mammals, visual information from the retina flows to the brain in two main image-forming streams. The best-known and most-studied pathway flows through the thalamus (specifically the LGN) to primary visual cortex (V1). The other stream flows through the superior colliculus (SC), the homolog of the tectum of cold-blooded vertebrates. Considering that the tectal pathway is sufficient for all visually guided behavior in vertebrates that lack cortex, it should not be surprising that some visually guided tasks do not require V1 in mammals. In rodents, for example, V1 is not required for the escape response to looming stimuli, reporting the onset of a visual stimulus, or reporting the location of a visual stimulus. Other visual tasks require V1 in mammals, however. For example, orientation discrimination, image discrimination, and random-dot motion discrimination appear to be V1-dependent in rats. This poses a problem: what happens when visual stimuli present conflicting cues, such that SC would mandate one motor response while V1 directs the opposite? It turns out this has not been studied, because typical experimental designs drive only one or the other pathway. I will share our very preliminary results from a new behavioral task that creates such a conflict, and discuss our current hypothesis regarding the underlying neural mechanisms of the observed behavior.
Pamela Reinagel is an Associate Professor in the Neurobiology department at UCSD, where she started her lab in 2003. Using experimental (neurophysiology and behavior) and computational/theoretical approaches, her lab studies the neural coding of visual information, visually-guided behavior, and economic decision-making, primarily in rodents.
Lab website: www.ratrix.org
Relevant background for talk: Petruno, S, Clark, RE, and Reinagel, P (2013) Evidence that primary visual cortex is required for image orientation and motion discrimination by rats. PLoS One 8(2):e56543.
2023-10-12
Coupled spiking oscillators for computing
Erbin Qiu
UC San Diego
CLICK HERE to view the recorded talk
Oscillator-based computing, which employs networks of interacting oscillators for information processing, presents an efficient alternative to conventional computational algorithms executed on von Neumann architectures. We explore the dynamic behaviors of coupled spiking oscillators for computational purposes. The behaviors exhibited by coupled spiking oscillators are perplexing and defy the common traits seen in harmonic oscillators. When the capacitive coupling strength is increased, it results in stochastic disruptions of the alternating spiking sequence. Interestingly, we also exploit heat interaction among these oscillators for computing, replacing the conventional use of electricity. Our research focuses on the evolving synchronization patterns in thermally coupled nano-oscillators. Additionally, we demonstrate a variety of reconfigurable electrical behaviors that imitate those found in biological neurons, all made possible through heat interaction. This research paves the way for the development of scalable and energy-efficient spiking neural networks, thereby advancing the realm of brain-inspired computing.
Erbin Qiu is a fifth-year PhD candidate at UCSD working with Prof. Ivan Schuller on neuromorphic devices.
2023-8-10
Normative models of remapping in the hippocampal formation
Mikkel Lepperød
Simula Research Laboratory, University of Oslo, Norway, https://www.simula.no/people/mikkel
CLICK HERE to view the recorded talk
Abstract: When an animal encounters a novel environment, spatial cells such as grid cells, border cells, and place cells remap their spatial firing patterns yet seem to maintain their core functions, as reflected in their activity. This compositional generalization, the ability to apply learned knowledge to novel combinations of known components, is believed to be an essential factor in flexible natural intelligence. However, the mechanism supporting such flexibility remains elusive, both in artificial intelligence (AI) and neuroscience. To shed light on the underlying principles of spatial cell remapping, we employ normative modeling, a theoretical approach that formulates optimal solutions to a given problem. This approach allows us to predict how these cells should ideally respond to environmental changes, providing a benchmark against which to compare actual neural activity. We propose a formal definition of cognitive maps and two neural network implementations that support remapping grid cells and place cells across familiar environments. However, these models still need architectural improvements to learn novel environments in a continual learning setting. Our long-term goal is to inform the development of artificial intelligence systems that mimic the brain's ability to generalize from known to novel situations, a key challenge in the field.
Biography: Mikkel Elle Lepperød is a Research Scientist at the Simula Research Laboratory in Norway. He holds an MSc in Applied Mathematics and pursued a PhD in Neuroscience at CINPLA, University of Oslo. Following his PhD, Mikkel completed a postdoc focused on bio-inspired AI. Presently, he leads a group working at the intersection of neuroscience and AI.
2023-6-22
Stochastic coding: a conserved feature of odor representations and its implications for odor discrimination
Shyam Srinivasan
UC San Diego
CLICK HERE to view the recorded talk
In this talk, I will discuss the role of neuronal noise in enabling the discrimination of similar stimuli. Our analysis of stimulus responses in the olfactory cortex and mushroom body of mice and flies has shown that while a few cells respond reliably (the same way in every trial), most cells respond stochastically. I will show evidence that these stochastic cells differ between similar stimuli and, when combined with learning mechanisms, reduce overlap between stimulus representations, improving discrimination. Our findings may apply to other central circuits with similar architectures involved in learning and discrimination.
Bio:
Shyam received his PhD from the University of California, Irvine (UCI), where he used computational and molecular biology experiments to show how morphogen gradients specify regional boundaries in the developing brain. He then joined Chuck Stevens at the Salk Institute and KIBM, UC San Diego, where they used comparative anatomy with computation and theory to uncover design principles of visual, olfactory, and cerebellar circuits across species. He is currently working on examining conserved mechanisms of learning and discrimination in the brain.
2023-5-25
Contributions of posterior parietal cortex to coordinated movements
Eric Mooshagian
UC San Diego
CLICK HERE to view the recorded talk
Understanding how the brain coordinates the movement of multiple body parts is of fundamental importance to systems neuroscience. Yet the cortical representation of coordinated movements is not well understood. Research implementing unimanual movements makes it difficult to study the neural circuits involved in coordination. Because our eyes often look to where we are about to reach, eye and arm movement patterns are highly stereotyped. However, primates also use their arms in complex ways that frequently require bimanual coordination. A powerful way to study these interactions is with bimanual movements. When reaching to two objects, which one do you look at first? We know that visually guided movements depend, in part, on the posterior parietal cortex (PPC). Spatial representations of saccade and reach goals preferentially activate cells in the lateral intraparietal area (LIP) and the parietal reach region (PRR) in PPC, respectively.
In this talk, I will first discuss how we use bimanual reaching to investigate eye-hand coordination. Then, I will describe the use of this approach to address the hypothesis that PPC plays a causal role in saccade selection during eye-arm coordination. Next, I will describe the coding of limb movements in PRR and the integration of reach signals between the hemispheres in service of bimanual coordination. Finally, I will describe my current work, aimed at understanding the decision to reach with the left or right arm, to gain a mechanistic understanding of motor decision-making in the brain.
Bio: Eric Mooshagian is a research scientist in the Department of Cognitive Science at UCSD and a visiting scientist at the Center for Neurobiology of Vision at The Salk Institute.
2023-5-11
Hyperbolic geometry in natural stimuli and neural responses
Tatyana Sharpee
Salk Institute
CLICK HERE to view the recorded talk
Abstract: In this presentation, I will describe both theoretical reasons and experimental evidence that a broad range of biological data exhibit a latent low-dimensional hyperbolic geometry. This includes general patterns of gene expression, metabolic volatiles that serve as inputs for the sense of smell, and data ranging from neural responses in the brain to human perceptual responses. We have arrived at these results using a combination of topological and metric analyses. In the case of neural data, I will present data on how neural representations expand with experience, and show that the expansion matches the maximal limit on information that could be acquired in discrete sampling episodes.
Bio: Tatyana Sharpee received her PhD in condensed matter physics from Michigan State University studying under the supervision of Mark Dykman. After her PhD, she started to work in computational neuroscience at UCSF, where she developed statistical methods for analyzing neural responses to natural stimuli, which exhibit strong correlations and non-Gaussian effects. These methods made it possible to reveal new adaptation processes in the brain by comparing neural responses to white noise and natural stimuli. She started her independent research program at the Salk Institute for Biological Studies, where she is currently a Professor in the Computational Neurobiology Laboratory. Her group analyzes principles of information transmission in the brain and within cells. Dr. Sharpee is a fellow of the American Physical Society.
2023-4-20
Monostable Multivibrators – a novel class of artificial spiking neuron
Lars Keuninckx
imec
CLICK HERE to view the recorded talk
Abstract: An overemphasis on biological realism could become self-limiting for the field of neuromorphic engineering, since the underlying biological operating principles, even if understood well enough in detail, may simply not be transferable to the electronic hardware domain. After all, neurons are living cells first and only then computational units. Instead of trying to find ways to efficiently implement and network biologically-inspired neurons in electronic hardware, we reverse the question and ask which useful fundamental electronic building blocks are easy to implement and connect in large numbers and what are their computational properties. As a possible answer to this question, we present networks of monostable multivibrators (MMVs). These are simple timers that are straightforward to implement using counters in digital hardware. We show that large recurrent networks of MMVs subject to external stimuli exhibit a rich dynamical spiking activity that is governed by fundamentally different rules from those found in integrate-and-fire neuron networks. Since an MMV has only two digital inputs, excitatory and inhibitory, incoming spikes are simply logically OR-ed together. Thus, MMV networks do not require synaptic addition in the classical sense. We then explain and demonstrate how event-driven MMV networks can be modelled and trained using a modified backpropagation method, leading to extremely low power inference.
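For readers who want to experiment with the idea, here is a minimal discrete-time sketch of an MMV network as described above: each unit is a simple timer that, when idle, is triggered by the logical OR of its excitatory inputs unless an inhibitory input is present, and then emits a pulse of fixed length. The network size, wiring density, pulse length, and external drive are illustrative assumptions, not parameters from the talk.

    # Illustrative MMV network: each unit is a fixed-length timer triggered by the
    # OR of its excitatory inputs, vetoed by any inhibitory input (sketch only).
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, PULSE = 50, 1000, 8                    # units, time steps, pulse length
    exc = rng.random((N, N)) < 0.05              # exc[i, j]: unit j excites unit i
    inh = rng.random((N, N)) < 0.02              # inh[i, j]: unit j inhibits unit i
    timer = np.zeros(N, dtype=int)               # remaining pulse time per unit
    spikes = np.zeros((T, N), dtype=bool)

    for t in range(T):
        out = timer > 0                          # units currently emitting a pulse
        spikes[t] = out
        e_in = exc[:, out].any(axis=1) | (rng.random(N) < 0.01)  # OR-ed excitation plus sparse external drive
        i_in = inh[:, out].any(axis=1)           # OR-ed inhibition
        trigger = e_in & ~i_in & (timer == 0)    # idle, excited, and not inhibited
        timer[trigger] = PULSE                   # start the timer (the "pulse")
        timer[out] -= 1                          # count down already-running timers

    print("mean fraction of active units:", spikes.mean())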
Bio: Lars Keuninckx received a Master’s in Industrial Engineering in Electronics and Telecommunications from Hogeschool Gent, Ghent, Belgium, in 1996. He worked in industry for several years, designing electronics for the automotive, industrial, and medical fields, before continuing his studies at the Vrije Universiteit Brussel, Brussels, Belgium, where he obtained a B.Sc. in Physics in 2009 and a Ph.D. in engineering in 2016 in the Applied Physics group. He joined imec Leuven, Belgium in 2019, where his research focusses on low-power neuromorphic sensor systems. His interests include the applications of complex dynamical systems and nonconventional approaches to computing.
2023-3-9
Neuromorphic Sensor Fusion with Applications to Drone Navigation
Ali Safa
KU Leuven and imec, Belgium - https://www.researchgate.net/profile/Ali-Safa-5
CLICK HERE to view the recorded talk
Abstract: Over the past decades, neuromorphic engineering – which seeks to take close inspiration from the highly efficient inner workings of biological agents – has emerged as a promising path towards building resource- and power-efficient computational systems, operating as stand-alone agents at the extreme edge. At the same time, there has been a growing interest in building ubiquitous robotics systems, such as drones, by taking inspiration from nature. Neuromorphic engineering has therefore emerged as a well-suited paradigm for building autonomous agents, where compute resources (such as area and energy consumption) are typically limited. In addition to resource efficiency, neuromorphic learning techniques – such as spiking neural networks (SNNs) equipped with Hebbian plasticity – are expected to enable the design of robots that can jointly learn and act in real time, adapting to their changing environment, all with little area and energy consumption when running on neuromorphic chips. This is in contrast to deep learning techniques, where neural network training is extremely compute-expensive and usually carried out offline, using shuffled data sources not shown in their real-time order. Still, building bio-inspired computational agents that can adaptively learn and act using SNNs and local plasticity rules remains an open problem. In this talk, we will cover recent progress made in the design of such SNN learning architectures at imec & KU Leuven (Belgium), with applications to sensor fusion and drone navigation: from data classification and people detection to Simultaneous Localization and Mapping (SLAM) fusing event cameras and radars.
Bio: Ali Safa received the MSc degree in Electrical Engineering from the Brussels Faculty of Engineering (ULB-VUB), Brussels, Belgium. He joined imec and KU Leuven, Belgium in 2020, where he is pursuing a PhD at the intersection of neuromorphic computing, sensor fusion, and robotics for extreme edge applications. Currently, he is a visiting researcher in the Cauwenberghs lab at the Institute for Neural Computation, University of California at San Diego, La Jolla, CA, USA.
2023-1-26
Gravitational-wave observations of the Local Universe by high-performance computing
Maurice van Putten
Department of Physics and Astronomy, Sejong University, Korea
CLICK HERE to view the recorded talk
Abstract: LIGO-Virgo-KAGRA (LVK) offers a radically new window to the Local Universe in gravitational waves (GW) complementing observations in EM-radiation and neutrinos. Multi-messenger observations at their full potential critically depend on high-performance computing (HPC) for an unbiased view sans model assumptions by, e.g., FFT-based butterfly matched filtering. We report on this exaFLOP challenge approached by heterogeneous computing over LAN/WAN with dynamical load balancing by synaptic processing. For the first time, this reveals GW-emission during a cosmological gamma-ray burst (GRB), GRB170817A, identified with spin-down of a rapidly rotating Kerr black hole by GW-calorimetry. A fraction of Type Ib/c supernovae (parents to normal long GRBs) may likewise be loud from similar central engines. This outlook is of interest to LVK O4 observations starting mid-2023, alongside supernova surveys of the Local Universe such as the Zwicky Transient Facility (ZTF, Caltech). With enhanced sensitivity, high-throughput analysis with low-latency parameter estimation calls for wafer-scale HPC, currently pioneered by Cerebras, Sunnyvale. (van Putten & Della Valle, 2023, A&A, 669, A36, https://doi.org/10.1051/0004-6361/202142974)
Biography: Maurice H. P. M. van Putten is a Professor of Physics and Astronomy at Sejong University and an Associate Member of the Korea Institute for Advanced Study (KIAS). He received his Ph.D. from Caltech and held postdoctoral research positions at ITP/UCSB and the CRSR/Cornell. He held faculty positions at MIT, Nanjing University and IAS/CNRS-Orleans. His current research focus is on multimessenger emissions from rotating black holes as central engines of gamma-ray bursts and core-collapse supernovae, and the H0-tension problem in cosmology. He is a member of the LIGO-Virgo-KAGRA collaboration, LISA and THESEUS.
2022-11-3
A critical review on the physiological interpretation of independent component analysis (ICA) applied to EEG data
Makoto Miyakoshi
UC San Diego
CLICK HERE to view the recorded talk
Abstract: The Swartz Center for Computational Neuroscience (SCCN) has pioneered the use of independent component analysis (ICA) on human EEG data and played a central role in establishing the physiological interpretation of ICA. I will first review the premises of the physiological interpretation of ICA and the supporting evidence based on published works between 2007-2012. Then I will challenge it on the following points: (1) Dependency on power distribution across frequencies—or how component rejection introduces high-frequency artifacts; (2) The accompanying small patch hypothesis and related issues such as patch-size dependency on data SNR and depth bias; (3) The strong dimension reduction effect—the low dimensionality resulting from ICA may not necessarily reveal a genuine property of the data. I conclude that the physiological interpretation of ICA on EEG contains several inappropriate claims due to lack of evidence and misunderstandings, which need to be corrected by adding appropriate limitations to its use and interpretation for ICA to become a truly general EEG analysis tool.
Bio: Dr. Makoto Miyakoshi is an EEG researcher working at SCCN, Institute for Neural Computation, University of California San Diego. He obtained his bachelor's degree in philosophy in 2003 (Waseda University) and PhD in psychology in 2011 (Nagoya University). In the same year, he joined SCCN as a post-doc. In 2017, he was promoted to assistant project scientist at SCCN. His research interests include methodological development for EEG research as well as data analysis for various clinical and psychological projects; he has published 37 papers since 2020 across these areas. He is a developer of computational tools for signal processing, statistical tests, and data visualization. He is known for the SCCN Wiki page Makoto's preprocessing pipeline, a summary of more than 2,000 answers to technical questions posted to SCCN’s online mailing list. The page has been viewed over 217,000 times since October 2014.
2022-10-27
How well do neurons, humans, and artificial neural networks predict?
Sarah Marzen
Claremont Colleges
CLICK HERE to view the recorded talk
Abstract: Sensory prediction is thought to be vital to organisms, but few studies have tested how well organisms and parts of organisms efficiently predict their sensory input in an information-theoretic sense. In this talk, we report results on how well cultured neurons ("brain in a dish") and humans efficiently predict artificial stimuli. We find that both are efficient predictors of their artificial input. That leads to the question of why, and to answer this, we study artificial neural networks, finding that LSTMs show similarly efficient prediction but do not model how humans learn well. Instead, it appears that an existing model of cultured neurons and a model of humans as order-R Markov modelers explain their performance on these prediction tasks.
Bio: Sarah Marzen has been in the W. M. Keck Science Department at the Claremont Colleges since 2019. She was a Physics of Living Systems postdoctoral fellow at MIT, and before that, a graduate student in the physics department at UC Berkeley under the supervision of Professor Mike DeWeese. Her research lab has three arms: using machine learning to better understand biological data, interpreting biological organisms as machine learners, and modeling cognitive and social science data to understand which interventions are likely to produce societally positive results.
2022-10-20
Single cell sequencing and computational models for charting the diversity of brain cells
Eran Mukamel
UC San Diego
CLICK HERE to view the recorded talk
Abstract: Brains are made of neurons and glial cells with diverse connectivity, physiology, morphology, and gene expression. Cell type diversity is a hallmark of almost all brain circuits, and is characteristic of even the simplest nervous systems. I will discuss how single cell RNA and DNA sequencing have expanded our understanding of the extent of brain cell diversity. Data from single cell transcriptomics and epigenomics call for new approaches for large-scale multimodal integration, robust cluster estimation, and modeling.
Bio: Eran Mukamel is an Associate Professor of Cognitive Science at UCSD. The Mukamel lab studies neuronal epigenomics and cell type diversity, using computational and bioinformatic methods to understand how brain cells develop and maintain their unique functional identities. Eran completed a PhD in Physics at Stanford University, and postdoctoral fellowships at Harvard’s Center for Brain Science and at the Salk Institute.
2022-10-13
Phase in space: Spatiotemporal cortical dynamics
Bard Ermentrout
University of Pittsburgh
CLICK HERE to view the recorded talk
Abstract: We model various spatio-temporal patterns seen in band-pass filtered LFP in cortex during cognitive tasks and during rest. The experimental results often depict the patterns in terms of their phase, motivating an analysis of equations of the form:
∂Θ(X,t)/∂t = Ω(X) + ∫_D W(X−Y) H(Θ(Y,t) − Θ(X,t)) dY
where D is a one- or two-dimensional domain, Θ(X,t) is the phase at location X, and Ω(X) is the intrinsic frequency in the absence of coupling. W(X−Y) is the strength of interaction and depends only on the distance |X−Y|. H(ɸ) is a periodic interaction function that describes how one local oscillator influences another. We study plane waves in one dimension and rotating waves in 2D.
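As a concrete illustration (not taken from the talk), the sketch below integrates this phase equation on a one-dimensional ring with an assumed frequency gradient for Ω(X), a Gaussian coupling kernel for W, and H(ɸ) = sin(ɸ); with these example choices the discretized model settles into a traveling plane wave.

    # Discretized 1-D version of the phase model:
    # dTheta/dt(X) = Omega(X) + sum_Y W(X-Y) H(Theta(Y) - Theta(X))
    # Omega, W, and H below are example choices for illustration only.
    import numpy as np

    n, dt, steps = 200, 0.01, 5000
    x = np.linspace(0, 1, n, endpoint=False)
    omega = 1.0 + 0.2 * x                        # Omega(X): intrinsic frequency gradient
    dist = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(dist, 1 - dist)            # distance |X - Y| on a ring
    W = np.exp(-(dist / 0.05) ** 2) / n          # W(X - Y): local Gaussian coupling
    H = np.sin                                   # H(phi): periodic interaction function

    theta = 2 * np.pi * np.random.rand(n)        # random initial phases
    for _ in range(steps):
        coupling = (W * H(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = (theta + dt * (omega + coupling)) % (2 * np.pi)

    # After the transient, the spatial phase profile is close to linear in X,
    # i.e. a plane wave; its slope gives the wavenumber.
    print(np.round(np.diff(np.unwrap(theta))[:5], 3))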
Bio: PhD 1979 U Chicago (Jack Cowan). Postdoc 1979-82 NIH (John Rinzel). Faculty at the University of Pittsburgh since 1982.
2022-10-6
Mathematical modeling of human brain transport: from medical images to biophysical simulation
Marie E. Rognes
Simula Research Laboratory, Oslo, Norway
Abstract: Your brain has its own waterscape: whether you are reading, thinking or sleeping, fluid flows through or around the brain tissue, clearing waste in the process. These biophysical processes are crucial for the well-being and function of the brain. In spite of their importance we understand them but little, and mathematical and computational modeling could play a crucial role in gaining new insight. In this talk, I will give an overview of mathematical, mechanical and numerical approaches to understand mechanisms underlying solute transport in the human brain. Topics include uncertainty quantification and optimal control, fluid-structure interactions, and mixed finite element discretizations and preconditioning.
Bio: Dr. Marie E. Rognes is Research Professor in Scientific Computing and Numerical Analysis at Simula Research Laboratory, Oslo, Norway and a Visiting Scholar at the University of California San Diego. She joined Simula Research Laboratory in 2009 after receiving her Ph.D. from the University of Oslo that same year, led its Department for Biomedical Computing from 2012 to 2016, and currently leads a number of research projects focusing on mathematical modelling and numerical methods for brain mechanics, including an ERC Starting Grant in Mathematics (2017-2023). She won the 2015 Wilkinson Prize for Numerical Software, the 2018 Royal Norwegian Society of Sciences and Letters Prize for Young Researchers within the Natural Sciences, is a member of the Norwegian Academy of Technological Sciences, and a member of the FEniCS Steering Council.
2022-9-29
Delay Differential Analysis of EEG Data
Claudia Lainscsek
Salk Institute
CLICK HERE to view the recorded talk
Abstract: Delay differential analysis (DDA) is a time-domain analysis framework based on dynamical systems theory to identify important nonlinear features underlying brain signals. It combines differential embeddings with linear and nonlinear nonuniform functional delay embeddings. Inspired by Planck’s “natural units”, the DDA model maps experimental data onto a basis of natural embedding coordinates. Since this low dimensional basis is built on the nonlinear dynamical structure of the data, preprocessing of the data (e.g., filtering) is not necessary.
In this talk Claudia will explain the concepts of DDA and how it is applied to the characterization of iEEG (intracranial electroencephalography) data obtained from human patients with intractable epilepsy.
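As a rough illustration of the DDA idea, the sketch below fits a small delay-differential model, regressing a numerical derivative onto a few delayed values of the signal plus one nonlinear cross term, and returns the coefficients and fitting error as features. The particular delays and monomial terms are arbitrary examples, not the specific DDA models used in this work.

    # Toy delay differential analysis (DDA) sketch: fit dx/dt to a few delayed
    # terms of the signal and use the coefficients and error as features.
    # The delays and the choice of monomials are illustrative only.
    import numpy as np

    def dda_features(x, dt, tau1, tau2):
        """x: 1-D signal; tau1, tau2: delays in samples. Returns (coeffs, rmse)."""
        dxdt = np.gradient(x, dt)                      # numerical derivative
        lag = max(tau1, tau2)
        x1 = x[lag - tau1: len(x) - tau1]              # x(t - tau1)
        x2 = x[lag - tau2: len(x) - tau2]              # x(t - tau2)
        y = dxdt[lag:]                                 # dx/dt at time t
        A = np.column_stack([x1, x2, x1 * x2])         # example nonlinear basis
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        rmse = np.sqrt(np.mean((A @ coeffs - y) ** 2))
        return coeffs, rmse

    t = np.arange(0, 10, 0.001)
    sig = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
    print(dda_features(sig, dt=0.001, tau1=7, tau2=12))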
Bio: Dr. Claudia Lainscsek studied physics in Graz, Austria, and started working on global modeling from data during her PhD. After her PhD she moved to San Diego. She joined Terry Sejnowski's lab in 2010 and has since worked on DDA for various applications, such as Parkinson's EEG and movement data, schizophrenia EEG data, epilepsy iEEG data, and ECG (electrocardiogram) data.
2022-8-5
Fast Field-Programmable Coded Image Sensors for Versatile Low-Cost Computational Imaging
Roman Genov
University of Toronto
CLICK HERE to view the recorded talk
Abstract: Next-generation image sensors will be the eyes of future smart technologies. Today’s smartphone cameras are already very good and inexpensive. This is mainly because they use computational photography to digitally enhance images by means of software processing. The key ingredient in computational photography is the so-called “computational burst imaging” – when many shots are taken and combined into one enhanced image. However, this approach fails in other applications where fast motion or rapidly changing illumination are present, such as drones, automated vehicles, surveillance, robotics and augmented reality. So, next-generation image sensors will have to rely on a combination of fast imaging and computation to tolerate motion and maintain the same low cost as that of smartphones.
Our solution is a new class of image sensors, coded-exposure-pixel (CEP) image sensors, that are motion tolerant, low-cost, and versatile, and as such are well-suited for robust computational imaging. The high exposure rate, with over 30,000 exposures per second, is at least 100 times higher than that of the best cell phone camera. The readout speed is maintained beneficially low – at the standard video rate. As there is no high-speed video output, no expensive hardware is needed to handle it, so the cost can be less than 1% of the cost of conventional high-speed cameras. The slow readout also brings the benefits of low power dissipation and much lower required illumination. Additionally, our coded sensors are highly flexible in functionality and thus are application-agnostic. They are reconfigurable by the end-user in the field to implement one of many imaging techniques, simply by changing pixel codes in firmware. Application examples include single-shot structured-light 3D imaging, single-shot photometric-stereo 3D imaging, and single-shot high-dynamic-range imaging.
Biography: Roman Genov received the B.S. degree in electrical engineering from the Rochester Institute of Technology, NY, USA in 1996, and the M.S.E. and Ph.D. degrees in electrical and computer engineering from Johns Hopkins University, Baltimore, MD, USA in 1998 and 2003, respectively. He is currently a Professor in the Department of Electrical and Computer Engineering at the University of Toronto, Canada, where he is a Member of Electronics Group and Biomedical Engineering Group, and the Director of Intelligent Sensory Microsystems Laboratory. Dr. Genov's research interests are primarily in analog integrated circuits and systems for energy-constrained biological, medical, and consumer sensory applications. Dr. Genov is a co-recipient of Jack Kilby Award for Outstanding Student Paper at IEEE International Solid-State Circuits Conference, Best Paper Award of IEEE Transactions on Biomedical Circuits and Systems, Best Paper Award of IEEE Biomedical Circuits and Systems Conference, Best Student Paper Award of IEEE International Symposium on Circuits and Systems, Best Paper Award of IEEE Circuits and Systems Society Sensory Systems Technical Committee, Brian L. Barge Award for Excellence in Microsystems Integration, MEMSCAP Microsystems Design Award, DALSA Corporation Award for Excellence in Microsystems Innovation, and Canadian Institutes of Health Research Next Generation Award. He was a Technical Program co-chair at IEEE Biomedical Circuits and Systems Conference, a member of IEEE International Solid-State Circuits Conference International Program Committee, and a member of IEEE European Solid-State Circuits Conference Technical Program Committee. He was also an Associate Editor of IEEE Transactions on Circuits and Systems-II: Express Briefs and IEEE Signal Processing Letters, as well as a Guest Editor for IEEE Journal of Solid-State Circuits. Currently he is an Associate Editor of IEEE Transactions on Biomedical Circuits and Systems.
2022-5-26
Play fighting and the development of the social brain
Sergio Pellis
University of Lethbridge
CLICK HERE to view the recorded talk
Abstract: For rats and some other animals it has been demonstrated that deprivation of social play in the juvenile period has major repercussions on the development of social skills. In terms of mechanisms, studies on rats have shown that play in the juvenile period is organized in a way that provides the critical psychological experiences needed for social skill development, and this enhancement is mediated by play-induced alterations in the anatomy and function of those areas of the prefrontal cortex (PFC) that are involved in emotional regulation and social decision making. The medial prefrontal cortex (mPFC) is directly influenced by the experiences derived from play and leads to improved ability to coordinate actions with a partner. In contrast, play indirectly influences the development of the orbitofrontal cortex (OFC) by exposing the rats to multiple partners and leads to improved ability to modulate behavior based on the partner’s identity. The life history profiles of the changes in these two areas of the PFC reflect their differing contributions to social skills.
Bio: Sergio M. Pellis received his PhD in animal behavior/ethology in 1980 from Monash University, Australia. He spent 1982-1990 at the University of Illinois, Tel Aviv University and University of Florida, where he received post-doctoral training in behavioral neuroscience and movement analysis. In 1990, he joined the University of Lethbridge, where he is a professor of animal behavior and neuroscience. A central focus of his research is on the evolution, development, and neurobiology of play behavior.
2022-5-12
Emergent cognition: From synaptic plasticity to "place cells" and spatial memory
Mayank Mehta
Department of Physics & Astronomy, Neurology, ECE, UCLA
CLICK HERE to view the recorded talk
How does the brain create the perception of space? The prevailing hypothesis has been that a brain region called the hippocampus uses Hebbian synaptic plasticity to combine information from distal visual landmarks, resulting in "place cells" and a cognitive map. While extensive evidence supports parts of this complex hypothesis, the mechanisms of how perception of space emerges from coordinated plasticity across billions of synapses have remained a mystery. Electrophysiology in virtual reality and computational modeling has revealed surprising findings that address this, and point to deeper mysteries. The results have significant implications for VR users, and for the diagnosis and treatment of neural disorders, especially Alzheimer's.
2022-4-28
Acoustic stimulation of slow wave sleep and memory enhancement
Giovanni Santostasi
Deepwave
CLICK HERE to view the recorded talk
Abstract:
A discussion of a noninvasive, adaptive method to enhance the process of memory consolidation during slow-wave sleep.
This technology is based on personalized intervention using real-time information from a single-channel EEG and acoustic stimulation synchronized to the subject's slow-wave activity during deep sleep.
2022-4-14
Why AI is harder than we think
Melanie Mitchell
Santa Fe Institute
CLICK HERE to view the recorded talk
Abstract:
Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI Spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI Winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable—in short, more intelligent.
Bio:
Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).
2022-3-31
Integrative modeling of Paramecium, a “swimming neuron”
Romain Brette
Inserm
CLICK HERE to view the recorded talk
Paramecium is a unicellular organism that swims in fresh water using cilia. When it is stimulated (mechanically, chemically, optically, thermally, etc.), it often swims backward, then turns and swims forward again: this is called the avoiding reaction. This reaction is triggered by a calcium-based action potential. For this reason, it enjoyed a period of glory in the 1970s as a model organism for neuroscience. We have developed an integrative model that links electrophysiology and behavior, quantitatively constrained by experimental data. This model is a dynamical system coupled to the environment, which allows revisiting various neuroscientific themes (perception, adaptation, learning) in the context of an autonomous system, rather than within the stimulus-response paradigm.
2022-3-17
Basis Vectors of the Central Nervous System
Gordon Kruberg
CLICK HERE to view the recorded talk
The concept of a tuning curve is at the heart of our understanding of neurons: one neuron's firing pattern carries meaning, expressed in firing rate, spike interval, or both. We use this, in turn, to interpret collections of neurons, applying semantics to arrays of activity. However, neurons with multiple encodings have been characterized throughout the brain; their firing rate, and even spike timing, may carry multiple possible meanings to their target neurons. This is particularly true where encoding is done by position within a sequence of spikes, by a cell subject to contextual remapping.
I am interested in how to understand the basis vectors of neuronal activity when we model firing rate and spike timing. I will discuss bimodal signaling, decoding by target cells, and raise some questions about transform operations in target state spaces. I will also discuss how this is relevant to sequences encoded in theta and gamma waves. I think these are relevant to our core understanding of central nervous system operation, and I look forward to your feedback in the chalk-talk forum.
2022-3-3
Physical reservoir computing for embodied intelligence
Kohei Nakajima
University of Tokyo
CLICK HERE to view the recorded talk
Dynamical systems can be used as an information processing device, and reservoir computing (RC) is one of the recent approaches that can explore this perspective in practice. In this framework, a low-dimensional input is projected to high-dimensional dynamical systems, which are typically referred to as a reservoir. If the dynamics of the reservoir involve adequate nonlinearity and memory, then emulating the nonlinear dynamical systems only requires adding a linear, static readout from the high-dimensional state space of the reservoir. Because of its generic nature, RC is not limited to digital simulations of neural networks, and any high-dimensional dynamical system can serve as a reservoir if it has the appropriate properties. The approach using a physical entity rather than abstract computational units as a reservoir is called physical reservoir computing (PRC). Its various engineering applications have been proposed recently in all ranges of physics, from mechanical to quantum and photonics scales. In this presentation, the focus will particularly be on how the RC/PRC framework can provide a novel view of embodied intelligence and soft robotics.
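For readers unfamiliar with the framework, here is a minimal echo state network sketch of the reservoir computing recipe described above: a fixed, random, high-dimensional dynamical system driven by the input, plus a linear static readout trained by least squares. The reservoir size, spectral radius, washout length, and toy prediction task are illustrative assumptions, not details from the talk.

    # Minimal echo state network: only the linear readout is trained.
    import numpy as np

    rng = np.random.default_rng(1)
    n_res, T = 300, 3000
    u = np.sin(0.2 * np.arange(T + 1))               # example 1-D input stream
    target = u[1:]                                   # task: predict the next input
    W_in = rng.uniform(-0.5, 0.5, n_res)             # input weights (fixed, random)
    W = rng.normal(0, 1, (n_res, n_res))             # recurrent weights (fixed, random)
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

    states = np.zeros((T, n_res))
    x = np.zeros(n_res)
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])             # reservoir dynamics (never trained)
        states[t] = x

    washout = 200                                    # discard the initial transient
    W_out, *_ = np.linalg.lstsq(states[washout:], target[washout:], rcond=None)
    pred = states[washout:] @ W_out                  # linear, static readout
    print("readout RMSE:", np.sqrt(np.mean((pred - target[washout:]) ** 2)))

In physical reservoir computing, the tanh network above would simply be replaced by recorded states of a physical system (a soft robot body, a photonic or quantum device) while the trained readout stays the same.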
2022-2-17
The future of artificial intelligence: 3-D silicon brain
Kwabena Boahen
Stanford University
CLICK HERE to view the recorded talk
Artificial Intelligence requires high performance computing by very large-scale integrated circuits. Integrated circuits have continually improved their space and energy efficiency by tiling transistors ever more densely in two dimensions as computing units. The number of transistors crammed onto a silicon chip still doubles every two years, but now most of the energy budget goes for communication between computing units, thus reducing the benefits of further miniaturization. Stacking 2-D integrated circuits in the third dimension shortens wire length and cuts communication costs, thus conserving energy for computation. But stacking reduces surface area for dissipating heat, thereby restricting a 3-D processor to serial, rather than parallel operation. Here I propose a fundamental solution: sparsify and enrich signals by exchanging spatial patterns of binary-valued signals (e.g., high vs. low voltages) for spatiotemporal sequences of unary-valued signals (e.g., voltage pulses with fixed amplitude). Instead of a binary signal from a computing unit representing a 0 or a 1, a unary signal from an entire layer of, say, 1,000 units, would represent one of 1,000 different symbols. A sequence of 10 such signals would represent 10 digits of a base-1,000 number. Computing with these spatiotemporal sequences would require exchanging logic gates for devices that weight an input based on where and when it is received. These dendrite-like devices could allow a 3-D silicon brain to scale in energy and heat linearly with the number of silicon neurons—like a biological brain––and thus operate in parallel.
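To make the arithmetic concrete, here is a toy illustration (not from the talk) of the proposed code: at each time step a single unit out of 1,000 fires, so its index is one base-1,000 digit, and a sequence of 10 such steps spells out a 10-digit base-1,000 number.

    # Toy illustration of the unary (one-hot) spatiotemporal code described above.
    def encode_unary(value, base=1000, length=10):
        """Return the sequence of firing-unit indices (one per time step)."""
        digits = []
        for _ in range(length):
            digits.append(value % base)
            value //= base
        return digits[::-1]              # most significant digit first

    def decode_unary(indices, base=1000):
        value = 0
        for d in indices:
            value = value * base + d
        return value

    seq = encode_unary(123_456_789_012)
    print(seq, decode_unary(seq) == 123_456_789_012)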
2022-2-3
Reduced, biophysically based, models for neurons to use as computationally efficient elements of large functional biological networks
Henry Abarbanel
Scripps Institution of Oceanography/Department of Physics, UC San Diego
CLICK HERE to view the recorded talk
Using a combination of methods from applied mathematics and nonlinear dynamics, we present a constructive way to give a discrete time dynamical rule that accurately forecasts the voltage across a neuron cell membrane. This is the only quantity required to build a biological network of realistic neurons. The construction uses simulated "data" or observed biophysical data alone to develop the dynamical map. We call this data driven forecasting (DDF). The method is described in detail at first using "data" from simple neuron models and then using observed neurobiological data from laboratory experiments. It provides accurate forecasting of observed quantities in each setting.
In an example where a detailed Hodgkin-Huxley (HH) model was developed using data assimilation from laboratory observations, the DDF neuron runs an order of magnitude faster than the HH version in forecasting the important neuron voltage time course. As the computation required for a network of N nodes will be faster by about a factor of 10N using DDF neurons, this will permit building and analyzing the very large networks desired to address realistic biological questions using elements determined via the biophysics of the component neurons.
If time permits, we will describe how one may use the DDF idea to substantially reduce the geophysical computations required for regional numerical weather forecasting.
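As a loose illustration of the DDF idea (not Abarbanel's specific construction), the sketch below learns a discrete-time forecasting map from a time-delay embedding of an observed voltage-like signal, using a radial-basis-function expansion fit by ridge regression, and then iterates the map forward. The embedding dimension, number of centers, kernel width, and test signal are all assumptions for the example.

    # Hedged data-driven forecasting (DDF) sketch: learn a discrete-time map from
    # delay vectors of the observed signal to its next value, then iterate it.
    import numpy as np

    def fit_ddf(v, dim=3, n_centers=200, sigma=1.0, ridge=1e-6):
        # Delay vectors [v(t), v(t-1), ..., v(t-dim+1)] and next-step targets v(t+1).
        X = np.column_stack([v[dim - 1 - k: len(v) - 1 - k] for k in range(dim)])
        y = v[dim:]
        centers = X[np.random.choice(len(X), n_centers, replace=False)]

        def phi(Z):                                   # RBF features of delay vectors
            d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        P = phi(X)                                    # design matrix
        w = np.linalg.solve(P.T @ P + ridge * np.eye(n_centers), P.T @ y)
        return phi, w

    def forecast(history, steps, phi, w):
        v = list(history)                             # the most recent `dim` samples
        for _ in range(steps):
            z = np.array(v[-1:-len(history) - 1:-1])[None, :]   # (v_t, v_{t-1}, ...)
            v.append(float(phi(z) @ w))               # iterate the learned map
        return np.array(v[len(history):])

    t = np.arange(0, 60, 0.05)
    v_obs = np.sin(t) + 0.5 * np.sin(2.3 * t)         # stand-in for a recorded voltage
    phi, w = fit_ddf(v_obs[:1000])
    print(forecast(v_obs[997:1000], steps=5, phi=phi, w=w))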
2022-1-20
The PSYONIC Ability Hand - Advances in Commercial Sensorimotor Hand Prostheses
Aadeel Akhtar
Psyonic
CLICK HERE to view the recorded talk
Commercially available upper limb prostheses have been far behind the state-of-the-art research that has been developed at academic institutions around the world. The Ability Hand was developed to take advances in soft robotics and sensorimotor prosthetics and make it available and accessible to people with upper limb amputations in the US and abroad. The Ability Hand is a multiarticulated bionic hand that is the fastest on the market, robust to impacts, and gives users touch feedback. It is also covered by Medicare in the US. This talk will detail the development of the Ability Hand, its current capabilities, and further advancements that will be coming in the near future.
2022-1-13
Voyager - Exploring Habana processor-based AI focused hardware for science and engineering
Amitava Majumdar
San Diego Supercomputer Center, UC San Diego
CLICK HERE to view the recorded talk
The NSF-funded Voyager machine at the San Diego Supercomputer Center (SDSC) will provide Intel’s Habana Labs artificial intelligence (AI) training and inference accelerators to enable high-performance, high-efficiency AI focused research for a wide range of science and engineering domains. Voyager will have a combination of Habana AI processors and Intel Xeon Scalable CPUs in Supermicro servers. The Voyager system will contain 42 Supermicro X12 Gaudi AI Training Systems with 336 Habana Gaudi processors—designed for scaling large AI training applications—and 16 Habana Goya inference cards on two nodes, to power AI inference models. Habana Gaudi AI training processors and Goya inference cards are architected to drive performance and efficiency in AI operations. Gaudi AI processors natively integrate ten 100-Gigabit Ethernet ports of RoCE v2 (RDMA over Converged Ethernet) on-chip, enabling flexibility of scaling and avoidance of throughput bottlenecks that can limit scaling capacity. These will provide data scientists and researchers having access to Voyager with the flexibility to customize models with programmable Tensor Processor Cores and kernel libraries, and ease implementation with Habana’s SynapseAI Software platform, which supports popular machine learning frameworks and AI models for applications such as vision, natural language processing and recommendation systems. The first three years of Voyager’s operation will be the Testbed Phase, during which SDSC will work with select research teams from astronomy, climate sciences, chemistry, particle physics, biosciences and other fields to gain AI experience and insights leveraging Voyager’s unique features. Throughout the Testbed Phase, SDSC will share experiences with the AI research community and develop documentation that will serve as a resource for an expanded user base in years four and five. The INC chalk talk will describe the high-level architecture of the Voyager machine and the plans for various AI applications from science and engineering that will be implemented initially on the machine. We are interested in talking to researchers who have AI applications that can be implemented on the Voyager machine in collaboration with the Voyager team at SDSC.
2021-12-9
An Overview of Georgia Tech Analog and Neuromorphic Computing
Jennifer Hasler Georgia Tech
CLICK HERE to view the recorded talk
Abstract: This talk is meant to review Georgia Tech efforts in analog, neuromorphic, and physical computing as well as generate topics for discussion. Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are meeting hard physical limits. These silicon systems mimic extremely energy efficient neural computing structures, potentially both for solving engineering applications as well as understanding neural computation.
Neuromorphic techniques are of increasing interest along with other physical computing directions, such as analog, quantum, and optical computation.
Understanding and developing a computational theory of physical computation became relevant with the advent of large-scale Field-Programmable Analog Arrays (FPAAs) as well as other recent physical computing implementations. Digital computation is enabled by a framework developed over the last 80 years. Analog computing techniques result in a 1000x improvement in power or energy efficiency, and a 100x improvement in area efficiency, compared to digital computation.
2021-11-18
Neurochiplets and Silicon Brains in 3D CMOS
Andreas Andreou Johns Hopkins University
CLICK HERE to view the recorded talk
Abstract: The brain is without doubt the world's most powerful computer for solving problems in machine perception (vision, speech, language) and machine learning. Brains exist in 2+delta-dimensional physical space yet are capable of efficiently solving problems in higher dimensional spaces. We believe that the network structure of the brain architecture in 2+delta dimensional space contributes significantly to its effectiveness and energy efficiency in cognition. At all levels of the central nervous system, from the retina to the cortex, the tissue is organized in a hierarchy of layers. In certain layers there is an abundance of axons, the physical structures in neurons responsible for "communication," while others are densely packed with cell bodies and dendrites, what one would consider the "computational" structures in the tissue. Furthermore, the layers are tightly coupled vertically through what is termed in biology a "column." Over the last half century computer scientists, architects and engineers have envisioned building computers that match the parallel processing capabilities of biological brains for perception and cognitive computing.
Three-dimensional integration through wafer stacking and 2.5D assembly is an alternative to technology scaling and monolithic integration that achieves an increase in the number of transistors and short-range interconnect per unit area, thus improving energy efficiency. To address the challenges of rapid and flexible prototyping of large bioinspired systems for cognitive computing, we abstract the 2+delta brain architectonics to provide guidance toward the future development of silicon integrated systems for machine perception and learning that would be as effective and as efficient as biological brains. The Neurochiplets SOC 2.5D architecture relies on this alternative approach to scaling, driven primarily by cost and by flexible, rapid system-level integration. 2.5D integration on a silicon interposer will interface the memory to neuromorphic chiplets and a commodity FPGA and processors (RISC-V) for operating system support and data I/O. In this talk I will discuss the design of three generations of bio-inspired 3D CMOS SOCs designed over a period of 15 years. I will present experimental data from the architectures and discuss successes and failures.
Biography: Andreas G. Andreou is a professor of electrical and computer engineering, computer science and the Whitaker Biomedical Engineering Institute, at Johns Hopkins University. Andreou is the co-founder of the Johns Hopkins University Center for Language and Speech Processing. Research in the Andreou lab is aimed at brain inspired microsystems for sensory information and human language processing. Notable microsystems achievements over the last 25 years, include a contrast sensitive silicon retina, the first CMOS polarization sensitive imager, silicon rods in standard foundry CMOS for single photon detection, hybrid silicon/silicone chip-scale incubator, and a large-scale mixed analog/digital associative processor for character recognition. Significant algorithmic research contributions for speech recognition include the vocal tract normalization technique and heteroscedastic linear discriminant analysis, a derivation and generalization of Fisher discriminants in the maximum likelihood framework. In 1996 Andreou was elected as an IEEE Fellow, “for his contribution in energy efficient sensory Microsystems.”
2021-10-27
Neuromorphic meets neuroprosthesis
Nitish Thakor Johns Hopkins University
CLICK HERE to view the recorded talk
Abstract: Neuroprosthetics is one of the most explored subtopics of the brain-machine interface field. So far, attention has gone mostly to decoding neural signals and achieving prosthetic motor function and control. The inverse problem, developing tactile sensors and encoding and providing sensory feedback and perception, has been less studied and is now emerging as an active area of research. Tactile sensations are generally transduced by four receptor types (SA and FA, types I and II) and pain-receptive nerve endings. Further encoding is done by the cuneate nucleus and the sensory cortex for the final sensory perception and cognition. I will first present our modeling work on the tactile system and how encoding may work in this multi-level nervous system. Next, I will present the design of bioinspired tactile sensors, the e-dermis, that produce tactile signals encoded as spiking responses of the receptors. Next, I will present the encoding and classifiers developed from the spiking, neuromorphic tactile sensor used for grasping and palpation applications. I will present results on tactile-sensor-enabled grasp and palpation of textures by a robotic/prosthetic hand. Finally, this tactile receptor system is tested on a prosthetic hand for sensory feedback to an amputee. I will present our results on sensory perception and cognition by amputees and describe the path toward building a functional sensory-enabled prosthesis. I will also share ongoing application examples, emerging solutions and future directions.
2021-8-19
Photonics for neuromorphic computing and artificial intelligence
Bhavin Shastri Queen's University, Ontario, Canada
CLICK HERE to view the recorded talk
Abstract: Artificial intelligence based on neural networks has enabled applications in many fields (e.g. medicine, finance, autonomous vehicles). Software implementations of neural networks on conventional computers are limited in speed and energy efficiency. Neuromorphic engineering aims to build processors in which the hardware mimics neurons and synapses in the brain for distributed and parallel processing. Neuromorphic engineering enabled by silicon photonics can offer sub-nanosecond latencies and can extend the domain of artificial intelligence and neuromorphic computing applications to machine learning acceleration (vector-matrix multiplications, inference and ultrafast training), nonlinear programming (nonlinear optimization and differential equation solving) and intelligent signal processing (wideband RF and fiber-optic communications). We will discuss current progress and challenges of neuromorphic photonics in scaling to practical systems.
Biography: Prof. Bhavin J. Shastri is an Assistant Professor of Engineering Physics at Queen’s University and a Faculty Affiliate at the Vector Institute. He was an Associate Research Scholar (2016-2018) and Banting and NSERC Postdoctoral Fellow (2012-2016) at Princeton University. He received a PhD degree in electrical engineering (photonics) from McGill University in 2012. He is a co-author of the book Neuromorphic Photonics. Dr. Shastri is the winner of the 2020 IUPAP Young Scientist Prize in Optics "for his pioneering contributions to neuromorphic photonics" from the ICO. He is a Senior Member of OSA and IEEE.
2021-8-5
Electronic Brain-Machine Interfaces for Sensory Encoding
Xilin Liu University of Toronto
CLICK HERE to view the recorded talk
Abstract: Communicating with brains directly is among the most thrilling technological advancements in our era. Recent brain-machine interface research has made substantial progress on acquiring and decoding neural signals. However, sending signals back to the brains, e.g. encoding sensation and perception, remains a significant challenge. In this talk, I will present my research on electronic brain-machine interface design to tackle this challenge. Specifically, I will describe the integrated circuits (IC) and system design of an innovative brain-machine interface system that enables closed-loop sensory encoding experiments in freely behaving animals. Through collaboration with neuroscientists, I have successfully developed and validated miniaturized sensory neuroprosthetics in non-human primates and rodents. This technology holds great promise in restoring the sensory communication of patients suffering from paralysis, spinal cord injuries, various brain injuries and degenerative conditions. Moreover, the closed-loop brain-machine interfacing paradigm explored in this work is promising as an opportunity for future neuroscience research and clinical therapeutics. I will conclude this talk with my vision and future research plans to continue advancing the frontiers of BMI and IC technologies.
Bio: Xilin Liu is an Assistant Professor (starting Fall 2021) at the University of Toronto, Canada. He received his Ph.D. degree from the University of Pennsylvania, USA, in 2017. Before joining the University of Toronto, he worked at Qualcomm Inc., USA. His research interests include mixed-signal integrated circuits (IC), algorithms and system design for emerging applications, especially brain-machine interfaces. He received the IEEE Solid-State Circuits Society (SSCS) Predoctoral Achievement Award in 2016. His first-author papers have been recognized with the Best Student Paper Award at the 2017 International Symposium on Circuits and Systems (ISCAS), the Best Paper Award (1st place) at the 2015 Biomedical Circuits and Systems Conference (BioCAS), and the Best Paper Award of the biomedical track at the 2014 ISCAS. He was also the recipient of the Student-Research Preview Award at the 2014 IEEE International Solid-State Circuits Conference (ISSCC). His industrial experience includes contributions to a series of top-tier IC products including the world's first commercial 5G chipset.
2021-6-17
Material and devices for unconventional computing
Tamalika Banerjee University of Groningen (CogniGron)
Materials and devices for energy-efficient hardware are actively researched for computing tasks beyond von Neumann architectures. This necessitates materials whose characteristics can be controllably changed by an external stimulus such as temperature, voltage, current, electric field, magnetic field, or spin-orbit torque. The resistance states of such devices typically exhibit multistate behavior for the same control stimulus, and in this way information can be stored as conductance. This yields computing primitives relevant for unconventional computing, such as those used in neuromorphic, probabilistic and quantum computing schemes.
In this context, we have exploited the rich phase space intrinsic to complex oxides for electronic property tunability, primarily of multistate resistance, by electric field and magnetization control. The control is achieved by designs that rely on the interplay between the spin, charge and lattice degrees of freedom in these material systems. I will discuss a couple of memristive device types where tailoring of the energy landscape enables multistate resistance. The first work that I will discuss is based on interface-based memristive devices on an unconventional oxide semiconductor, Nb-doped SrTiO3. We have demonstrated that, using magnetic electrodes, a tailored energy landscape at the Schottky interface exhibits multi-level switching between highly resistive states by tuning the interface electric field. The resistance variations are up to three orders of magnitude with read powers in the nW regime. In the second type of devices, we utilize the crystal orientation and show that the magnetic anisotropy in tailored SrRuO3 (SRO) ferromagnetic layers can be tuned to exhibit either a perfect or a slightly tilted perpendicular magnetic anisotropy (PMA). We show that the strong magnetocrystalline anisotropy in SRO not only allows for the design of a perpendicular magnetic anisotropy in such devices but also enables the tailoring of easy axes at controlled tilt angles from the surface normal, for probabilistic as well as deterministic switching, with relative ease. Our findings in tailoring the anisotropy potentially open avenues for probabilistic and deterministic current-induced magnetization switching, with substantial control in such solid-state devices. The benefit of this approach lies in the simplicity of the device design and in its scalability, compared to conventional PMA devices whose operation relies on a multitude of layers and which raise concerns about thermal stability when downsized.
The design considerations for different tasks utilizing such materials remain an open question that we would like to explore and discuss for unconventional computing schemes.
2021-6-3
Robust Forecasting through Random "Reservoir" Networks
Jason Platt and Randall Clark UC San Diego
CLICK HERE to view the recorded talk
Reservoir computers are a form of recurrent neural network (RNN) that may be used for forecasting from time series data. These can be multivariate or multidimensional data from a diverse set of physical processes, enabling applications from the geosciences to biophysics. As with all RNNs, selecting the hyperparameters for the reservoir network presents a challenge when training on a new input system. We present a systematic method that gives direction in designing and evaluating the architecture and hyperparameters of a reservoir computer—with the goal of achieving accurate prediction/forecasting after training—based on generalized synchronization and tools from dynamical systems theory. Furthermore, we provide a metric for robust forecasting using the geometry of the phase space dynamics and the reproduction of the input system's Lyapunov Exponent spectrum. The traditional ML approach of using a test dataset is not sufficient for dynamical systems forecasting. These techniques open the possibility of using data driven methods in a wider variety of tasks and we present specific results for neurobiological data.
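For readers unfamiliar with reservoir computing, the following minimal echo-state-network sketch (not the speakers' code) shows the basic construction: a fixed random recurrent network is driven by the input series and only a linear readout is trained by ridge regression. The spectral radius, reservoir size, and regularization below are arbitrary illustrative choices; the talk's contribution is precisely about how to choose such quantities in a principled way.

import numpy as np

rng = np.random.default_rng(1)

# Input time series to forecast one step ahead (a stand-in for neurobiological data).
t = np.arange(0, 60, 0.02)
u = np.sin(t) * np.cos(0.31 * t)

N = 300                                              # reservoir size
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # rescale spectral radius to 0.9

# Drive the reservoir and collect its states.
x = np.zeros(N)
states = []
for u_t in u[:-1]:
    x = np.tanh(W @ x + W_in[:, 0] * u_t)
    states.append(x.copy())
X = np.array(states)                                 # shape (T-1, N)
y = u[1:]                                            # one-step-ahead targets

# Train the linear readout with ridge regression.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)

# Autonomous forecast: feed predictions back in as input.
preds = []
u_t = u[-1]
for _ in range(200):
    x = np.tanh(W @ x + W_in[:, 0] * u_t)
    u_t = x @ W_out
    preds.append(u_t)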
2021-5-20
Experimental control architecture of Humans for Robots
Rupert Young Perceptual Robots
CLICK HERE to view the recorded talk
Feedback control systems are simple, though underrated, self-correcting, adaptive mechanisms. This talk will discuss how such systems, which in particular control their perceptions, form the basis for behaviour and intelligence within living systems. When arranged in hierarchies they provide powerful, dynamic solutions to complex behavioural scenarios without the need for internal predictive models. I will show some demonstrations of the approach applied to simulated and real robotic systems and present some new, preliminary work on how the architectures can self-organise through evolutionary processes.
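As a toy illustration of the "control of perception" idea (a sketch of the general principle only, not Perceptual Robots' software), the loop below stacks two proportional controllers: an upper level controls a perceived position by setting the reference for a lower level that controls perceived velocity, and neither level uses a predictive model of the environment. All gains and the point-mass environment are invented for the example.

# Minimal two-level perceptual control loop: each level acts only to reduce the
# error between its own perception and the reference it is given.
dt = 0.01
pos, vel = 0.0, 0.0              # simple point-mass "environment"
position_goal = 1.0              # reference for the top level

K_pos, K_vel = 2.0, 5.0          # illustrative gains

for step in range(2000):
    # Top level: perceives position, outputs a velocity reference for the level below.
    perceived_pos = pos
    vel_reference = K_pos * (position_goal - perceived_pos)

    # Bottom level: perceives velocity, outputs a force to reduce its own error.
    perceived_vel = vel
    force = K_vel * (vel_reference - perceived_vel)

    # Environment update (unknown to the controllers): damped point mass.
    vel += dt * (force - 0.5 * vel)
    pos += dt * vel

print(f"final position: {pos:.3f}")   # settles near the goal without a forward model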
2021-5-6
Emulating billions of spiking neurons on hardware
André van Schaik Western Sydney University
In this chalk talk, I will describe some of our work on large scale emulation of spiking neural networks on reconfigurable hardware. On a high-end BittWare 520N-MX FPGA board with 192M on-chip SRAM, 16GB High Bandwidth Memory and 256GB DDR4 memory, which costs around $15k, we can simulate up to 16 billion current-based LIF neurons with 2 trillion STDP synapses (1-bit weights). At a 1ms time step and assuming a 1% activity rate, this would run approximately 200 times slower than real time. More realistically, we can simulate on this board 128 million neurons, each with 4096 STDP synapses and 4-bit weights, in real time, again assuming a 1% activity rate in the network. Even on much cheaper boards that cost only a few hundred dollars, we can simulate millions of neurons and billions of synapses in real time. I will describe some of the tricks and shortcuts we have used to make this possible.
However, my main purpose of the chalk talk is to discuss two questions with you: (1) what don’t we have in our system that you would need to make it useful; and (2) the more fundamental one – what should we be emulating with such hardware?
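For orientation, the update being emulated billions of times over is essentially the one below: a current-based leaky integrate-and-fire neuron stepped at 1 ms, written as a plain Python sketch with arbitrarily chosen constants rather than as the FPGA implementation itself.

import numpy as np

dt = 1e-3                # 1 ms time step, as in the talk
tau_m = 20e-3            # membrane time constant (illustrative)
v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3
r_m = 1e8                # membrane resistance (illustrative)

rng = np.random.default_rng(0)
n_steps = 1000
i_syn = 2.2e-10 + 2e-11 * rng.standard_normal(n_steps)   # noisy input current

v = v_rest
spikes = []
for k in range(n_steps):
    # Current-based LIF update: leak toward rest plus synaptic drive.
    v += (dt / tau_m) * (v_rest - v + r_m * i_syn[k])
    if v >= v_thresh:
        spikes.append(k * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {n_steps * dt:.1f} s")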
2021-4-29
Finding the Gap
Elisabetta Chicca University of Groningen
Collision free navigation is key for survival for most animals. The underlying neuronal machinery is asynchronous and deals with events occurring when changes are sensed by the animal. How such machinery can yield robust behaviour in a variety of environments remains unclear. In the fly brain, motion-sensitive neurons indicate the presence of nearby objects and are known to provide the basis for collision free navigation. Inspired by the fly brain, we model, for the first time, a neuromorphic system mimicking essential behaviours observed in flying insects, including meandering in clutter and crossing of gaps, which are highly relevant for autonomous vehicles. We implemented the resulting closed-loop system both in software and on neuromorphic hardware. While moving through an environment, an agent perceives changes in its surroundings and uses these changes to act with the goal of not colliding. The agent's manoeuvres result from a closed action-perception loop implementing probabilistic decision making processes. This loop-closure is thought to have driven the development of neural circuitry in biological agents and is thus a fundamental requirement to understand neural computation in artificial agents. By closing the loop in neuromorphic systems, we get closer to understanding and modelling biological intelligence. With these investigations we anticipate leveraging the full potential of neuromorphic systems and hence setting the foundations for neuromorphic intelligence in the future. Our system can serve to deepen the understanding of processing in neural networks and their computations in both biological and artificial systems.
2021-4-22
Spiking control systems
Rodolphe Sepulchre Cambridge University
CLICK HERE to view the recorded talk
The talk will address the question of developing a control theory for spiking systems. The motivation stems from the scientific question of understanding how nervous systems achieve reliable functions in spite of variability and the engineering question of designing reliable neuromorphic systems out of uncertain hardware components. The proposal is that spiking is the result of a mixed feedback principle. I will highlight the challenges and opportunities of this principle for control, learning, and computation.
2021-3-11
Recurrent processing improves occluded object recognition and gives rise to perceptual hysteresis
Jochen Triesch Frankfurt Institute for Advanced Studies
CLICK HERE to view the recorded talk
For a long time, object recognition was viewed as a mostly feedforward process. This view was supported by the fast response times in psychophysical and neurophysiological experiments and the success of deep feedforward neural networks for object recognition. Recently, however, this prevalent view has shifted and recurrent connectivity in the brain is now believed to contribute significantly to object recognition — especially under challenging conditions including the recognition of partially occluded objects. Moreover, recurrent dynamics might be the key to understanding perceptual phenomena such as perceptual hysteresis. In this work we investigate if and how artificial neural networks can benefit from recurrent connections. We systematically compare architectures comprised of bottom-up (B), lateral (L) and top-down (T) connections. To evaluate the impact of recurrent connections for occluded object recognition, we introduce three stereoscopic occluded object datasets, which span the range from classifying partially occluded hand-written digits to recognizing 3D objects. We find that recurrent architectures perform significantly better than parameter-matched feedforward models. An analysis of the hidden representation of the models reveals an interesting relationship between occluded and un-occluded stimuli and suggests that occluders are progressively discounted in later time steps of processing. We demonstrate that feedback can correct initial misclassifications over time and that the recurrent dynamics lead to perceptual hysteresis. Overall, our results emphasize the importance of recurrent feedback for object recognition in difficult situations.
2021-1-28
Prioritized experience replay supports planning and non-local learning
Marcelo Mattar UC San Diego
To make decisions, we must evaluate candidate choices by accessing memories of relevant experiences. Yet little is known about which experiences are considered or ignored during this process, a core question that ultimately determines one's choices. In this talk, I will describe some principles by which we use our memories to plan and decide. First, I will present a normative theory predicting which memories should be ideally accessed at each moment to optimize one's future decisions. Using nonlocal “replay” of spatial locations as a window into planning, I will show simulations of a spatial navigation task where an ideal agent accesses memories of locations sequentially, ordered by utility: how much additional reward can be earned due to better choices. We find that this theory offers a simple explanation for the role of memory in planning, explaining various empirical results regarding the content, directionality, and function of hippocampal replay. I will then present supporting evidence from a magnetoencephalography (MEG) experiment in humans. Using a sequential decision task, we observe significant backward replay when subjects receive a reward, and that this replay facilitates learning of action values. We also find that backward replay, and behavioral evidence of non-local learning, are more pronounced in states where credit assignment is of greater benefit for future behavior, as predicted by our proposed theory. Overall, our findings establish rationally targeted non-local replay as a neural mechanism for solving complex credit assignment problems in reinforcement learning.
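To make the "ordered by utility" idea concrete, here is a deliberately simplified Dyna-style sketch (my illustration, not the model from the talk): stored one-step experiences are replayed in the order of how much a Bellman backup of each would change the agent's value estimates, a crude stand-in for the utility measure described above.

import numpy as np

# Tiny deterministic chain: states 0..4, reward 1 only on reaching state 4.
n_states, gamma, alpha = 5, 0.9, 1.0
V = np.zeros(n_states)

# Memory of experienced transitions (state, reward, next_state).
memory = [(s, 1.0 if s + 1 == n_states - 1 else 0.0, min(s + 1, n_states - 1))
          for s in range(n_states - 1)]

def backup_gain(s, r, s_next):
    """Absolute change in V(s) a replayed backup would cause (proxy for utility)."""
    return abs(r + gamma * V[s_next] - V[s])

# Prioritized, non-local replay: always replay the most useful memory next.
for sweep in range(20):
    s, r, s_next = max(memory, key=lambda m: backup_gain(*m))
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

print(np.round(V, 3))   # value propagates backward from the rewarded state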
2021-1-21
Rhythm generation and control in the mammalian breathing system
Peter J. Thomas Dept. Mathematics, Applied Mathematics & Statistics, Case Western Reserve University
CLICK HERE to view the recorded talk
Dr. Peter Thomas presented the second part of his talk from 1/14/21, and discussed rhythm generation and control in the mammalian breathing system.
The central nervous system is strongly coupled to the body. Through peripheral receptors and effectors, it is also coupled to the constantly changing outside world. A chief function of the brain is to close the loop between sensory inputs and motor output. It is through the brain's effectiveness as a control mechanism for the body, embedded in the external world, that it facilitates long-term survival. Studying closed-loop brain-body interactions is challenging experimentally, conceptually, and mathematically. In order to make progress, we focus on systems that generate rhythmic behaviors in order to accomplish a quantifiable goal, such as maintaining different forms of homeostasis. Time permitting, I'll mention two such projects, 1. control of feeding motions in the marine mollusk Aplysia californica, and 2. rhythm generation and control in the mammalian breathing system. In both of these systems, we propose that robustness in the face of variable metabolic or external demands arises from the interplay of multiple layers of control involving biomechanics, central neural dynamics, and sensory feedback.
Joint work with: Hillel J. Chiel, Jeffrey P. Gill & Zhoujun Yu (CWRU); Casey Diekman (New Jersey Institute of Technology); Victoria Webster-Wood (Carnegie Mellon University); Christopher G. Wilson (Loma Linda University)
2021-1-14
Neural circuitry for multilayered motor control
Peter J. Thomas Dept. Mathematics, Applied Mathematics & Statistics, Case Western Reserve University
CLICK HERE to view the recorded talk
The central nervous system is strongly coupled to the body. Through peripheral receptors and effectors, it is also coupled to the constantly changing outside world. A chief function of the brain is to close the loop between sensory inputs and motor output. It is through the brain's effectiveness as a control mechanism for the body, embedded in the external world, that it facilitates long-term survival.
Studying closed-loop brain-body interactions is challenging experimentally, conceptually, and mathematically. In order to make progress, we focus on systems that generate rhythmic behaviors in order to accomplish a quantifiable goal, such as maintaining different forms of homeostasis. Time permitting, I'll mention two such projects, 1. control of feeding motions in the marine mollusk Aplysia californica, and 2. rhythm generation and control in the mammalian breathing system. In both of these systems, we propose that robustness in the face of variable metabolic or external demands arises from the interplay of multiple layers of control involving biomechanics, central neural dynamics, and sensory feedback.
Joint work with: Hillel J. Chiel, Jeffrey P. Gill & Zhoujun Yu (CWRU); Casey Diekman (New Jersey Institute of Technology); Victoria Webster-Wood (Carnegie Mellon University); Christopher G. Wilson (Loma Linda University)
2020-12-10
Evidence for a novel specialized area within the human macula in which cone signals specify ocular focus for lens accommodation: the FOCAL ANNULUS
Marion S. Eckmiller Vogt Institute for Brain Research, Heinrich Heine University Clinic Düsseldorf, Düsseldorf, Germany
CLICK HERE to view the recorded talk
To obtain information about the external world, humans rely heavily on sharply-focused, high acuity central vision, which can only be provided by the specialized retinal macula and fovea. Research has shown that the response of foveal cone photoreceptors to light provides the sensory signals that are used to adjust lens accommodation and to thereby focus the eye, but the specific cone signals used and the mechanism(s) involved are not known. I have incorporated modern optical, morphological, and physiological findings in a theoretical ray-tracing analysis of light paths through the human macula and discovered that one annular area (located at the eccentricity of the foveal slope) in the fovea has unique optical properties. Within this annulus, light entering the retina is slightly bent or refracted (by ~ 0.32°) towards the periphery. As this light then travels through the birefringent Henle Fiber Layer, it is split by double refraction into orthogonally polarized Ordinary rays and Extraordinary rays. Thus, the light incident on the optical apertures of cones within this annulus is partially plane-polarized and arrives from a shifted anterior direction. These cones are expected to change the alignment direction of their long inner segment-outer segment axis to match the shifted light they receive. Based on the Scheiner Principle, having this shifted direction of alignment can enable the different types of cones in the annulus to individually utilize longitudinal chromatic aberration and to collectively specify ocular focus. I have termed this novel area the Foveal Oculomonitor Cone Alignment Locus or "FOCAL" ANNULUS, and I propose that signals from the subset of cones within this annulus specify ocular focus for lens accommodation, and likely also for ocular emmetropization.
After presenting these findings, I will show how integrating them with previous findings about central vision leads logically to a parsimonious, multifaceted, sophisticated model for FOCAL ANNULUS function in human eyes. I will also explain why, as an inevitable consequence of the sophisticated FOCAL ANNULUS in human eyes, we make involuntary, unconscious, fixational microsaccades. The foveas of Macaque monkeys are known to differ from human foveas with respect to specific morphological details that are relevant to microsaccades; this difference suggests that the FOCAL ANNULUS is less developed in monkey eyes, and that a sophisticated FOCAL ANNULUS may be a HUMAN-SPECIFIC ocular feature. Assuming the FOCAL ANNULUS is required for sharp, well-focused human vision, its dysfunction is expected to result in diverse disturbances of central vision (e.g., macular degeneration), disorders of refraction (e.g., myopia) in mature and growing eyes, and visual problems from viewing some technological devices (e.g., digital screens and Virtual Reality displays). Understanding how the signals from the subset of cones in the FOCAL ANNULUS of the human eye can specify ocular focus brings clinicians, visual neuroscientists, and bioengineers an important step closer to preventing and/or treating diverse visual disturbances involving the macula, disorders of ocular refraction, and visual problems associated with using certain technological devices.
2020-12-03
Biomimetic Spiking Neural Network and Real-Time Closed-Loop Bio-Hybrid Systems
Timothée Levi IMS, University of Bordeaux - CLICK HERE to view the recorded talk
Millions of people worldwide are affected by neurological disorders which disrupt connections between brain and body, causing paralysis or affecting cognitive capabilities. This number is likely to increase in the coming years, and current assistive technology is still limited. Over the last decades, Brain-Machine Interfaces (BMIs), and neuroprostheses more generally, have been the object of extensive research and may represent a valid treatment for such disabilities. The realization of such prostheses implies that we know how to interact with neuronal cell assemblies, taking into account the intrinsic spontaneous activity of neuronal networks and understanding how to drive them into a desired state or to produce a specific behavior. The long-term goal of replacing damaged brain areas with artificial devices also requires the development of Spiking Neural Network (SNN) systems. These will fit the recorded electrophysiological patterns and will in turn produce the correct stimulation patterns for the brain so as to recover the desired function.
I will first describe the implementation of biologically realistic neural network models, spanning from the electrophysiological properties of one single neuron up to network plasticity rules. This digital implementation computes in real-time biologically realistic cortical neurons and motor neurons, synapses and synaptic plasticity. It is freely configurable from an independent-neuron configuration to different neural network configurations. This SNN has been used for the development of a neuromorphic chip for neuroprosthesis, which has to replace or mimic the functionality of a damaged part of the central nervous system.
I will then describe bio-hybrid system and show some bio-hybrid experiments using SNN for real-time bidirectional communication with living neurons.
Reference for SNN description:
https://www.frontiersin.org/articles/10.3389/fnins.2019.00377/full
References for bio-hybrid system:
https://www.sciencedirect.com/science/article/pii/S2589004219302731
https://www.sciencedirect.com/science/article/pii/S2589004220307811
https://www.nature.com/articles/s41598-020-63934-4
2020-10-29
Emergence of a Mutualistic Relationship Between Motion Planning and Machine Learning for Scalable Robot Control
Ahmed Qureshi University of California San Diego
Planning algorithms for control, also known as Motion Planning, have a long history ranging from methods with complete to probabilistic worst-case guarantees. However, despite having deep roots in artificial intelligence, these methods tend to be computationally inefficient in high-dimensional problems. On the other hand, machine learning advancements have led toward systems that can perform complex decision-making by directly using the raw sensory information. In this talk, I will discuss a new class of scalable, efficient planning methods called Neural Motion Planners that emerged from the cross-fertilization of machine learning and motion planning and exhibit worst-case theoretical guarantees when solving high-dimensional, practical robot control tasks.
2020-10-15
Neuromorphic Engineering of Visual Perception on Hardware
Rajkumar Kubendran University of Pittsburgh
Neuromorphic engineering pursues the design of electronic systems emulating the function and structural organization of biological neural systems in silicon integrated circuits that embody similar physical principles and are optimized for extreme energy efficiency. It aims at advancing adaptive, ubiquitous sensing and event-based processing of multi-modal data (visual, audio, thermal) for intelligent decision making in highly resource-constrained environments (edge computing). In this talk, I will focus on Neuromorphic VLSI architectures and algorithms for the implementation of vision sensors and processors, emulating the retina and visual cortex. In particular, I will present my work on a query-driven dynamic vision sensor, a CMOS-RRAM compute-in-memory processor and an inverted STDP learning algorithm for temporal pattern recognition. Combining these works, we can take a step closer towards achieving integrated visual cortical processing on silicon hardware.
2020-06-18
Ultra-Miniaturized, Implantable and Wireless Data Acquisition and Actuation Systems
Ralph Etienne-Cummings Johns Hopkins University
There are many situations where substantial high-resolution data must be communicated over low bandwidth channels. These include ubiquitous distributed video, implanted high-density neural recordings, gastric monitoring ePills, and other such power constrained systems. To meet the communication specifications of these systems, significant signal processing is required at the sensor in order to extract relevant information and/or to compress the data that must be communicated. Compressed sampling and reconstruction provide an efficient way to squeeze large amounts of data into the narrow communication pipes. For example, we show that we can recover 100 frames of video from a single coded frame without motion blurring. We also show that the coded video can be used for image enhancement, object recognition and other image processing even before reconstruction. On the neural recording front, we show high levels of compression that preserve both spike timing information and inter-spike signal integrity. This talk will show how compressed sampling and reconstruction can be used to communicate video and neural signals in these cases. Furthermore, we will show how ultra-miniaturized wireless cameras and implantable neural recording/stimulation devices take advantage of this method. Lastly, we will discuss how this technology can be used to develop distributed, un-tethered sensing and actuation modules that can be injected throughout the body to monitor a variety of biomarkers for personalized healthcare systems.
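The compressed-sampling step that this abstract leans on can be illustrated generically (this is not the group's coded-aperture pipeline): a sparse signal is measured through a random matrix with far fewer measurements than samples and recovered by iterative soft thresholding (ISTA). The sizes and sparsity penalty below are arbitrary.

import numpy as np

rng = np.random.default_rng(2)

n, m, k = 400, 100, 8                    # signal length, number of measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true                                 # compressed measurements

# ISTA: iterative soft thresholding for min 0.5*||Ax - y||_2^2 + lam*||x||_1
lam = 0.02
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))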
2020-06-04
Relaxation and Transient Flows in the Neural Dynamics of Movement Control
Paolo Del Giudice Italian Institute of Health
Density-based clustering (DBC) provides efficient representations of multidimensional time series, allowing them to be cast as the symbolic sequence of labels identifying the cluster to which each vector of instantaneous values belongs. Such a representation naturally lends itself to compact descriptions of multi-dimensional neural activity (Baglietto et al, Plos One 2017).
We used DBC to analyze the spatio-temporal dynamics of dorsal premotor cortex during a 'countermanding' reaching task, whereby the animal must perform a reaching movement to a target on a screen ('no-stop trials'), unless an intervening stop signal prescribes withholding the movement ('stop trials'); no-stop (~70%) and stop trials (~30%) were randomly intermixed, and the stop signals occurred at variable times within the animal's reaction time.
Multi-unit activity (MUA) was extracted from signals recorded using a 96-electrode array. Performing DBC on the 96-dimensional MUA time series, we derived the corresponding discrete sequence of cluster centroids. The joint analysis of cluster sequences for no-stop and stop trials shows that reproducible short cluster sequences are associated with the completion of the motor plan in no-stop trials, and that in stop trials the performance depends on the relative timing of such states and the arrival of the stop signal.
We show that a machine learning classifier can reliably predict the outcome of stop trials from the cluster sequence preceding the appearance of the stop signal, at the single-trial level.
We also observe that, consistently with previous studies, the inter-trial variability of MUA configurations typically collapses around the movement time, and has minima corresponding to other behavioral events (Go signal; Reward).
Comparing the time profile of MUA inter-trial variability with the cluster sequences, we are led to ask whether the neural dynamics underlying the cluster sequences can be interpreted in terms of jumps between metastable states, as suggested in other contexts. For this purpose we analyze the flow in the MUA configuration space, where each trial corresponds to a trajectory in the 96-dimensional MUA space, and repeated trials form a bundle of trajectories, of which we can compute individual or average properties. We measure simple quantities suited to discriminate a dynamics of convergence of the trajectories to a point attractor from different flows in the MUA configuration space. We tentatively conclude that convergent relaxation dynamics (in attentive wait conditions, as before the Go or the Reward events) coexist with coherent flows (associated with movement onset), in which low inter-trial variability of MUA configurations corresponds to a collapse in the direction, speed and spread of the flow, like the system entering a funnel.
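A stripped-down version of the first step described above, turning a multichannel time series into a symbolic sequence of cluster labels, might look like the sketch below using scikit-learn's DBSCAN; the synthetic data and parameters are placeholders, not the study's settings.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)

# Synthetic stand-in for a 96-channel MUA time series that alternates between two patterns.
n_channels, n_time = 96, 600
pattern_a = rng.normal(0, 1, n_channels)
pattern_b = rng.normal(0, 1, n_channels)
state = (np.arange(n_time) // 150) % 2                 # switch every 150 samples
mua = np.where(state[:, None] == 0, pattern_a, pattern_b) + 0.2 * rng.normal(0, 1, (n_time, n_channels))

# Density-based clustering of the instantaneous 96-dimensional activity vectors.
labels = DBSCAN(eps=3.5, min_samples=10).fit_predict(mua)

# Collapse consecutive repeats into the symbolic sequence of visited clusters.
sequence = [labels[0]] + [l for prev, l in zip(labels, labels[1:]) if l != prev]
print("cluster sequence:", sequence)    # alternating labels; -1 would mark noise points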
2020-05-21
The Geometry of Abstraction in Artificial and Biological Neural Networks
Stefano Fusi Columbia University - CLICK HERE to view the recorded talk
The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. We characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.
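The decoding-based definition of abstraction used here can be captured in a small sketch: train a linear decoder for one variable on a subset of task conditions and test it on conditions never seen during training. The code below is a generic illustration on synthetic "neural" data, not the study's analysis.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

n_trials, n_neurons = 200, 50
value = rng.integers(0, 2, n_trials)      # variable to decode (e.g., a hidden task variable)
context = rng.integers(0, 2, n_trials)    # second variable defining the task conditions

# Synthetic population activity in which 'value' and 'context' are encoded along
# different directions (a roughly factorized, abstract geometry).
axis_value = rng.normal(0, 1, n_neurons)
axis_context = rng.normal(0, 1, n_neurons)
X = (np.outer(value, axis_value) + np.outer(context, axis_context)
     + 0.5 * rng.normal(0, 1, (n_trials, n_neurons)))

# Train the decoder only on trials from context 0, test on held-out context 1.
train, test = context == 0, context == 1
clf = LogisticRegression().fit(X[train], value[train])
print("cross-condition generalization:", clf.score(X[test], value[test]))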
2020-04-23
The SpiNNaker Project
Steve Furber University of Manchester (UK) - CLICK HERE to view the recorded talk
The SpiNNaker (Spiking Neural Network Architecture) platform has been developed to support real-time modelling of large-scale biological neural networks. It currently incorporates a million ARM processor cores with a bespoke interconnect fabric specifically designed to enable the very high connectivity of biological brains to be modeled. As neuron and synapse models are implemented in software, SpiNNaker is very flexible, and it can be used to model novel neuron models and learning rules. The SpiNNaker platform is openly accessible under the auspices of the EU Flagship Human Brain Project, and is currently being used to support a wide range of neuroscientific research. A second generation machine is also under development.
2020-02-06
Data Assimilation and Machine Learning as Statistical Physics Problems: Deepest Learning
Henry D.I. Abarbanel UC San Diego, SIO
Transferring information from observations to models of those observations has long been a core practice in numerical weather prediction; it is called data assimilation. It is a problem in statistical physics. The same problem is posed in many machine learning settings. It is actually equivalent to data assimilation, and thus is also a statistical physics problem. We will discuss both problems and show their equivalence. Examples from each, using instructive models, will be presented. In variational formulations of the problem, we will show how to use annealing in the precision of the model to identify the minima of the cost function (action). These depend strongly on the amount of information in the data and the structure of the model. Similar precision annealing in Monte Carlo analyses of the problem will be discussed. We will show how one can identify models that are consistent with the data and which may be expected to give good predictions (generalization). The method also identifies just how much data is required for the task a machine learning model is required to perform. Each challenge in data assimilation or machine learning requires: (1) well curated data, (2) appropriate models, and (3) accurate and efficient learning/training methods. This talk focuses on the third element as a tool for validating and improving the model given the data. Both problems take on enhanced importance when the available data is very rich in information and the computational challenges in making and training models are substantial.
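As a toy illustration of the variational formulation and precision annealing mentioned above (a sketch of the general idea with a one-dimensional logistic-map model, not the speaker's code), the action below combines a measurement-error term and a model-error term, and the model precision Rf is increased over successive minimizations while each solution seeds the next.

import numpy as np
from scipy.optimize import minimize

def f(x, a=3.8):
    """Toy model dynamics: logistic map."""
    return a * x * (1.0 - x)

rng = np.random.default_rng(5)
T = 60
x_true = np.empty(T); x_true[0] = 0.3
for k in range(T - 1):
    x_true[k + 1] = f(x_true[k])
y = x_true + 0.05 * rng.normal(0, 1, T)          # noisy observations

Rm = 1.0 / 0.05 ** 2                             # measurement precision (held fixed)

def action(x, Rf):
    meas = 0.5 * Rm * np.sum((y - x) ** 2)       # measurement-error term
    model = 0.5 * Rf * np.sum((x[1:] - f(x[:-1])) ** 2)   # model-error term
    return (meas + model) / T

# Precision annealing: start with a weakly enforced model and tighten it gradually.
x_est = y.copy()
for Rf in Rm * 2.0 ** np.arange(0, 20, 2):
    x_est = minimize(action, x_est, args=(Rf,), method="L-BFGS-B").x

print("RMS estimation error:", np.sqrt(np.mean((x_est - x_true) ** 2)))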
2020-01-23
Surrogate Gradient Learning in Spiking Neural Networks
Emre Neftci University of California Irvine
Spiking neural networks are nature's versatile solution to fault-tolerant and energy-efficient signal processing. Like conventional neural networks, spiking neural networks can be trained on real, domain-specific data. However, their training requires overcoming several challenges linked to their binary and dynamical nature. In this chalk talk, I will present Surrogate Gradient (SG) learning, a family of methods that bridges machine learning and spiking neural networks and overcomes these challenges. These methods enable a general learning scheme that is agnostic to the "neural code" (rate vs. spike-time), is compatible with any multidimensional neuron dynamics, can be interpreted as local three-factor plasticity rules, and is fully compatible with existing machine learning frameworks.
SG methods provide a game-changing opportunity to understand the functional architecture of the brain from spikes to behavior by relating behavioral metrics to local parameter dynamics. Furthermore, because the full dynamics of the biological neural network can be optimized, SG methods can pave the way towards detailed, time-resolved comparisons between artificial and biological models.
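A minimal PyTorch expression of the surrogate-gradient trick, written here as a generic sketch in the spirit of the methods discussed (the fast-sigmoid-like pseudo-derivative and all constants are arbitrary choices), replaces the undefined derivative of the spike threshold with a smooth function in the backward pass:

import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth pseudo-derivative in the backward pass."""
    scale = 10.0

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.scale * torch.abs(v) + 1.0) ** 2
        return grad_output * surrogate

spike = SurrogateSpike.apply

# One leaky integrate-and-fire layer unrolled over time; gradients flow through the surrogate.
torch.manual_seed(0)
w = torch.randn(20, 5, requires_grad=True)       # input-to-neuron weights
inputs = torch.rand(50, 20)                      # 50 time steps of 20 input channels
beta, v = 0.9, torch.zeros(5)

spike_count = torch.zeros(5)
for x_t in inputs:
    v = beta * v + x_t @ w
    s = spike(v - 1.0)                           # threshold at 1.0
    v = v - s                                    # soft reset by subtraction
    spike_count = spike_count + s

loss = ((spike_count - 10.0) ** 2).mean()        # toy objective: target spike count
loss.backward()
print(w.grad.abs().mean())                       # nonzero thanks to the surrogate gradient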
2020-01-09
Consciousness And Quantum State Collapse
Sir Roger Penrose FRS University of Oxford
Does consciousness arise through computation in the brain or is it due to a deeper physical process? In the early 20th Century, Kurt Gödel and Alan Turing explored this question. Arguments based on their ideas show that "meaning" and "understanding" cannot be encapsulated by merely following computational rules. The proposed underlying non-computation process – Orch-OR (orchestrated objective reduction) developed in conjunction with Stuart Hameroff of the University of Arizona – incorporates a suggested solution to the quantum measurement problem. Instead of resorting to the “many-worlds” of Everett or the common pragmatic “shut up and calculate” attitude, we propose that the quantum state must collapse to make it consistent with General Relativity. Recent improvements in the engineering of Bose-Einstein Condensates, and ideas due to Ivette Fuentes, of the University of Nottingham, point to a way to test this hypothesis. Using these ultra-cold materials, we should be able to see what happens at the boundary between quantum mechanics and general relativity as the objective reduction threshold is approached.
2019-12-12
Brains as Complex Systems. Multiscale Sources of EEG: Genuine, Equivalent, and Representative
Paul Nunez Tulane University - CLICK HERE to view the recorded talk
Brain scientists generally support the idea of high brain complexity, but may then carry out experimental design, data collection, analysis, and conclusions as if brains are actually simple systems. While simple brain models can be very useful, I argue here that brain research can be profitably supported by employing more explicit recognition that brains are genuine complex systems. With this background in mind, several common features of complex systems are reviewed, especially multiscale aspects of dynamic behavior. Possible implications for brain research are considered. Specifically, neocortical sources are defined over a range of microscopic and macroscopic scales. “Source” localization is discussed with distinctions made between genuine, equivalent, and representative sources of EEG. I suggest here that brain dynamic measures like source location, synchrony, functional connectivity, and so forth can be expected to be scale-sensitive and cannot be generally interpreted in absolute terms.
2019-12-03
Thermodynamic Neural Network
Todd Hylton Contextual Robotics Institute, Jacobs School of Engineering, UC San Diego
In this talk I will present a recently developed, thermodynamically motivated neural network model that self-organizes to transport charge associated with internal and external potentials while in contact with a thermal bath. Isolated networks show multiscale dynamics and evidence of phase transitions, and externally driven networks evolve to efficiently connect external positive and negative potentials. The model integrates techniques for rapid, large-scale, reversible, conservative equilibration of node states and slow, small-scale, irreversible, dissipative adaptation of the edge states as a means to create multiscale order. All interactions in the network are local and the network structures can be generic and recurrent. The model integrates concepts of conservation, potentiation, fluctuation, dissipation, adaptation, and equilibration to illustrate the thermodynamic evolution of organization in open systems. A key conclusion of the work is that the transport and dissipation of conserved physical quantities drives the self-organization of open thermodynamic systems.
2019-10-10
Breaking the Curse Yet Again: Toward Efficient Deep Learning
Xin Wang Cerebras Systems
Deep neural networks' recent success is as unexpected as it is groundbreaking. The surprise is twofold. First, training of large deep nets, a high-dimensional non-convex optimization problem, turns out to be unreasonably easy. Second, grossly over-parameterized models, with orders of magnitude more parameters than there are data examples, generalize unreasonably well. The curse of dimensionality seems no more. These double miracles have engendered an unchecked growth of deep neural net models, and with it, an exploding demand for computational resources. Algorithmic blessings turned into a computational curse. How much further can the brute-force upscaling of general purpose computing hardware sustain? Or rather, shall we seek to make deep neural nets more efficient, perhaps inspired by the operating principles of the biological brain, and build specialized devices to accelerate them? What holds the promise in breaking the curse yet again? In this talk, I will present some new results from my work that are hopefully examples of efforts of the latter kind, and with these, conjecture a possible path forward.
2019-05-16
Deep learning based restoration of undersampled data from point scanning imaging systems
Linjing Fang
The Salk Institute for Biological Studies
Point scanning imaging systems are among the most widely used tools for cellular and tissue imaging. Like many other imaging modalities, their utility can be heavily constrained by sample damage and imaging speed. One method to deal with these issues is compressed sensing, which involves structured subsampled acquisition and post-acquisition image processing, an approach that can be laborious, requires access to hardware settings not always available on the most widely used commercial systems, may result in undesirable artifacts, and has limited capability for matching full resolution acquisitions. We have employed a "compressed sensing" approach to increase image resolution by applying Deep Convolutional Neural Networks (DNNs) to upsample subsampled images, which effectively enables higher acquisition speeds as well as lower light doses. Oversampled "ground truth" images that were acquired from an Airyscan laser scanning confocal system or a scanning electron microscope, together with digitally downsampled as well as manually acquired undersampled images, formed image pairs for training and testing. In order to increase the efficiency of generating low vs. high resolution training data, we used manually acquired image pairs to generate a model for downsampling large amounts of high resolution data. Testing was performed by comparing the resolution of the upsampled output from the DNN with its corresponding "ground truth" high-resolution image. Qualitative results through perceptual evaluation showed high fidelity between the network output of processed undersampled images and the oversampled high-resolution images, demonstrating the feasibility of this approach, which was further substantiated by state-of-the-art quantitative DNN image superresolution metrics: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). The ability to undersample images allows us to generate higher resolution and SNR datasets on the 3View serial block face SEM system than would otherwise be possible, facilitating higher quality and higher throughput 3DEM imaging. Current efforts are underway to expand the application of DNNs to other imaging and processing modalities including correlative imaging.
2019-02-21
Neural mechanisms supporting mindfulness-based pain relief
Fadel Zeidan UC San Diego
Pain is a multidimensional experience that involves sensory, cognitive, and affective factors. The constellation of interactions between these factors renders the treatment of chronic pain challenging and financially burdensome. Further, the widespread use of opioids to treat chronic pain has led to an opioid epidemic characterized by exponential growth in opioid misuse and addiction. The staggering statistics related to opioid use highlight the importance of developing, testing, and validating fast-acting nonpharmacological approaches to treat pain. Mindfulness meditation is a technique that has been found to significantly reduce pain in experimental and clinical settings. Dr. Zeidan, UCSD Assistant Professor of Anesthesiology, will delineate findings from recent work in his and other laboratories demonstrating that mindfulness meditation significantly attenuates pain through multiple, unique psychological, physiological and neural mechanisms that are distinct from placebo analgesia.
Brief Bio:
Dr. Fadel Zeidan, Ph.D. is an assistant professor of Anesthesiology at the University of California at San Diego (UCSD) and Associate Director of Research at the UCSD Center for Mindfulness. Dr. Zeidan’s Brain Mechanisms of Pain and Health Laboratory is focused on identifying the psychological, physiological, and neural mechanisms supporting mindfulness-based pain relief. Dr. Zeidan’s team also conducted the first placebo-controlled meditation-brain imaging study to show that meditation is more effective than and engages different brain regions from placebo analgesia. He was recently awarded the National Institutes of Health’s Mitchel Max Award in Research Excellence. His work is currently funded by the National Institutes of Health and the Mind and Life Institute.
2018-11-22
2018-11-08
Oscillations and motor control: Gait-related beta modulation and cortico-basal ganglia gamma coupling
Petra Fischer University of Oxford (UK)
2018-10-11
Event-Driven Asynchronous Parallel Simulation of Multiscale Systems
Yuri Omelchenko Trinum Research, Inc., San Diego
Omelchenko Y.A. and H. Karimabadi, Self-adaptive time integration of flux-conservative equations with sources, J. Comp. Phys. 216, 179-194, 2006.
Omelchenko Y.A. and H. Karimabadi, A time-accurate explicit multi-scale technique for gas dynamics, J. Comp. Phys. 226, 282-300, 2007.
2018-09-19
Model-based Superresolution Imaging
Miguel Alvarez-Cabanillas UC San Diego
2018-06-07
Reading the neuronal DNA methylome: A computational perspective
Eran Mukamel UC San Diego
In this talk I will review what we know about the distribution of epigenomic marks in brain cells and how it arises through development. I will then discuss our recent work analyzing DNA methylation in thousands of individual neurons in human and mouse frontal cortex. The resulting maps of the neuronal epigenome allowed us to identify and compare cell types across mammalian species, and to examine the regulatory networks that help determine their distinct identities. Finally, I will describe our efforts as part of the second phase of the BRAIN Initiative Cell Census Network to build a comprehensive epigenomic atlas of cell types in the mouse brain.
2018-05-31
How might brain-computer interfaces (BCIs) go mainstream
Brendan Allison UCSD
BCIs have become increasingly useful to some patient groups, and new technologies like dry, wireless electrodes have made BCIs more practical. In the last couple years, entities like Facebook and Neuralink have announced their interest in BCIs, which may begin a new era of "big BCI" R&D. But, BCIs have gained little adoption among mainstream users, and the "killer app" for BCIs remains unknown. What are some of the assumptions involved in extending BCIs for broader audiences? How have BCIs been useful for healthy (if not mainstream) users? What are some ethical and practical issues in mainstream BCIs?
2018-05-24
2018-05-10
Recognizing and Exploiting Multiple Views in Noisy Data
Virginia de Sa UCSD
EEG is known to be non-stationary and "noisy". In this talk I will show that some of this "noise" reflects response to perceived task performance and can actually be used to improve performance in an EEG-based motor-imagery brain-computer interface. I will also show a related situation where we found that the environmental sensitivity of automatically computed computer vision features impaired pain classification in new environments. I will show a simple solution we used to leverage some concurrently human-labeled data to improve performance of automatic pain classification in facial videos of postoperative children.
2018-04-26
Neuroscience in Applied Machine Learning
Benjamin Migliori Space and Naval Warfare Systems Center Pacific
Machine learning is reaching new levels of hype and focus as it begins to provide solutions to long-standing problems faced in the commercial sector. However, this progress comes at a cost: in the push for application-specific implementations, the diversity of methods is shrinking. In this talk, I present examples of machine learning systems that we have developed that are constrained to qualitatively or quantitatively mimic features of living neural systems. These examples, ranging from Gabor filters for radio-frequency data to topological analysis of neural spike trains, motivate the pursuit of less-common methods in machine learning such as feed-forward methods, spiking neural networks, and other biologically-inspired approaches.
2018-03-29
Cortical column organization revealed by information theory
Tatyana Sharpee The Salk Institute for Biological Studies
+ moreCortical tissue has a circuit motif termed the cortical column, which is thought to represent its basic computational unit but whose function remains unclear. In this talk I will present evidence that the cortical column performs computations necessary to decode incoming neural activity with minimal information loss. The results offer insights into the dimensionality of signals processed by each cortical column and produce quantitatively accurate estimates of the number of cortical columns across all of cortex. They also yield predictions for the expected increase in the diversity of neuronal types from rodents to primates, and how cortex size should change relative to subcortical parts of the brain. Overall, the information-theoretic view of the cortical decoder makes it possible to describe how different types of neurons work together within a cortical column.
2018-03-22
Continuous Learning, Sleep and Memory Consolidation
Maxim Bazhenov UCSD
Memory depends on three general processes: encoding, consolidation and retrieval. Although the vast majority of research has been devoted to understanding encoding and retrieval, recent novel approaches have been developed in both human and animal research to probe mechanisms of consolidation. A story is emerging in which important functions of consolidation occur during sleep and specific features of sleep appear critical for successful retrieval across a range of memory domains, tasks, and species. In my talk, I will first discuss the limitations of reinforcement learning for continual learning of multiple tasks in feedforward spiking network models. I will then present results, obtained in computer simulations of large-scale thalamocortical network models, that reveal the neural substrates of memory consolidation involving off-line replay during sleep. I will argue that spontaneous reactivation of the learned sequences during sleep spindles and slow waves of NREM sleep represents a key mechanism of memory consolidation and may help to avoid memory interference, allowing multiple tasks to be learned without catastrophic forgetting.
2018-03-15
Toward brain state decoding and real-time tracking: Modeling nonstationarity in human electroencephalography (EEG)
Shawn Hsu Swartz Center for Computational Neuroscience (SCCN/UCSD)
+ moreAs the human brain performs cognitive functions or generates spontaneous mental processes within ever-changing, real-world environments, states of the brain are inevitably nonstationary. This calls for innovative approaches to obtain objective and quantitative insights into hidden cognitive and mental states and study the dynamics of brain states that give rise to behaviors and mental disorders. Despite electroencephalography (EEG) offering a noninvasive, portable, real-time measurement of brain activity, an urgent need remains for computational tools to effectively decode brain states from continuous, unlabeled EEG data, to quantitatively assess state changes, and to provide neuroscientific insights.
In this talk, I will present three computational approaches for quantitative assessment of brain-state dynamics by modeling multichannel, nonstationary EEG data at the level of functional brain sources. These include a hypothesis-driven approach which uses independent component analysis (ICA) to model distinct source activities under different brain states, a data-driven approach (Adaptive Mixture ICA) for exploring nonstationary dynamics of continuous and unlabeled data, and the Online Recursive ICA approach for adaptive tracking of the nonstationary sources that underlie continuous state changes. I will present the results of applying these approaches to characterizing EEG dynamics during sleep for automatic staging, assessing transitions between alert and drowsy states in a simulated driving experiment, and exploring mental state changes during a guided-imagery hypnotherapy. Finally, I will discuss the challenges toward building a real-time brain monitoring system and some of the strategies and ongoing efforts to address those problems.
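The decompositions above all rest on ICA. As a rough illustration of that core step (not the AMICA or Online Recursive ICA implementations discussed in the talk), the following Python sketch unmixes a toy multichannel signal with scikit-learn's FastICA; the signal, mixing matrix, and parameters are all invented for illustration.

# Minimal sketch (not the AMICA/ORICA implementations from the talk): unmixing
# a toy multichannel signal into independent sources with FastICA, the same
# basic decomposition step that underlies ICA-based EEG source modeling.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
sources = np.c_[np.sin(7 * t), np.sign(np.sin(3 * t)), rng.laplace(size=t.size)]
mixing = rng.normal(size=(3, 3))              # hypothetical channel mixing matrix
eeg_like = sources @ mixing.T                 # shape: (samples, channels)

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(eeg_like)       # estimated source activations
print("estimated unmixing matrix shape:", ica.components_.shape)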
2018-03-08
Automated Mega-Analysis of Millions of Event Instances from Multiple EEG Studies with Different Paradigms
Nima Bigdely Shamlo Intheon (formerly Qusp)
Significant achievements have been made in the fMRI field by pooling statistical results from multiple studies (meta-analysis). More recently, fMRI standardization efforts have focused on enabling the combination of raw fMRI data across studies (mega-analysis), with the hope of achieving more detailed insights. However, it has not been clear if such analyses in the EEG field are possible or equally fruitful. In this talk we present the results of a large-scale mega-analysis using 12 studies from four institutions representing several different experimental paradigms and 2.5 million event instances. Our results show that EEG mega-analysis is possible and can provide unique insights unavailable in single studies. Such large-scale analysis is predicated on an effective automated processing pipeline (which we call LARG). LARG assumes that data has been standardized by mapping events into a common cognitive space and organized into a form suitable for automated processing. Standardized EEG is subjected to a fully-automated pipeline that reduces line noise, interpolates noisy channels, performs robust referencing, removes eye activity, and further identifies outlier signals. LARG applies temporal overlap regression to eliminate confounds caused by adjacent event instances and extracts time and time-frequency EEG features (regressed ERPs and ERSPs). LARG uses second-level linear regression to separate the effects of different cognitive aspects on these features across all studies. We demonstrate that Hierarchical Event Descriptor (HED) tags capture statistically significant cognitive aspects of EEG common across multiple recordings, subjects, studies, paradigms, headset configurations, and institutions. Using ICA-based dipolar sources, we also observe consistent differences in overall frequency baseline amplitudes across brain areas. For example, we observe higher alpha in posterior vs anterior regions and higher theta in the anterior cingulate. This work demonstrates that EEG mega-analysis can enable investigations of brain dynamics in a more generalized fashion, opening the door for both expanded EEG mega-analysis as well as large-scale EEG meta-analysis.
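As a rough, hedged illustration of the temporal overlap regression step mentioned above (not the LARG pipeline itself), the following Python sketch recovers an event-related response kernel from overlapping responses by ordinary least squares on a lagged design matrix; all data and parameters are synthetic.

# Toy sketch of temporal overlap regression ("rERP"-style deconvolution), the
# idea used to separate responses of temporally adjacent events; this is an
# illustration under simplified assumptions, not the LARG code itself.
import numpy as np

n_samples, n_lags = 3000, 50            # 50-sample response window per event
rng = np.random.default_rng(1)
event_onsets = np.sort(rng.choice(n_samples - n_lags, size=120, replace=False))

true_kernel = np.hanning(n_lags)        # hypothetical event-related response
signal = np.zeros(n_samples)
for onset in event_onsets:              # overlapping responses sum linearly
    signal[onset:onset + n_lags] += true_kernel
signal += 0.5 * rng.normal(size=n_samples)

# Design matrix: column j is 1 at (onset + j) for every event, so the least-
# squares solution recovers the response kernel despite the overlap.
X = np.zeros((n_samples, n_lags))
for onset in event_onsets:
    for j in range(n_lags):
        X[onset + j, j] = 1.0

kernel_hat, *_ = np.linalg.lstsq(X, signal, rcond=None)
print("kernel correlation:", np.corrcoef(kernel_hat, true_kernel)[0, 1])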
2018-02-22
Discrete-time modeling of network dynamics of spiking neurons
Nikolai Rulkov BioCircuits Institute (UCSD)
Dynamics of spiking neurons can be modeled using a simple set of equations computed at discrete times separated by a large time step. The development of such models is motivated by the need for efficient simulation and analysis of neuron activity in large-scale networks. It also enables real-time simulations of networks of neurons using low-power computers implemented with off-the-shelf microprocessor, DSP or FPGA ICs. We consider the dynamics of the nonlinear maps that support the main elements of such model designs, and illustrate how features of neuron activity and synapses affect the network dynamics of spiking neurons.
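For readers unfamiliar with map-based neuron models, the following Python sketch iterates one published two-dimensional map of this kind (the chaotic Rulkov map); the parameter values are illustrative only, and the resulting regime (silence, tonic spiking, or chaotic bursting) depends on the choice of alpha and sigma.

# Minimal sketch of a map-based neuron of the kind discussed in the talk:
# the two-dimensional chaotic Rulkov map with a fast variable x and slow y.
import numpy as np

def rulkov_map(alpha=4.3, sigma=0.1, mu=0.001, n_steps=20000, x0=-1.0, y0=-3.0):
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = x0, y0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]   # fast subsystem, one update per "tick"
        y[n + 1] = y[n] - mu * (x[n] - sigma)         # slow subsystem driving burst onset/offset
    return x, y

x, y = rulkov_map()
print("fast-variable range:", x.min(), x.max())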
2018-02-15
Translational Neurotechnology: From Neural Signals to State Decoding to Transformative Applications
Tim Mullen Intheon (formerly Qusp)
+ moreIn this talk, I will discuss collaborative efforts at Intheon to reduce the barrier to translation of neurotechnology beyond the laboratory and into ubiquitous, real-world applications that can positively impact people’s lives. Throughout the talk, I will address several opportunities for translational neurotechnology and brain-computer interfacing “in the wild.” These include 1) sensors and systems for pervasive measurement and interpretation of brain and body signals during everyday activities; 2) signal processing and machine learning for robust tracking and prediction of brain dynamics, cognitive state, and behavior in noisy environments; 3) standardized frameworks and methods for large-scale 'mega analysis' of EEG and other data (jointly analyzing millions of events across dozens of studies), facilitating generalizable knowledge discovery across diverse cognitive paradigms, individuals, and sensor hardware; 4) emerging industry standards for multi-modal (neuro)physiological data storage, organization, and interoperability; and 5) cloud-scalable technologies to catalyze industry applications which build on validated scientific research and methods, to enable innovative research, and to facilitate the widespread integration of neurotechnology into everyday life.
I will also be announcing and providing a brief walkthrough of the release of the new Academic Edition of our NeuroPype software suite (neuropype.io), available for free to the scientific research community. The NeuroPype suite empowers researchers to easily and quickly create and run powerful pipeline workflows for multi-modal biosignal processing, brain-computer interfacing and machine learning, neuroimaging and brain connectivity analysis, closed-loop neurofeedback, and much more. I will be available after the talk for further discussions and hands-on demonstrations for anyone interested.
2018-01-25
A computationally-assisted approach to electrophysiology: improving neural recordings and stimulation using biophysical simulations
Alessio Buccino University of Oslo/UCSD
In recent years, the international neuroscience community has made a massive effort to build realistic neuronal models from experimental data (e.g., the Blue Brain Project and the Allen Institute for Brain Science). In this talk, I will present some approaches that use these detailed cell models to improve the current electrophysiology pipeline, moving toward a computationally assisted electrophysiology. For example, simulations can provide ground-truth data that can be used to train deep learning algorithms to extract information from recordings, to test spike sorting algorithms, or to optimize electrical stimulation patterns. I will also cover some of the limitations of current modeling techniques and how to overcome them.
2018-01-18
Neuromorphic Event-Driven Multi-Scale Synaptic Connectivity and Plasticity
Siddharth Joshi UC San Diego
Neural computation and communication in the brain are partitioned into the grey matter of dense local synaptic connectivity in tightly knit neuronal networks, and the white matter of sparse long-range connectivity over axonal fiber bundles across distant brain regions. We demonstrate the trade-off between flexibility in representation and memory, illustrating the spectrum of available options. We present results for implementing local learning on large-scale neuromorphic synaptic arrays. We present bounds on communication between communicating arrays, introducing cost functions for both area and energy optimization.
2017-11-30
Toward understanding brain-behavior dynamics of social interaction in children and adults
Gedeon Deák UC San Diego
+ moreThe "social brain," a hot topic for over a decade, is perhaps the most complex, contentious, and confusing topics within the cognitive and neurosciences. The number of claims about the nature of the developing social brain (i.e., the propensity for juvenile humans to acquire acceptable mature social phenotypes) wildly outstrips the data available to falsify those claims. The first part of this talk will summarize some current models and claims about the social brain and its development, and explain the most serious problem facing researchers: almost all research on "the social brain" is non-social. The second part of the talk will sketch recent social research addressing two major questions: (1) how cortical network states emerge to encode abstract representations of deliberate human actions; and (2) how cortical networks generate predictive representations of the outcomes of human actions (i.e., reward processing in social interactions). I will give examples from our preliminary studies using turn-taking games. Finally, I will describe a paradigm for future research: high density physiological and behavioral data collected in naturalistic social interactions.
2017-11-09
Exploring neural circuit based diagnostics and treatments in schizophrenia
Fiza Singh UCSD Medical Center
Currently, clinical diagnosis of psychiatric disorders relies primarily on patient report and, at times, observed behavior. While historically necessary, the specificity of such an approach is quite limited, and in direct conflict with current models of brain function. In an effort to develop neuroscientifically-validated psychiatric diagnostic criteria (e.g., biomarkers) and treatment targets (e.g., network abnormalities), the National Institute of Mental Health has proposed the Research Domain Criteria (RDoC), which aims to identify how 5 core brain processes (Negative Valence Systems, Positive Valence Systems, Cognitive Systems, Systems for Social Processes and Arousal/Modulatory Systems) are integrated neurobiologically (from neurons to circuits to behavior) and disturbed across psychiatric disorders.
Electroencephalography (EEG) is a well-tolerated, noninvasive, widely-available, cost-effective method for directly assessing and modulating brain activity. Furthermore, recent advances in EEG acquisition, analysis and neurofeedback allow us to assess and manipulate localized brain processes with greater specificity.
We are interested in developing novel RDoC-informed EEG methods for assessing and modulating brain abnormalities in patients with schizophrenia. In this talk, I will discuss results from our projects applying this approach to assessing Social Processes in patients with early psychosis (e.g., mu rhythm abnormalities), modulating schizophrenia-related mu rhythm abnormalities using oxytocin (a pro-social neuro-hormone) and improving working memory in patients with schizophrenia using gamma coherence EEG neurofeedback.
2017-10-26
Efficient hardware implementation of spiking networks
Venkat Rangan Qualcomm
Implementing spiking networks with learning has been a subject of active interest within academia and industry. This talk will go over the approaches in the following granted patents that discuss how to put together a spiking network simulator that efficiently uses current hardware. Topics covered will include efficient DRAM usage, the communications required to hold a distributed system together, and STDP learning. A toy sketch of a standard pair-based STDP window is shown after the patent list below.
9542643: Efficient hardware implementation of spiking networks
9373074: Method and apparatus for time management and scheduling for synchronous processing on a cluster of processing nodes
9330355: Computed synapses for neuromorphic systems
8606732: Methods and systems for reward-modulated spike-timing-dependent-plasticity
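As a point of reference for the STDP discussion, here is a minimal Python sketch of a standard pair-based STDP window; the time constants and amplitudes are illustrative and are not taken from the patents above.

# Minimal sketch of a pair-based STDP window: potentiation when the
# presynaptic spike precedes the postsynaptic spike, depression otherwise.
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    dt = np.asarray(dt_ms, dtype=float)
    potentiation = a_plus * np.exp(-dt / tau_plus) * (dt > 0)     # pre before post
    depression = -a_minus * np.exp(dt / tau_minus) * (dt < 0)     # post before pre
    return potentiation + depression

print(stdp_dw([-40, -10, 10, 40]))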
2017-10-19
Mobile Brain/Body imaging of dual-task walking in aging
Philip de Sanctis Albert Einstein College of Medicine, New York
+ moreMobility stress tests such as dual-task walking are particularly suited to unmask subtle gait changes in adults aged 65 and older. This is important, as quantitative gait markers are independent predictors of negative outcomes such as falls and cognitive decline. Further, structural neuroimaging highlights cortical contributions by linking gait variability in aging to atrophy in medial areas important for lower limb coordination and balance. Advancing our knowledge of the electro-cortical underpinnings of such complex cognitive-motor behavior will provide relevant clinical insight. Hence, our goal is to further EEG-based Mobile Brain/Body Imaging (MoBI) as a clinical research tool to determine changes in cognitive, sensory, and motor coupling with advanced age and neurological disorders such as multiple sclerosis. We employ a 3D infra-red camera system to monitor gait and posture during ambulation while high-density electrophysiology is simultaneously recorded. Concurrently, we vary sensory load by manipulating full-field optical flow stimulation as well as task-load, as participants will also perform cognitive tasks while walking in this environment. I will provide an overview of our work addressing the test/retest reliability of EEG signals while walking, the use of event-related potentials to map age differences, and power spectral density of localized ICA sources to assess cortical network activity during dual-task walking. In addition, I will present ongoing efforts to determine electro-cortical signals associated with increased gait variability in aging. We believe MoBI will provide new insights to enhance the mobility and quality of life of older individuals.
2017-10-12
Gradient descent for spiking neural networks
Ben Dongsung Huh The Salk Institute for Biological Studies
Most studies of neural computation are based on network models of static neurons that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking networks. Here, we present a gradient descent method for optimizing spiking network models by introducing a differentiable formulation of spiking networks and deriving the exact gradient calculation. For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (~ millisecond) spike-based interactions for efficient encoding of information, and a delayed-memory XOR task over an extended duration (~ second). The results show that our method indeed optimizes the spiking network dynamics on the time scale of individual spikes as well as behavioral time scales. In conclusion, our result offers a general-purpose supervised learning algorithm for spiking neural networks, thus advancing further investigations on spike-based computation.
2017-10-05
Multiscale modeling of neurodegenerative disease dynamics
Sharmila Venugopal UCLA
Amyotrophic Lateral Sclerosis (ALS), also known as Lou Gehrig's disease, is an adult-onset progressive neurodegenerative motor neuron disease in which a significant proportion of brain and spinal motor neurons degenerate, leading to inevitable fatality. Interestingly, not all motor neurons are equally vulnerable to degeneration; indeed, a select population is resistant to the disease, and even within a motor pool some motor neurons survive until the end stage. My research seeks to understand the neurobiological basis of this selective disease vulnerability using multidisciplinary approaches, to aid early biomarker and target discovery. The key dynamic changes in the neurobiological substrates involved in the disease process are integrated to develop predictive computational models. Currently, our neurobiological investigation is focusing on identifying cellular and molecular alterations in brainstem motor neurons, comparing disease-vulnerable and resistant populations. Our working hypothesis is that disease vulnerability is a consequence of abnormal intrinsic properties and synaptic interactions that significantly modify the circuit homeostasis and contribute to motor neuron vulnerability. In conjunction with experiments, we are developing multiscale models of realistic neural networks as an attractive choice to test our circuit-based hypothesis. We have also begun utilizing a closed-loop dynamic-clamp approach to test model predictions. Our recent results have provided crucial insight into early disease mechanisms in ALS. New research directions seek to establish a multidisciplinary collaboration amongst basic and clinical scientists, to begin a systematic biomarker search using patient-derived induced pluripotent stem-cell based motor neurons, systems biology, computational modeling, cellular electrophysiology and neural engineering strategies.
2017-07-06
Traveling waves across multiple scales synchronized to the rhythmic production of speech
Joaquin Rapela UC San Diego
The recent introduction of multichannel recording techniques has made it possible to examine neural dynamics of single cortical areas. Using these methods, traveling waves (TWs) have been reported in anesthetized (e.g., Benucci et al., 2007) and awake (e.g., Rubino et al., 2006) non-human animals, and more recently in humans during sleep (Muller et al., 2016). In this talk I will describe previously unreported TWs from a human subject rhythmically producing consonant-vowel syllables (CVSs) while we perform high-density electrocorticography recordings of his brain activity.
I will show that these TWs are precisely synchronized (in dynamical systems terms) to produced CVSs. This synchronization is observed in both TWs at the frequency of CVS production and in TWs at higher harmonics (c.f., Arnold tongues), suggesting that the observed TWs are not a trivial consequence of the rhythmic production of CVSs.
Our recordings show a strong coupling between phases at the slow frequency of CVS production and amplitudes in the high-gamma range. This coupling displays a peculiar spatial organization, which generates TWs of coupled high-gamma amplitude. That is, our recordings contain TWs at multiple scales: TWs in voltages filtered around the slow frequency of speech production and TWs in coupled high-gamma amplitude. I will demonstrate extended TWs of the former type traveling from primary to premotor cortex and TWs of the latter type traveling along the same path but in the reverse direction. These pairs of TWs might be a neural mechanism for the coordination between the control of vocal articulators in the premotor cortex and the perception of self-produced speech in the primary auditory cortex.
From an engineering standpoint, could these TWs be useful? I will present preliminary evidence on the consistency of TWs across repetitions of the same CVS, suggesting that TWs could be used for decoding intended speech from cortical activity.
Preprints related to this talk can be found in Rapela (2016) and in Rapela (2017).
References
Andrea Benucci, Robert A Frazor, and Matteo Carandini. Standing waves and traveling waves distinguish two circuits in visual cortex. Neuron, 55(1):103–117, 2007.
Lyle Muller, Giovanni Piantoni, Dominik Koller, Sydney S Cash, Eric Halgren, and Terrence J Sejnowski. Rotating waves during human sleep spindles organize global patterns of activity that repeat precisely through the night. eLife, 5:e17267, 2016.
Joaquín Rapela. Rhythmic production of consonant-vowel syllables synchronizes traveling waves in speech-processing brain regions, 2017. URL https://arxiv.org/abs/1705.01615.
Joaquín Rapela. Entrainment of traveling waves to rhythmic motor acts, 2016. URL http://arxiv.org/abs/1606.02372.
Doug Rubino, Kay A. Robbins, and Nicholas G. Hatsopoulos. Propagating waves mediate information transfer in the motor cortex. Nature neuroscience, 9(12):1549–1557, 2006.
2017-06-22
How the brain got language: Challenges for computational cognitive neuroscience
Michael Arbib UC San Diego
The Mirror System Hypothesis (MSH) for how the brain got language charts a course from a mirror system for manual action in LCA-m (Last Common Ancestor of humans and monkeys; informed by data on present-day monkeys) via simple imitation and manual gesture in LCA-c (informed by data on chimpanzees and other great apes) and thence via complex imitation, pantomime, protosign and protospeech to a "language-ready brain" in Homo sapiens, setting the stage for cultural evolution to yield the emergence of language. However, rather than assess the data for and against this account of how the brain got language, the focus of the talk will be on current and possible future models in computational cognitive neuroscience that may aid the quest to refine MSH or replace it with something better.
2017-06-22
Neural Evidence of the Cerebellum as a State Predictor
Hirokazu Tanaka Japan Advanced Institute of Science and Technology
This talk provides neural evidence that the cerebellar circuit can predict future inputs from present outputs, a hallmark of an internal forward model. Evidence from clinical observations and psychophysical experiments indicates that impairments of the cerebellum lead to motor ataxia characterized by incoordination and dysmetria in multi-joint movements. Still, the precise mechanisms by which the cerebellum coordinates body movements are not yet understood. Recent computational studies hypothesize that the cerebellum performs state prediction, known as a forward model. I analyzed firing rates of mossy fibers (inputs to the cerebellar cortex), Purkinje cells (output from the cerebellar cortex to the dentate nucleus), and dentate nucleus cells (cerebellar output), all recorded from a monkey performing wrist tracking movements. To test the forward-model hypothesis, I then investigated whether the current outputs of the cerebellum (dentate cells) could predict the future inputs of the cerebellum (mossy fibers). The firing rates of mossy fibers at time t+t1 were well reconstructed as a weighted sum of firing rates of dentate cells at time t, thereby proving that the dentate activities contained predictive information about the future inputs. The linear equations derived from the firing rates resembled those of a predictor known as the Kalman filter, composed of prediction and filtering steps. This analogy leads to the speculation that the Purkinje and the dentate cells perform the prediction and the filtering steps, respectively. In summary, my analysis of cerebellar activities supports the forward-model hypothesis of the cerebellum.
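A toy version of the linear analysis described above can be sketched in a few lines of Python: synthetic "dentate" rates at time t are used to reconstruct synthetic "mossy fiber" rates at time t + lag by ridge-regularized least squares. The data, lag, and regularization are invented for illustration and do not reproduce the recordings from the talk.

# Toy sketch: predict "future inputs" (mossy-fiber-like rates at t + lag)
# as a weighted sum of "current outputs" (dentate-like rates at t).
import numpy as np

rng = np.random.default_rng(2)
n_t, n_dentate, n_mossy, lag = 2000, 30, 20, 5

dentate = rng.normal(size=(n_t, n_dentate))
true_weights = rng.normal(size=(n_dentate, n_mossy))
mossy = np.zeros((n_t, n_mossy))
mossy[lag:] = dentate[:-lag] @ true_weights        # future inputs depend on current outputs
mossy += 0.3 * rng.normal(size=(n_t, n_mossy))

X = dentate[:-lag]            # dentate rates at time t
Y = mossy[lag:]               # mossy-fiber rates at time t + lag

# Ridge-regularized least squares: W = (X^T X + lambda I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_dentate), X.T @ Y)
r = np.corrcoef((X @ W).ravel(), Y.ravel())[0, 1]
print("prediction correlation:", round(float(r), 3))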
2017-06-15
Sparse Coding, Dimensionality Reduction, and Synaptic Plasticity: Evolving and Validating Biologically Realistic Models
Jeff Krichmar UC Irvine
+ moreWe have developed novel methodologies for the evolution and evaluation of spiking neural networks. This series of studies involved the use of GPU-accelerated, parallelized evolutionary algorithms. The project was intended to aid collaboration efforts between theoretical and experimental neuroscientists, who often spend tremendous time and money developing experiments that may not provide useful results. It was also intended to develop a veridical way of modeling neural systems by matching experimentally observed neurophysiological data. The networks evolve such that higher-order features of the region, such as functional behavior and population coding, emerge by virtue of replicated firing patterns. We developed an automated tuning framework and applied it to a case study using a dataset recorded from rat retrosplenial cortex (RSC). The framework successfully takes as input the recorded behavioral metrics associated with neuronal firing patterns which are encoded by idealized input neurons and evolves spike timing dependent plasticity parameters to create a spiking neural network that matches the experimentally observed data. Using the framework, novel experimental designs can be simulated and model response patterns can be recorded. By simulating experiments such as lesioning of the network and manipulation of behavioral inputs, new predictions can be made about the function of the brain region, and new experiments to probe that function can be designed without expending unnecessary time and effort on the part of experimentalists. To show how this might work, we link spike-timing dependent plasticity to dimensionality reduction in the brain by applying a statistical algorithm known as nonnegative matrix factorization (NMF) to the same dataset. We show that similar results, and a similar model of RSC functionality, can be achieved simply through nonnegative and parts-based dimensionality reduction, and propose that nonnegative sparse coding may be a canonical computation performed by plasticity rules in the brain to handle high-dimensional input spaces.
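As a hedged illustration of the dimensionality-reduction step mentioned above (not the RSC analysis itself), the following Python sketch applies scikit-learn's NMF to a synthetic nonnegative firing-rate matrix and recovers parts-based components.

# Minimal sketch of nonnegative matrix factorization on a synthetic
# firing-rate matrix (neurons x time bins); stands in for the RSC dataset.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
n_neurons, n_bins, n_parts = 40, 500, 4
parts = rng.gamma(2.0, 1.0, size=(n_neurons, n_parts))        # nonnegative "parts"
activations = rng.gamma(2.0, 1.0, size=(n_parts, n_bins))
rates = parts @ activations + 0.1 * rng.gamma(1.0, 1.0, size=(n_neurons, n_bins))

model = NMF(n_components=n_parts, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(rates)     # neurons x components (parts-based features)
H = model.components_              # components x time bins (activations)
print("reconstruction error:", round(model.reconstruction_err_, 2))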
2017-06-01
New Generations of ANNs for Conscious and Creative Robots
Vladimir Gontar Ben-Gurion University of the Negev
We present a new mathematical model for multiple artificial neural networks (ANN) based on physicochemical principles and laws of nature. This mathematical model was initially formulated for the dynamics of living and thinking systems and called biochemical reactions discrete chaotic dynamics (BRDCD) [1].
In this work we will demonstrate that the BRDCD within individual neurons is accompanied and controlled by an "information exchange" within and between the brain neurons that compose multiple neural networks. We intend to show that the BRDCD of these multiple neural networks is responsible for the brain's various cognitive functions. The qualitative and quantitative meaning of "information" and "information exchange" between neurons and between different neural networks has been formulated in relation to a neuron's chaotic states and formally introduced into the basic mathematical equations [2]. As will be shown in this work, the proposed ANN not only extends fundamental physicochemical principles to address questions about the driving forces of living and thinking systems, but also makes it possible to simulate specific properties of living systems and the brain, such as "self-organization" and "self-synchronization", and the emergence and support of living states and "thoughts", among other specific features. These all result from the emergence of "phenomenological" states in the form of complex patterns (discrete time-space distributions of the biochemical constituents composing brain neurons within the neural networks), which we associate with brain consciousness, cognition and creativity. The proposed ANN generates a practically unlimited variety of discrete time and space creative patterns, which are controlled by the continuous parameters of the mathematical models. This gives us confidence that, after a specific learning process, we can construct the proper ANN configuration and mathematical model for the desired conscious, creative and intelligent behavior of an artificial system directed at the rational solution of various problems.
Results of numerical simulations will be presented in the form of creative 2D and 3D dynamical discrete time-space distributed patterns. Application of the artificial brain system to autonomous, conscious, creative and rational robot path planning will be presented and discussed in this talk.
[1] V. Gontar, Entropy as a driving force for complex and living systems dynamics, Chaos, Solitons & Fractals, 11, 2000, pp.231-236
[2] V. Gontar, Artificial brain systems based on neural network discrete chaotic dynamics. Toward the development of conscious and rational robots, in book, R. Mittu, D.Sofge, A. Wagner(eds), Robust Intelligence and Trust in Autonomous Systems, chapter 6, Springer, 2016, pp.97-115.
2017-05-18
Rethinking computing: the central role of thermodynamics
Todd Hylton, UC San Diego
+ moreThermodynamic concepts intimately pervade all of science and engineering, yet in computing today they appear only as an “engineering constraint” to an overarching computing and informational paradigm. In this talk I examine the challenges in the current computing paradigm and propose a radical rethinking of computing in which thermodynamics plays the central role. I will also connect ideas in thermodynamics to those in machine learning and biology.
2017-04-27
Nonlinear Dynamics of Human Cognition
Mikhail Rabinovich BioCircuits Institute, UC San Diego
+ moreIn this talk we discuss a novel paradigm for the mathematical description of mental functions such as consciousness, creativity, decision making and prediction of the future based on the past. Such cognitive functions are described in the framework of canonical nonlinear dynamical models that form joint global hierarchical networks. Sub-networks cooperate and compete with each other by inhibition. The suggested approach uses heteroclinic dynamics to represent transitivity and sequential interaction of different cognitive modalities at all levels of network hierarchy. We build a model of global network dynamics based on a set of kinetic ecological equations describing the interaction with emotion at each level of the hierarchy. This makes the model applicable for the description and understanding of perception, creativity and other complex cognitive processes. We discuss the creativity phenomenon, for example, in a joint "human-robot mind" considering the approximation in which the artificial partner is responsible for the binding and retrieving of multimodal perception information. The formation of chunks and the creation of working memory is a joint effort – human-robot mind. The human mind is responsible for the evaluation of the information in working memory. Creativity is estimated by values of positive Lyapunov exponents. As an example, we discuss joint human-robot musical improvisation, which can be generalized for many applications, in particular, in the context of artificial intelligence applications and also to address several psychiatric disorders.
2017-03-23
Reduced-memory deep residual networks for image classification using stochastic quantization
Mark McDonnell University of South Australia
Motivated by the goal of enabling more efficient learning in deep neural networks, we describe a method for modifying the backpropagation algorithm that significantly reduces the memory usage during the training phase. The method is inspired by recent work on seeking neurobiological correlates of backpropagation-based learning that calculate gradients imprecisely. Specifically, our method introduces stochastic binarization of hidden-unit activations for use in the backward pass, after they are no longer used in the forward pass. We show that without stochastic binarization the method is far less effective. We trained wide residual networks with 20 weight layers on the CIFAR-10 and CIFAR-100 image classification benchmarks, achieving error rates of 5.43% and 23.01%, respectively. These error rates compare with 4.53% and 20.51% for the same network trained without stochastic binarization. Moreover, we also investigated learning binary weights in deep residual networks and demonstrate, for the first time, that networks using binary weights at test time can perform equally to full-precision networks on CIFAR-10, with both achieving ~4.5%. On ImageNet, we are still experimenting, but to date our binary-weights method at test time had a top-5 error rate of 20%.
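A minimal sketch of the central trick, under my reading of the abstract: hidden-unit activations are replaced by stochastic binary samples before the backward pass. The function below is illustrative Python, not the authors' training code.

# Sketch of stochastic binarization: after the forward pass, hidden activations
# in [0, 1] are replaced by Bernoulli samples for use in the backward pass.
import numpy as np

rng = np.random.default_rng(4)

def stochastic_binarize(activations):
    """Sample {0, 1} values with probability given by activations in [0, 1]."""
    p = np.clip(activations, 0.0, 1.0)
    return (rng.random(p.shape) < p).astype(p.dtype)

hidden = np.array([0.1, 0.5, 0.9, 0.3])
print(stochastic_binarize(hidden))         # e.g. [0. 1. 1. 0.] on one draw
# The binary samples, rather than the real-valued activations, are then used
# when computing the backpropagated gradient for the layer below.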
2017-03-09
Neuromorphic Deep Learning Machines
Emre Neftci UC Irvine
+ moreAn ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory, and precise operations that are difficult to realize in neuromorphic hardware.
Remarkably, recent work showed that exact backpropagated weights are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. The rule requires only one addition and two comparisons for each synaptic weight using a two-compartment leaky Integrate & Fire (I&F) neuron, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving nearly identical classification accuracies on permutation invariant datasets compared to artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
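The random-feedback idea that eRBP builds on can be illustrated with a small rate-based Python sketch (no spikes, no two-compartment I&F neuron, and not the authors' implementation): the output error is propagated to the hidden layer through a fixed random matrix B instead of the transposed forward weights.

# Toy "random backpropagation" (feedback alignment) on a synthetic regression
# task: hidden-layer updates use a fixed random feedback matrix B.
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hid, n_out, lr = 20, 50, 5, 0.05

W1 = 0.1 * rng.normal(size=(n_in, n_hid))
W2 = 0.1 * rng.normal(size=(n_hid, n_out))
B = rng.normal(size=(n_out, n_hid))          # fixed random feedback weights
T = rng.normal(size=(n_in, n_out))           # hypothetical teacher defining targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    x = rng.normal(size=(1, n_in))
    target = sigmoid(x @ T)

    h = sigmoid(x @ W1)                      # forward pass
    y = sigmoid(h @ W2)

    err = y - target                         # output error
    delta_out = err * y * (1 - y)
    delta_hid = (delta_out @ B) * h * (1 - h)    # random feedback instead of W2.T

    W2 -= lr * h.T @ delta_out
    W1 -= lr * x.T @ delta_hid

print("final squared error:", float((err ** 2).sum()))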
2017-03-02
Towards Autonomous Surgery Delivered by Expert Robots
Michael Yip UC San Diego
Surgical robotics offers an unprecedented ability to place and dexterously control small robotic instruments, immersive stereo imaging and other sensing modalities deep within inaccessible locations in the body. This presents major opportunities in the medical domain to treat diseases (e.g., cardiac arrhythmia, lung cancer, colon cancer) in a minimally invasive fashion. Yet, as these devices get smaller, more flexible and more mechanically complex, we are presented with a new challenge: do we rely on the doctor to sort out the challenging control of the devices while simultaneously processing the multi-modal biosignals from onboard sensing? Or do we off-load the low-level control of the surgery from human teleoperation onto a semi-autonomous or fully-autonomous framework? I will discuss our work in developing robot-assisted surgeries that analyze a multimodal spectrum of sensory information, physics models, and imaging information in real-time to optimally plan and perform semi-autonomous surgery. This includes real-time learning-based controllers for automating catheter and endoscopic robots within difficult anatomy, modular snake-like devices for efficient locomotion in difficult environments, visual computation methods for image-guided robotics, and robot intelligence for robot-human teams. Finally, I will discuss directions we aim to pursue in reinforcement learning such that, with limited self-training, our robot-assistive devices learn to become expert robot surgeons.
2016-12-01
Design of a heterogeneous neural network accelerator ASIC
Douglas A. Palmer KnuEdge Inc.
In an effort to accelerate large-scale, sparse, heterogeneous neural network modeling, a dedicated ASIC was designed, produced, and tested. The resulting device, a joint effort between Calit2 and KnuEdge Inc., is a router-based, cloud-on-a-chip, 256-core MPMD (Multiple-Program, Multiple-Data) machine that scales to 512K devices. Latency between devices is less than 400 ns, and random-addressing benchmark performance (GUPS) exceeds 1 billion updates per second. Performance testing has shown that it is many times faster than existing CPU and GPU architectures for scatter/gather operations such as K-means clustering, FFTs, and heterogeneous sparse neural network models.
Bio:
Dr. Palmer specializes in unconventional signal processing. He holds over a dozen U.S. patents and has founded or participated in the startup of many companies. He spent 8 years at the Stanford Linear Accelerator Center, then held positions at Linkabit Corp. and Western Research Corporation, became R&D Director at Hecht-Nielsen Neurocomputer, and then moved on to ThermoTrex, a subsidiary of ThermoElectron. In 1998 Dr. Palmer cofounded Path1 Network Technologies, where he developed the world's first video-over-IP systems. In 2002 he joined Calit2 at UCSD. He has been working with KnuEdge Inc. since 2006. Dr. Palmer received his MPhil and Ph.D. in High Energy Physics from Yale University after earning his B.A. in physics from UCSD Revelle College.
2016-11-17
Multistable Winner-Takes-All neural networks with NMDARs and feedback inhibition
Patrick Shoemaker Computational Science Research Center, SDSU
+ moreAs a result of magnesium blockade, the macroscopic current-voltage relation of ion channels associated with the NMDA class of glutamatergic receptors is nonmonotonic. In conjunction with other membrane conductances, this feature can give rise to bi- and multi-stable dynamical regimes in neurons that have NMDA receptors. I describe a very simple neuronal network that displays winner-takes-all behavior as a consequence of this property. I first discuss the properties of this network under stationary or quasistatic conditions, and then proceed to consider dynamics, in particular network stability.
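A quick way to see the nonmonotonic current-voltage relation described above is to multiply the commonly used Jahr-Stevens form of the magnesium block by the ohmic driving force; the Python sketch below uses standard textbook constants purely for illustration.

# Sketch of why the NMDA current-voltage relation is nonmonotonic: the
# voltage-dependent magnesium block (Jahr & Stevens, 1990) times (V - E_rev).
import numpy as np

def nmda_current(v_mv, g_max=1.0, e_rev=0.0, mg_mM=1.0):
    mg_block = 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mv))
    return g_max * mg_block * (v_mv - e_rev)

v = np.linspace(-90, 40, 14)
i = nmda_current(v)
# The region of negative slope at hyperpolarized potentials is what supports
# bi- and multi-stability in neurons expressing NMDA receptors.
print(np.round(i, 3))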
Bio:
Pat Shoemaker received the Ph.D. degree in Bioengineering from UCSD in 1984. He has a longstanding interest in neural information processing and bio-inspired systems. From 1984 to 1999 he was with the Space and Naval Warfare Systems Center, where he worked among other things on hardware implementations of artificial neural networks. From 1999 to 2015 he was with Tanner Research, Inc., where he focused on bio-inspired systems and developed a growing interest in natural neural networks. Since the early 2000s he has collaborated with several neurobiologists on studies of visual processing in insects. He is currently a Research Associate Professor at the Computational Science Research Center at SDSU.
2016-11-03
Rhythm in speech, music and movement: towards a common analytical framework for temporal structure
Andrea Ravignani Vrije Universiteit Brussel
+ moreBehavioural research on the temporal properties of speech, music and movement often requires quantification of rhythmic structure. However, different research traditions investigating rhythmic behaviours have different methodologies, hindering comparability. Here, I present a suite of analytical tools to quantify rhythmic patterns across behaviours and domains. In particular, I focus on meaningful interpretation of simple techniques borrowed across disciplines, such as the normalised pairwise variability index, phase space plots, auto-regressive time series, and Granger causality. For each technique, I show its application to speech and music corpora, human psychological experiments, or chimpanzee behaviour.
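As one concrete example of the simple techniques mentioned above, here is a minimal Python implementation of the normalised pairwise variability index (nPVI) computed over a sequence of inter-onset durations; the example duration sequences are invented.

# Normalised pairwise variability index (nPVI) over a duration sequence.
import numpy as np

def npvi(durations):
    d = np.asarray(durations, dtype=float)
    pairs = np.abs(d[:-1] - d[1:]) / ((d[:-1] + d[1:]) / 2.0)
    return 100.0 * pairs.mean()

print(npvi([250, 250, 250, 250]))     # perfectly isochronous -> 0.0
print(npvi([300, 150, 300, 150]))     # alternating long-short -> higher value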
2016-10-20
New processor architecture for machine learning
Amir Khosrowshahi Nervana, https://www.nervanasys.com
Nervana is a San Diego-based startup providing a cloud platform for deep learning as a service. Deep learning is now state-of-the-art in a wide variety of domains including speech, images, and text, and is being quickly adopted in industry. Nervana's core technology is a novel distributed processor architecture for deep learning which aims to improve speed, scalability, and efficiency by an order of magnitude over the current state-of-the-art. I will present our work in the context of a variety of promising efforts to build new hardware for advancing computation.
Bio:
Amir Khosrowshahi is co-founder and CTO of Nervana. He studied computational neuroscience at Berkeley and physics and math at Harvard. Nervana was recently acquired by Intel where Amir is now VP of machine learning solutions in its data center group.
2016-10-06
Rhythmic activity drives efficient search for maximally consistent states in neural networks and neuromorphic chips
Hesham Mostafa Integrated System Neuroengineering Lab, UC San Diego
+ moreHumans and animals display a remarkable ability for constructing a rich and consistent interpretation of the surrounding environment based on imperfect and incomplete sensory inputs. This is a challenging problem that can be formulated as finding a configuration of variables that maximally satisfies a set of constraints encoding a model of the environment, while being consistent with the observed sensory input. We show that this problem can be efficiently solved using simple coupled attractor networks if these networks include a basic model of Gamma-band oscillations. By dynamically modulating the effective network connectivity, neuronal rhythms allow simple networks to collectively and efficiently search for maximally consistent configurations. We show that these rhythms give rise to network behavior that is functionally very similar to that of stochastic networks, providing an alternative framework for modeling probabilistic reasoning in the brain.
Since the oscillatory networks can efficiently solve difficult constraint satisfaction problems (CSPs), we developed a neuromorphic VLSI chip that captures the salient features of these networks and used the chip to solve Boolean satisfiability (SAT) and graph coloring problems. Empirically, we have shown that in the case of SAT problems, the search implemented by interacting oscillatory elements is as efficient as state of the art stochastic search algorithms. Our results highlight the benefits and pitfalls involved in taking neural dynamics in the brain as a source of inspiration for building physically realizable, non von-Neumann computing models, and they establish an unexpected and fundamental link between CSPs and the behavior of simple oscillatory systems.
2016-09-22
Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network
Filip Piekniewski Brain Corp.
2016-09-06
2016-05-05
Ulysses Bernardet Simon Fraser University, Surrey https://sites.google.com/site/bernuly
+ moreAt each moment in time an animal is faced with a myriad of behavioral options; why does an animal initiate and persist in certain behaviors as opposed to others? Thematically this question of action selection and behavior regulation stands at the core of much of my past and present research. I will begin by presenting work on systems theory and neurobiology based models of social motivation and behavior regulation in insects, respectively. This will be followed by presenting current work that uses autonomous virtual characters to develop and test psychologically grounded models of nonverbal behavior. These models include the regulation of spatial behavior in a social setting, and work on a reflexive behavior architecture for virtual humans.
2016-04-21
Our Brain Oscillations Follow Our Motor Rhythms
Joaquin Rapela Swartz Center for Computational Neuroscience, INC, UC San Diego http://sccn.ucsd.edu
A remarkable early observation on brain dynamics (Adrian and Matthews, 1934) is that when humans are exposed to rhythmic stimulation their brain oscillations can follow this rhythm. More recently, it has been found that attention can adjust the way in which oscillations follow periodic stimulation, in such a way that neurons are in a state of maximal excitability when an attended stimulus is expected to occur (Lakatos et al., 2008). Using what are today the neural recordings with the highest spatial resolution, directly from the cortical surface of humans (ECoG grid with 4 mm interelectrode separation; Bouchard et al., 2013), covering most speech production and perception brain regions, I will describe a recent finding in this fascinating field of brain rhythms: when we speak in a rhythmic fashion, our brain oscillations follow our speech rhythm. Evidence for this finding comes from the alignment of the phases of brain oscillations at behaviorally relevant time points (highlighting the role of phase coherence in understanding the neural code; Makeig et al., 2002), from the coupling between low-frequency brain oscillations related to behavior and high-frequency oscillations related to neural spiking (phase-amplitude coupling; Canolty et al., 2006), and from the detection of traveling waves confined to the brain region that controls the vocal articulators (Rubino et al., 2006). A toy sketch of the phase-amplitude coupling measure appears after the reference list below. This research is still in its early stages, but it is worth sharing with the UCSD community.
Adrian ED, Matthews BH. The interpretation of potential waves in the cortex. J Physiol. 1934 Jul 31;81(4):440-71.
Bouchard KE, Mesgarani N, Johnson K, Chang EF. Functional organization of human sensorimotor cortex for speech articulation. Nature. 2013 Mar 21;495(7441):327-32.
Canolty RT, Edwards E, Dalal SS, Soltani M, Nagarajan SS, Kirsch HE, Berger MS, Barbaro NM, Knight RT. High gamma power is phase-locked to theta oscillations in human neocortex. Science. 2006 Sep 15;313(5793):1626-8.
Lakatos P, Karmos G, Mehta AD, Ulbert I, Schroeder CE. Entrainment of neuronal oscillations as a mechanism of attentional selection. Science. 2008 Apr 4;320(5872):110-3.
Makeig S, Westerfield M, Jung TP, Enghoff S, Townsend J, Courchesne E, Sejnowski TJ. Dynamic brain sources of visual evoked responses. Science. 2002 Jan 25;295(5555):690-4.
Rubino D, Robbins KA, Hatsopoulos NG. Propagating waves mediate information transfer in the motor cortex. Nat Neurosci. 2006 Dec;9(12):1549-57.
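The phase-amplitude coupling measure cited above (Canolty et al., 2006) can be sketched as follows: band-pass the signal, take low-frequency phase and high-frequency amplitude from the Hilbert transform, and measure the magnitude of their mean composite vector. The Python below uses a synthetic signal and illustrative filter bands, not the ECoG data from the talk.

# Toy phase-amplitude coupling (modulation index) on a synthetic signal in
# which 80 Hz amplitude is modulated by the phase of a 4 Hz rhythm.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(6)

slow = np.sin(2 * np.pi * 4 * t)
fast = (1 + 0.8 * slow) * np.sin(2 * np.pi * 80 * t)
signal = slow + 0.3 * fast + 0.2 * rng.normal(size=t.size)

def bandpass(x, lo, hi):
    sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

phase = np.angle(hilbert(bandpass(signal, 3, 5)))       # low-frequency phase
amplitude = np.abs(hilbert(bandpass(signal, 70, 90)))   # high-gamma amplitude
modulation_index = np.abs(np.mean(amplitude * np.exp(1j * phase)))
print("modulation index:", round(float(modulation_index), 4))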
2016-04-14
Multichannel recordings in neuroscience: methods for spatiotemporal dynamics
Lyle Muller
+ moreMultichannel recording techniques in neuroscience have recently come of age. From dense multielectrode arrays to large-scale optical imaging techniques, novel recording technologies can now capture the fast dynamics of active cortical circuits in vivo. These technologies present the opportunity to probe the spatiotemporal dynamics of cortical circuits across a wide range of network states, from active sensation to the internally generated oscillations of sleep.
Concomitant with the rise of these technologies, however, is the need for novel and precise computational methods that can see through recording noise and capture the full complexity of cortical activity states. In recent work, we have introduced a non-parametric, phase-based method for detecting traveling waves in noisy multichannel data. This method requires no spatial smoothing, thus minimizing signal distortion and controlling false detections. Analysis of voltage-sensitive dye (VSD) imaging data from the visual cortex of the monkey with this method revealed that the population response to a small visual stimulus travels as a wave across the cortex, with a specific trial invariance. Extending this computational approach to more general spatiotemporal forms, we have now begun to study the large-scale structure of oscillations in electrocorticogram (ECoG) recordings of human cortex during sleep, where we find that a well-known sleep oscillation exhibits a specific, robust spatiotemporal pattern.
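As a rough illustration of the phase-based idea (not the authors' detection algorithm), the Python sketch below simulates a plane wave crossing a line of channels, extracts instantaneous phase with the Hilbert transform, and reads the propagation speed off the spatial phase gradient; all parameters are invented, and real data would normally be band-pass filtered first.

# Toy traveling-wave readout: a 10 Hz oscillation propagates across 16
# channels, and the phase gradient across space gives the wave speed.
import numpy as np
from scipy.signal import hilbert

fs, f_osc, speed = 1000.0, 10.0, 0.2          # sampling rate (Hz), wave frequency, speed (m/s)
positions = np.linspace(0, 0.02, 16)          # 16 channels over 2 cm
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(7)

# Each channel sees the oscillation delayed by (distance / speed), plus noise.
data = np.array([np.sin(2 * np.pi * f_osc * (t - x / speed)) for x in positions])
data += 0.2 * rng.normal(size=data.shape)

phases = np.angle(hilbert(data, axis=1))      # instantaneous phase per channel
phase_snapshot = np.unwrap(phases[:, 500])    # phase across space at one instant
slope = np.polyfit(positions, phase_snapshot, 1)[0]     # radians per meter
estimated_speed = 2 * np.pi * f_osc / abs(slope)
print("estimated wave speed (m/s):", round(float(estimated_speed), 3))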
2016-04-07
Towards Neuroadaptive Technology: Symmetrical Human‐Computer Interaction based on a cognitive user model generated by automatically probing the operator's mind
Thorsten O. Zander Team PhyPA, Biological Psychology and Neuroergonomics, TU Berlin, Germany
+ moreToday's human‐machine interaction is asymmetrical in the sense that (a) the operator has access to any and all details concerning the machine's internal state, while the machine only has access to the few commands explicitly communicated to it by the human, and (b) while the human user is capable of dealing with and working around errors and inconsistencies in the communication, the machine is not. With increasingly powerful machines this asymmetry has grown, but our interaction techniques have remained the same, presenting a clear communication bottleneck: users must still translate their high level concepts into machine‐mandated sequences of explicit commands, and only then does a machine act. During such asymmetrical interaction the human brain is continuously and automatically processing information concerning its internal and external context, including the environment the human is in and the events happening there. I will discuss how this information could be made available in real time and how it could be interpreted automatically by the machine to generate a model of its operator's cognition. This model then can serve as a predictor to estimate the operator's intentions, situational interpretations and emotions, enabling the machine to adapt to them. Such adaptations can even replace standard input, without any form of explicit communication from the operator. I will illustrate this approach by several brief examples. The above‐mentioned cognitive model can be refined continuously by giving agency to the technological system to probe its operator's mind for additional information. It could deliberately and iteratively elicit, and subsequently detect and decode cognitive responses to selected stimuli in a goal‐directed fashion. Effectively, the machine can pose a question directly to a person's brain and immediately receive an answer, potentially even without the person being aware of this happening. This cognitive probing allows for the generation of a more fine‐grained user model. It can be used to fully replace any direct input to the machine, establishing effective, goal‐oriented implicit control of a computer system. I will give a more detailed example showing the potential of this approach. These approaches fuse human and machine information processing, introduce fundamentally new notions of 'interaction', and allow completely new neuroadaptive technology to be developed. This technology bears specific relevance to auto‐adaptive experimental designs, but opens up paradigm shifting possibilities for human‐machine systems in general, addressing the issue of asymmetry and widening the above‐mentioned communication bottleneck.
2016-03-10
A neurobiological learning model inspired by deep learning, and its application to image classification
Mark D. McDonnell
In computer science, 'deep learning' approaches are at last realizing the decades-old theoretical potential of artificial neural networks (ANNs), now frequently achieving better-than-human performance on difficult pattern recognition tasks. When applied to classification and detection of objects in images, deep convolutional ANNs are used, and are often characterized as "biologically inspired." This is due to the hierarchy of layers of nonlinear processing units and pooling stages, and learnt spatial filters resembling simple and complex cells. An open challenge for computational neuroscience is to identify whether the spectacular performance of deep learning can be replicated in detailed models of cortical neurobiology that are constrained by known anatomy and physiology. Of particular importance is to identify neurobiologically-plausible learning rules that can produce performance equal to the backpropagation and stochastic gradient descent algorithms used as standard methods when training deep ANNs. Motivated by this goal, in this talk I will show mathematically how a standard cost function used for supervised training of ANNs can be decomposed into an unsupervised decorrelation stage and a supervised Hebbian-like stage (a toy numerical sketch of this decomposition follows the model outline below). Using the method to train a network with the MNIST handwritten digits image database results in classification of the MNIST test image set with less than a 1% error rate. This performance is comparable with state-of-the-art deep-learning algorithms applied to this well-known benchmark. Surprisingly, this result is achieved by relying on untrained random synaptic weights and/or convolutional filters in all network layers except the final one. In the remainder of the talk I will posit that the method is plausible as a neurobiological learning mechanism in recurrently-connected layer 2/3 and layer 4 cortical neurons. I will demonstrate this using a conceptual model that includes:
* nonlinear dendritic activation;
* anti-Hebbian plasticity at synapses on distal dendrites receiving lateral input from other principal cells;
* top-down modulation during learning;
* lateral inhibition enforcing winner-take-all effects to determine inference.
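Here is a toy numerical check of the decomposition claimed above, under my reading of it: once features are decorrelated (whitened), the least-squares readout coincides with a Hebbian-like correlation between features and targets. Synthetic data only; this is not the MNIST network from the talk.

# Toy check: after whitening (decorrelation), the least-squares readout equals
# a Hebbian-like correlation between whitened features and targets.
import numpy as np

rng = np.random.default_rng(8)
n, d, k = 1000, 30, 5
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))   # correlated features
T = rng.normal(size=(n, k))                              # targets

# Unsupervised stage: whitening (decorrelation) via eigendecomposition.
cov = (X.T @ X) / n
evals, evecs = np.linalg.eigh(cov)
whiten = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
Xw = X @ whiten                                           # now Xw.T @ Xw / n = I

# Supervised stage: Hebbian-like correlation of whitened features with targets.
W_hebb = Xw.T @ T / n

# Reference: ordinary least squares on the whitened features.
W_ls, *_ = np.linalg.lstsq(Xw, T, rcond=None)
print("max |Hebbian - least squares|:", float(np.abs(W_hebb - W_ls).max()))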
Biography:
A/Prof. Mark D. McDonnell received a PhD in electronic engineering and applied mathematics from The University of Adelaide, Australia, in 2006. He is currently Associate Research Professor at the University of South Australia, which he joined in 2007. He has been awarded two research fellowships by the Australian Research Council, from 2007-2009 and 2010-2014, and the South Australian Tall Poppy of Science award. McDonnell's research focuses on the use of computational and engineering methods to advance knowledge about the influence of noise and random variability in neurobiological computation. McDonnell has published over 80 refereed papers, including several review articles, and a book on stochastic resonance, published by Cambridge University Press. McDonnell is a member of the editorial board of PLoS One and Fluctuation and Noise Letters, and has served as a Guest Editor for Proceedings of the IEEE and Frontiers in Computational Neuroscience.
2016-03-03
Micro-movement statistics biomarkers may help diagnose and develop therapies for individuals with Autism Spectrum Disorders
Jorge Jose James H. Rudy Distinguished Professor of Physics
Condensed Matter Physics and Biophysics (Theoretical)
http://www.iub.edu/~iubphys/faculty/jjosev.shtml
Our daily movements are made of variable behaviors that can be studied at different time and length scales: for example, most people can easily achieve the simple task of reaching for a cup in front of them, but no two people will have exactly the same movements when we zoom in on their trajectories at millisecond time scales. Most current movement studies are mainly based on visual observations of performance in motor tasks, which may leave out important information at finer time scales, often considered as noise. Atypical behaviors are actually highly heterogeneous in people with neurological disorders, e.g., Autism Spectrum Disorders (ASD), Parkinson's disease and schizophrenia. This heterogeneity has particularly impeded developing efficient and quantitative biological diagnoses for these disorders when they are based only on human eye observations. There is thus a critical need to identify objective and data-driven biomarkers for these disorders as guides for basic biological research studies. The recent advent of high-resolution wearable sensing devices enables continuous motion recordings at millisecond time scales, away from detection by the naked eye. Using this technology, we asked whether we could extract information leading to quantitative biomarkers for these disorders based on natural movement studies. I will only discuss our results for ASD individuals. By studying in detail the statistics of natural human hand movements, we unraveled a new data type characterized by the smoothness levels of the speed kinematics. Our statistical analysis led to a parameter plane that provides an automatic screening of different ASD subjects, linking it, a posteriori, with their verbal speaking abilities. We also found different maturation paths in ASD compared to typically developing individuals. Unexpected similarities are also found between ASD parents and their progeny. Our studies are presently being used as part of a clinical trial testing for a genetically based type of autism.
2016-02-25
Applying Perceptual Learning Principles to Brain Training Games
Aaron Seitz Professor, Department of Psychology, and Director of the Brain Games Center
University of California, Riverside
http://faculty.ucr.edu/~aseitz/
Imagine if you could see better, hear better, have improved memory, and even become more intelligent through simple training done on your own computer, smartphone, or tablet. Current brain-training approaches are making these promises; however, the reality falls short of the potential. Here I discuss how research in the field of perceptual learning can be translated to potentially yield a new generation of brain-training approaches that are more effective and transfer to real-world activities. In the present research, we adopted an integrative approach where the goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual learning approaches that have individually contributed to increasing the speed, magnitude, and generality of learning into a perceptual-learning-based video game. Our results demonstrate broad-based benefits to vision in healthy adult and visually impaired populations. We find improvements in near and far central vision, peripheral acuity, and contrast sensitivity, as well as real-world on-field benefits in baseball players. This custom video-game framework, built up from psychophysical approaches, takes advantage of the benefits found from video-game training while maintaining a tight link to psychophysical designs that enable understanding of the mechanisms of perceptual learning; it has great potential both as a scientific tool and as a basis for future brain-training approaches.
2016-02-18
Computational Ethnography and Multimodal Sensing for Healthcare
Nadir Weibel CSE Department,
DesignLab, Center for Wireless and Population Health Systems, Calit2
The advent of new sensing modalities, from ubiquitous and mobile computing to big data, is opening up new avenues for better understanding human cognition and behavior. Technology such as depth cameras, eye tracking, or wearable sensing devices enables the tracking of people's activity in the real world, and online social media presence often reveals much of our day-to-day lives. While these new kinds of data promise to advance our knowledge in many domains, applying this technology to healthcare has the potential to affect the lives of many people, from single individuals to larger groups.
In this talk I will introduce our approach towards new methodologies for multimodal sensing and visualization of healthcare-related activity in the real world. I will introduce our Lab-in-a-Box infrastructure and show how the combination of a multimodal sensing infrastructure and a multimodal visualization tool allows us to understand real-world healthcare in different ways. I will discuss results from tracking activity in the medical office and introduce our initial work in the context of surgical ergonomics, stroke evaluation, and sign language analysis, including novel visualization approaches.
Bio
----------------------------
Dr. Nadir Weibel is a research faculty member in UC San Diego's CSE Department and a Research Health Science Specialist at the VA San Diego. His work spans computer science and engineering, cognitive science, and the health domain, and focuses on studying the impact of interactive technology on healthcare. As a member of the DesignLab (http://designlab.ucsd.edu) and the Center for Wireless and Population Health Systems (http://cwphs.ucsd.edu) at UCSD, he divides his time between developing novel methodologies to better understand behavior and activity in healthcare, and designing new prototypes and interactive technology at the intersection of Human-Computer Interaction and ubiquitous computing to better support patients, caregivers, and health professionals. His research is funded by the National Institutes of Health (NIH), the National Science Foundation (NSF), the Center for AIDS Research (CFAR), the Agency for Healthcare Research and Quality (AHRQ), as well as by UC San Diego internal funding and the Moxie Foundation.
2016-02-11
Can porn be addictive? The use of the Research Domain Criteria (RDoC) framework in studies of new psychological disorders
Mateusz Gola Swartz Center for Computational Neuroscience, UC San Diego, Institute of Psychology, Polish Academy of Sciences
2016-02-04
Engineering Superpowers: Leveraging Theoretical Neuroscience to Maximize Human Potential
Vivienne Ming Founder & Executive Chair
Socos https://www.socoslearning.com/
A wide-variety of societal problems can be framed as the challenge of connecting abstract, longer-term gains to highly local, individual decisions.
How can smartphone data across tens of thousands of individuals predict manic episodes in bipolar sufferers for prophylactic treatment? What should a recruiter look for in a candidate to optimize company-wide productivity over time? What can a parent do right now to maximize a child's health and educational outcomes?
In this talk, Dr. Ming will discuss a series of projects which apply theoretical neuroscience methodology to high-level problems in computational social science and are deployed in "the wild". Dr. Ming's goal is to maximize human potential by combining neuroscience, labor economics, machine learning, and product development.
2016-01-21
Training for Transfer: Opportunities and Challenges for Application in Schools
Zewelanji Serpell Associate Professor, Dept. of Psychology
Virginia Commonwealth University
http://www.psychology.vcu.edu/people/serpell.shtml
Recent advances in cognitive science support the view that cognitive skills, such as executive functions, are malleable in childhood and through adolescence. This talk presents findings from a set of studies testing the efficacy of one-on-one and computer-based cognitive training programs with adolescents in lab and school settings. Findings suggest some success in improving cognitive skills, particularly working memory. Training modality matters, however, and there is little evidence of far transfer to academic skills. The talk goes on to describe our efforts to develop more ecologically valid and culturally responsive methods to train African American elementary school students by applying cognitive training principles within a school-based chess program. To conclude, I discuss the challenges associated with achieving and measuring transfer of cognitive training gains to academic and behavioral domains that are meaningful to schools.
2016-01-14
Towards Pervasive and Real-World Neuroimaging and BCI
Tim Mullen Director, Qusp Labs (formerly Syntrogi Labs)
Co-Founder & CEO, Qusp
I will discuss and demonstrate recent efforts by our group towards evolving a new generation of real-world and pervasive brain-computer interface (BCI) and neuroimaging technology. I will discuss some of our recent research in this domain, including a recent collaboration between Qusp, Cognionics and INC developing a high-resolution dry mobile BCI system supporting real-time artifact rejection, imaging of distributed cortical network dynamics, and inference of cognitive state with a 64-channel dry-electrode wireless EEG headset. I will also briefly outline Qusp's vision of enabling easy integration of advanced bio-signal processing methods into diverse everyday applications. I will discuss and demonstrate applications of NeuroScale - a cloud-based software platform, providing continuous real-time interpretation of brain and body signals through an Internet API - as well as Neuropype - a Python-based graphical software environment for rapid design and deployment of pipelines for (real time) bio-signal processing and machine learning.
2015-11-19
Prospective optimization with limited resources
Joe Snider Institute for Neural Computation, UC San Diego
http://inc.ucsd.edu/~poizner/jsnider.html
The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. We designed a task in which humans select actions from an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touch screen at variable speeds. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching disks one at a time in a rapid sequence, forming an upward path across the grid. Every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of the natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. Comparisons of human behavior with the behavior of ideal actors revealed the strategies used by humans in terms of how far into the future they looked (their "depth of computation") and how often they attempted to incorporate new information about the future rewards (their "recalculation period"). For a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase their recalculation period rather than sacrifice the precision of computation.
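A minimal Python sketch of the resource-limited, brute-force-to-a-finite-depth strategy described above, assuming each disk has two successors in the next row of the triangular grid (the adjacency used in the actual task may differ):

    def best_value(grid, row, col, depth):
        """Best cumulative reward reachable within `depth` further touches.

        grid[r][c] is the reward of the disk at row r, column c of the
        triangular grid; from (r, c) the next touch is assumed to be
        (r + 1, c) or (r + 1, c + 1).  Exhaustive recursion over all paths
        up to the given depth models a fixed depth of computation.
        """
        if depth == 0 or row + 1 >= len(grid):
            return 0.0
        options = [grid[row + 1][c] + best_value(grid, row + 1, c, depth - 1)
                   for c in (col, col + 1) if c < len(grid[row + 1])]
        return max(options) if options else 0.0

    # Hypothetical 4-row grid of disk rewards:
    grid = [[1.0], [0.5, 2.0], [1.5, 0.2, 3.0], [0.1, 0.1, 0.1, 4.0]]
    print(best_value(grid, row=0, col=0, depth=3))   # 2.0 + 3.0 + 4.0 = 9.0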
2015-11-05
Beyond Steering in Human-Centered Closed-Loop Control
Lewis Chuang Max Planck Institute for Biological Cybernetics
http://www.lewischuang.com
Machines provide us with the capacity to achieve goals beyond our physical limitations. For example, automobiles and aircraft extend our physical mobility, allowing us to travel vast distances in far less time than it would otherwise take. It is truly remarkable that our natural perceptual and motor capabilities are able to adapt, with sufficient training, to the unnatural demands posed by vehicle handling. While much progress has been achieved in formalizing the control relationship between the human operator and the controlled vehicle, considerably less is understood about how human cognition influences this control relationship. Such an understanding is particularly important given the growing prevalence of autonomous vehicle control, which stands to radically modify the responsibility of the human operator from control to supervision. In this talk, I will first explain how the limitations of a classical cybernetics approach reveal the necessity of understanding high-level cognition during control, such as anticipation and expertise. Next, I will present our research that relies on unobtrusive measurement techniques (i.e., gaze tracking, EEG/ERP) to understand how human operators seek out and process relevant information while steering. Examples from my lab will be used to demonstrate how such findings can effectively contribute to the development of human-centered technology in the steering domain, such as the use of warning cues and shared control. Finally, I will briefly present some efforts in modeling an augmented aerial vehicle (e.g., civil helicopters), with the goal of making flying a rotorcraft as easy as driving (www.mycopter.eu).
Biography: Lewis Chuang received his PhD in Neuroscience in 2011 from the University of Tübingen. He currently leads a research group at the Max Planck Institute for Biological Cybernetics that investigates information seeking and processing behavior during closed-loop steering. He is also a principal investigator in a recently established research center for Quantitative Methods for Visual Computing (www.trr161.de).
2015-11-03
A Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony
Ryad Benosman Vision and Natural Computation Group
Institut National de la Sante et de la Recherche Medicale, Paris, France
There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks, mainly to solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general-purpose computation architectures that can overcome the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to build a complete, scalable, and compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and compute any function. We demonstrate usability on real use cases, from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.
2015-10-29
Bayesian Inference in a Distributed Architecture for Mobile Applications
Marcela Mendoza Bioengineering, and Neural Interaction Lab, UC San Diego
http://coleman.ucsd.edu/
Emerging mobile applications necessitate wireless transmission of large datasets and generate the need for efficient energy consumption. Exactly digitizing and transmitting these data is energy-costly and leaves devices vulnerable to security attacks. Most decisions made with these data are statistical. From a Bayesian point of view, an accurate way to represent uncertainty and minimize risk in decision-making is via the posterior distribution. However, accurately calculating the posterior has traditionally been intractable.
In this talk, I will present a distributed framework for finding the full posterior distribution and show its implementation in a suite of energy-efficient architectures. We focus on problems where the latent signal can be modeled as sparse (LASSO). We leverage our recent results formulating Bayesian inference as a KL-divergence minimization problem. We show that drawing samples from the Bayesian LASSO posterior can be done by iteratively solving LASSO problems in parallel. We instantiate this result with an analog-implementable solver and with a Graphics Processing Unit solution. These architectures are amenable to mobile applications and transmit only the minimal relevant information (e.g., the posterior) for optimal decision-making.
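The authors' KL-divergence construction for drawing exact posterior samples is not reproduced here. As a loosely related, hedged illustration of the "solve many LASSO problems in parallel" idea, the Python sketch below builds a parametric-bootstrap-style ensemble by repeatedly solving an ordinary LASSO on noise-perturbed data; the spread of the solutions is only a rough uncertainty proxy, not the exact Bayesian LASSO posterior.

    import numpy as np
    from sklearn.linear_model import Lasso

    def randomized_lasso_ensemble(X, y, lam, sigma, n_draws=200, seed=0):
        """Ensemble of LASSO solutions on noise-perturbed data.

        Each solve is independent, so the loop parallelizes trivially.
        The spread of the solutions is only a rough uncertainty proxy,
        not the exact Bayesian LASSO posterior targeted in the talk.
        """
        rng = np.random.default_rng(seed)
        n, p = X.shape
        draws = np.empty((n_draws, p))
        for k in range(n_draws):
            y_pert = y + sigma * rng.standard_normal(n)        # likelihood-matched noise
            model = Lasso(alpha=lam / n, fit_intercept=False)  # sklearn scales the l1 term by 1/n
            model.fit(X, y_pert)
            draws[k] = model.coef_
        return draws

    # Hypothetical sparse-recovery problem:
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 20))
    beta = np.zeros(20)
    beta[:3] = [3.0, -2.0, 1.5]
    y = X @ beta + 0.5 * rng.standard_normal(100)
    draws = randomized_lasso_ensemble(X, y, lam=10.0, sigma=0.5)
    print(draws.mean(axis=0)[:5])
    print(draws.std(axis=0)[:5])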
2015-10-15
EEGLAB -- Recent Developments and Future Directions
Arnaud Delorme Swartz Center for Computational Neuroscience, INC, UC San Diego
http://sccn.ucsd.edu/~arno/
EEGLAB is a software environment developed by the Swartz Center for Computational Neuroscience at the University of California, San Diego. Running on the widely established MATLAB platform, it is a processing environment that can be applied to all major EEG hardware configurations and provides a broad palette of advanced analysis procedures for research in this increasingly exciting functional brain imaging modality. A survey of 687 research respondents reported EEGLAB to be the software environment most widely used for electrophysiological data analysis worldwide, by a wide margin (neuro.debian.net/survey/2011/results.html). In this presentation I will highlight recent developments in the EEGLAB software environment, such as how to perform statistics on collections of single trials across subjects, and future directions such as hierarchical statistical analysis using general linear models for group analysis.
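EEGLAB itself is MATLAB-based, so the sketch below is not its API; it only illustrates, in Python, the hierarchical idea of first collapsing single trials within each subject and then testing the per-subject summary across subjects. The per-trial measure, the simple one-sample t-test at the group level, and all names are assumptions for illustration.

    import numpy as np
    from scipy import stats

    def hierarchical_group_test(single_trials):
        """Two-level test of a condition difference.

        single_trials : list with one (n_trials, 2) array per subject,
            holding a per-trial measure (e.g., mean alpha power) for
            conditions A and B.
        Level 1 collapses trials to one effect per subject; level 2
        tests that per-subject effect against zero across subjects.
        """
        effects = np.array([trials[:, 0].mean() - trials[:, 1].mean()
                            for trials in single_trials])
        t, p = stats.ttest_1samp(effects, popmean=0.0)
        return effects, t, p

    # Hypothetical data: 12 subjects, 40 trials each, small true effect.
    rng = np.random.default_rng(2)
    data = [rng.normal([0.3, 0.0], 1.0, size=(40, 2)) for _ in range(12)]
    effects, t, p = hierarchical_group_test(data)
    print(f"group t = {t:.2f}, p = {p:.3f}")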
2015-06-04
Slowly oscillating periodic solutions for stochastic DDEs with positivity constraints
Ruth Williams
Dynamical system models with delayed feedback, state constraints and small noise arise in a variety of applications in science and engineering. Under certain conditions oscillatory behavior has been observed. Here we consider a prototypical fluid model approximation for such a system --- a one-dimensional delay differential equation with non-negativity constraints. We explore conditions for the existence, uniqueness and stability of slowly oscillating periodic solutions of such equations. We illustrate our findings with simple examples from Internet rate control and gene regulation.
Based on joint work with David Lipshutz.
2015-05-28
Dealing with Uncertainty: DARPA's New Paradigm for the 21st Century
Frank Fernandez
Bio:
Dr. Frank Fernandez was Director of the Defense Advanced Research Projects Agency (DARPA), the central R&D organization of the Department of Defense, from 1998 to 2001. He was a member of the Chief of Naval Operations (CNO) Executive Panel from 1983 until his appointment at DARPA. In this capacity, he provided advice to the CNO on a variety of issues. Currently, Dr. Fernandez is Chairman of the Naval Research Advisory Committee (NRAC), a committee chartered by law to advise the Secretary of the Navy on critical R&D issues. He is also a member of the Department of Homeland Security Science and Technology Advisory Panel, reporting to the Undersecretary for Science and Technology.
Dr. Fernandez received his Bachelor of Science in Mechanical Engineering and Master of Science in Applied Mechanics from Stevens Institute of Technology in 1960 and 1961, and his Ph.D. in Aeronautics from the California Institute of Technology in 1969. He was a Distinguished Research Professor in Systems Engineering and Technology Management at Stevens Institute of Technology in Hoboken, New Jersey.
2015-05-21
Estimating Phasic and Sustained Dynamic Information Transfer in the Human Brain
Stephen Robinson MEG Core Facility, National Institute of Mental Health
http://kurage.nimh.nih.gov/meglab/
A bivariate nonlinear and nonparametric dynamical measure of directional information transfer is described that is suitable for analyzing electrophysiological signals such as magnetoencephalography (MEG), electroencephalography (EEG), and electrocorticography (ECoG). This analysis, "temporo-dynamic symbolic transfer entropy" (tdSTE), was applied to a representative MEG recording of a normal control subject performing a working memory (n-back) task. A linearly constrained minimum variance (LCMV) beamformer was used to simultaneously estimate the source waveforms at nine selected brain locations. The tdSTE analysis was then applied to pairs of source waveforms, estimating their directional information flow in both the forward and reverse directions. The transfer entropy (TE) time series were then averaged relative to the event markers, either stimuli or responses, for each of the n-back tasks. The tdSTE analysis was evaluated for higher frequencies, above 50 Hz, avoiding the confound of lower-frequency rhythms and emphasizing multi-unit cortical activity (MUA). The experimental tdSTE results reveal the presence of both sustained and phasic (event-related) components. The magnitude of the sustained components was much larger than that of their associated phasic components. Furthermore, we observed that the participation of information exchange between regions in each of the n-back tasks was encoded in the relative magnitudes of their sustained components. This was observed under the condition that the TE for each n-back condition was based upon probability distribution functions (PDFs) computed a priori from the corresponding blocks of data for the 0-, 1-, and 2-back trials. When PDFs were derived from the cumulative data of all three n-back tasks, little or no difference between 0-, 1-, and 2-back was observed. These results were validated against sequence-shuffled "surrogate" data, showing that tdSTE can reliably estimate directional information flow from the MEG data of individual subjects.
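The temporo-dynamic averaging relative to event markers is specific to the speaker's tdSTE. As a generic point of reference, the Python sketch below implements plain symbolic transfer entropy (ordinal-pattern symbolization followed by a transfer-entropy estimate in bits); the pattern order and lag are assumed values.

    import numpy as np
    from itertools import permutations

    def symbolize(x, m=3, tau=1):
        """Map a time series onto ordinal-pattern symbols of order m."""
        lookup = {p: i for i, p in enumerate(permutations(range(m)))}
        n = len(x) - (m - 1) * tau
        symbols = np.empty(n, dtype=int)
        for t in range(n):
            window = x[t: t + m * tau: tau]
            symbols[t] = lookup[tuple(int(i) for i in np.argsort(window))]
        return symbols

    def symbolic_transfer_entropy(source, target, m=3, tau=1):
        """Transfer entropy (bits) from source to target on ordinal symbols."""
        s, y = symbolize(source, m, tau), symbolize(target, m, tau)
        n = len(s) - 1
        c_trip, c_yy, c_ys, c_y = {}, {}, {}, {}
        for t in range(n):
            c_trip[(y[t+1], y[t], s[t])] = c_trip.get((y[t+1], y[t], s[t]), 0) + 1
            c_yy[(y[t+1], y[t])] = c_yy.get((y[t+1], y[t]), 0) + 1
            c_ys[(y[t], s[t])] = c_ys.get((y[t], s[t]), 0) + 1
            c_y[y[t]] = c_y.get(y[t], 0) + 1
        te = 0.0
        for (y1, y0, s0), c in c_trip.items():
            p_joint = c / n
            p_full = c / c_ys[(y0, s0)]              # p(y_{t+1} | y_t, s_t)
            p_self = c_yy[(y1, y0)] / c_y[y0]        # p(y_{t+1} | y_t)
            te += p_joint * np.log2(p_full / p_self)
        return te

    # Toy check: y is driven by x with a one-sample lag, so TE(x->y) > TE(y->x).
    rng = np.random.default_rng(3)
    x = rng.standard_normal(5000)
    y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)
    print(symbolic_transfer_entropy(x, y), symbolic_transfer_entropy(y, x))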
2015-05-07
Insights Into Insight: What EEG Reveals about Problem Solving Across Multiple Domains
Ying Wu Swartz Center for Computational Neuroscience, INC
http://sccn.ucsd.edu/~ywu/
Problems can be solved in a variety of ways. One might systematically evaluate a known space of possible solutions until the right one is found. Alternatively, it may prove necessary to enlarge or restructure the expected problem space, so-called "thinking outside the box." This approach can yield an experience of unexpected insight, a feeling of "Aha!". Whereas the subjective suddenness of an "Aha!" moment may lead to the impression that insight must be precipitated by a set of discrete, short-lived neural events, I will present evidence that even before a problem is presented, scalp-recorded measures of resting or baseline brain states are linked with future performance and the likelihood of experiencing insight during the search for a solution. Additionally, I will show that compared to more systematic problem-solving approaches, insight is accompanied by differences in cortical and likely cognitive engagement that are detectable throughout much of the problem-solving phase, rather than being confined to a distinct interval immediately preceding the dawn of a solution.
2015-04-09
Role of Neuromodulators and Neural Correlations in Network Encoding
Victor Minces UC San Diego Cognitive Science
Temporal Dynamics of Learning Center
http://tdlc.ucsd.edu
A fundamental variable in understanding the relationship between brain activity and sensory processing is the coding efficiency, or how much information about a set of stimuli a neuronal pool represents. Coding efficiency depends on the information represented by the individual neurons (associated with their signal-to-noise ratios), but also on the statistical dependencies among neurons (associated with their correlated activity); the influence of the latter becomes more important as the size of the neural pool under consideration grows. I present a novel, simple way to estimate the encoding efficiency of neuronal pools in terms of signal-to-noise ratios and pairwise correlations. This approach allows exploration of the role of neuronal correlations in shaping coding efficiency. I apply this formulation to experimental data gathered from the visual cortex of the awake mouse, and show that the neuromodulator acetylcholine shapes neural correlations in a manner that is compatible with enhanced encoding efficiency, learning, and attention.
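The exact expression presented in the talk is not reproduced here. The short Python sketch below only works out the textbook special case it relates to: for N neurons carrying the same signal with identical (variance-ratio) signal-to-noise ratio and a uniform pairwise noise correlation, the summed population's SNR is N*SNR / (1 + (N-1)*rho), which saturates at SNR/rho as the pool grows.

    def pooled_snr(n_neurons, single_snr, noise_corr):
        """SNR of the summed response of n identical neurons.

        Assumes every neuron carries the same signal with individual
        (variance-ratio) SNR `single_snr`, and every pair of neurons
        shares the same noise correlation `noise_corr`.  The signal adds
        coherently while correlated noise only partially averages out,
        so the pooled SNR saturates at single_snr / noise_corr.
        """
        return n_neurons * single_snr / (1.0 + (n_neurons - 1) * noise_corr)

    for rho in (0.0, 0.05, 0.2):
        print(rho, [round(pooled_snr(n, 0.1, rho), 2) for n in (1, 10, 100, 1000)])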
2015-03-12
Cell Assemblies of the Basal Forebrain
Douglas A. Nitz Dept. of Cognitive Science, UC San Diego
http://dnitz.com/
Cortically-projecting basal forebrain neurons play a critical role in learning and attention, and their degeneration accompanies age-related impairments in cognition. Despite the impressive anatomical and cell-type complexity of this system, currently available data suggest that basal forebrain neurons lack complexity in their response fields, with activity primarily reflecting only macro-level brain states such as sleep and wake, onset of relevant stimuli and/or reward obtainment. The current study examined spiking activity of basal forebrain neuron populations across multiple phases of a selective attention task. Clustering techniques applied to the full population revealed bursting and non-bursting subtypes as well as a number of distinct categories of task-phase-specific activity patterns. Distinct population firing-rate vectors defined each task phase and most categories of task-phase-specific firing had counterparts with opposing firing patterns. Finally, among all subtypes of simultaneously recorded basal forebrain neurons, co-activity patterns evidenced grouping of neurons into cell assemblies whose spiking activity was optimally synchronized at a beta frequency (~20 Hz). Thus, consistent with known anatomical complexity, basal forebrain population dynamics are capable of differentially modulating their cortical targets over beta-frequency time intervals and according to the unique sets of environmental stimuli, motor requirements, and cognitive processes associated with different task phases.
Biography: Douglas Nitz received his PhD from UCLA in 1995 working primarily on brainstem mechanisms of rapid-eye-movement sleep production. As a post-doctoral student at the University of Arizona, he turned his attention to the problem of determining how single neurons and the ensemble activity patterns they compose map spatial relationships between an organism and its environment. This work continued at the Neurosciences Institute in San Diego where he worked between 1998-2008. Nitz joined UCSD's Department of Cognitive Science in 2008 and continues to work on neural mechanisms for spatial cognition and its translation into decisions and actions. The basal forebrain work to be presented is the outgrowth of a new research project undertaken with Andrea Chiba, also of the UCSD Cognitive Science Department.
2015-03-05
Cognitive Networks and the Noisy Brain
Bradley Voytek UC San Diego Cognitive Science, Neurosciences, and INC
http://darb.ketyov.com/
Perception, cognition, and social discourse depend upon coordinated neural activity. This coordination operates within noisy, overlapping, and distributed neural networks operating at rapid timescales. These networks are built upon a structural scaffolding with intrinsic neuroplasticity that changes with development, aging, disease, and personal experience. While the exact mechanisms for interregional communication are unknown, there is increasing evidence that oscillatory local field synchronization between neuronal groups facilitates communication at specific phases of the preferred oscillatory frequency. Successful interregional communication may rely upon the transient synchronization between distinct low frequency (< 80 Hz) oscillations, allowing for brief windows of communication via phase-coordinated local neuronal spiking. However such a communication scheme would be susceptible to small perturbations in spiking rate, probability, and/or synchronization. I will explore the consequences of this theory in terms of understanding cognition and a variety of neurological and psychiatric disorders.
2015-02-26
High-Resolution EEG Source Imaging
Zeynep Akalin Acar UC San Diego INC Swartz Center for Computational Neuroscience
http://sccn.ucsd.edu/~zeynep/
Accurate electroencephalographic (EEG) source localization requires a forward electrical head model incorporating accurate conductivity values for the major head tissues. While consistent values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both measurement method and inter-subject differences. In simulations, mis-estimation of skull conductivity produces source localization errors as large as 31 mm (Akalin Acar and Makeig, 2013). In this presentation, I will describe a gradient-based iterative source conductivity and localization estimation (SCALE) approach for estimating head tissue conductivities and spatial brain source distributions simultaneously in a magnetic resonance (MR) image-derived head model, based on scalp maps of near-dipolar sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. I will show validations using simulated data, and applications to real EEG data from two adults and from babies. The ability to accurately estimate skull conductivity non-invasively from recorded EEG data itself, in combination with an electrical head model derived from a subject's anatomic MR head image, could remove a barrier to using EEG as a cm-scale accurate 3-D functional cortical imaging modality.
2015-02-19
Neuromorphic Cognition
Emre Neftci INC and BCI, UC San Diego
http://isn.ucsd.edu/~emre/
Our ability to evoke intelligent processing on artificial neural systems goes hand in hand with a confluence of neuroscience, machine learning and engineering. I will describe recent advances in neuromimetic inference and learning algorithms that address this challenge from a neuromorphic systems perspective. These algorithms range from finite state machines synthesized with neural models of working memory, attention and action selection for solving cognitive tasks; to the learning of probabilistic generative models with models of stochastic sampling and plasticity in spiking neural networks. These advances form the groundwork for a domain-specific language for probabilistic models that can be compiled against neural substrates. Combined with state-of-the-art neuromorphic electronic hardware, this framework will provide a unique technology for studying the processes of the mind at multiple levels of investigation.
2015-02-12
Memcomputing: Computing with and in Memory Using Collective States
Massimiliano Di Ventra Department of Physics, UC San Diego
http://physics.ucsd.edu/~diventra/
I will discuss a novel computing paradigm we named memcomputing [1], inspired by the operation of our own brain, which uses (passive) memory circuit elements, or memelements [2], as the main tools of operation. I will first introduce the notion of universal memcomputing machines (UMMs) as a class of general-purpose computing machines based on systems with memory. We have shown [3] that the memory properties of UMMs endow them with universal computing power (they are Turing-complete), intrinsic parallelism, functional polymorphism, and information overhead, namely that their collective states can support exponential data compression directly in memory. It is the presence of collective states in UMMs that allows them to solve NP-complete problems in polynomial time using polynomial resources. As an example I will show the polynomial-time solution of the subset-sum problem implemented in a simple hardware architecture that uses standard microelectronic components [4]. Even though we have not proved NP=P within the Turing paradigm, the practical implementation of these UMMs would represent a paradigm shift from present von Neumann architectures, bringing us closer to brain-like neural computation [5].
[1] M. Di Ventra and Y.V. Pershin, Computing: the Parallel Approach, Nature Physics, 9, 200 (2013).
[2] M. Di Ventra, Y.V. Pershin, and L.O. Chua, Circuit Elements with Memory: Memristors, Memcapacitors, and Meminductors, Proc. IEEE, 97, 1717 (2009).
[3] F. L. Traversa and M. Di Ventra, Universal Memcomputing Machines, IEEE Transactions on Neural Networks and Learning Systems, (in press), arXiv:1405.0931.
[4] F. L. Traversa, C. Ramella, F. Bonani, and M. Di Ventra, Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states, arXiv:1411.4798
[5] F. L. Traversa, F. Bonani, Y.V. Pershin and M. Di Ventra, Dynamic Computing Random Access Memory, Nanotechnology 25, 285201 (2014).
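For orientation only, and emphatically not the memcomputing approach of [4]: the conventional dynamic program below solves the subset-sum problem mentioned in the abstract in time proportional to the number of elements times the target value, i.e. pseudo-polynomially in the input size; this is the kind of scaling the collective-state architecture is claimed to improve upon.

    def subset_sum(values, target):
        """Classical pseudo-polynomial dynamic program for subset-sum.

        reachable[s] is True when some subset of the values seen so far
        sums to s; runtime is O(len(values) * target).
        """
        reachable = [False] * (target + 1)
        reachable[0] = True
        for v in values:
            for s in range(target, v - 1, -1):   # descend so each value is used at most once
                if reachable[s - v]:
                    reachable[s] = True
        return reachable[target]

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5 = 9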
Bio: Massimiliano Di Ventra obtained his undergraduate degree in Physics summa cum laude from the University of Trieste (Italy) in 1991 and did his PhD studies at the Ecole Polytechnique Federale de Lausanne (Switzerland) in 1993-1997. He has been Research Assistant Professor at Vanderbilt University and Visiting Scientist at IBM T.J. Watson Research Center before joining the Physics Department of Virginia Tech in 2000 as Assistant Professor. He was promoted to Associate Professor in 2003 and moved to the Physics Department of the University of California, San Diego, in 2004 where he was promoted to Full Professor in 2006. Di Ventra's research interests are in the theory of electronic and transport properties of nanoscale systems, non-equilibrium statistical mechanics, DNA sequencing/polymer dynamics in nanopores, and memory effects in nanostructures for applications in unconventional computing and biophysics. He has been invited to deliver more than 200 talks worldwide on these topics (including 6 plenary/keynote presentations, 7 talks at the March Meeting of the American Physical Society, 5 at the Materials Research Society, 2 at the American Chemical Society, and 1 at the SPIE). He serves on the editorial board of several scientific journals and has won numerous awards and honors, including the NSF Early CAREER Award, the Ralph E. Powe Junior Faculty Enhancement Award, fellowship in the Institute of Physics and the American Physical Society. He has published more than 140 papers in refereed journals (13 of these are listed as ISI Essential Science Indicators highly-cited papers of the period 2003-2013), co-edited the textbook Introduction to Nanoscale Science and Technology (Springer, 2004) for undergraduate students, and he is single author of the graduate-level textbook Electrical Transport in Nanoscale Systems (Cambridge University Press, 2008).
2015-02-05
Corticospinal Computation of Sensorimotor Control for Normal and Abnormal Movements
Ning Lan Institute of Rehabilitation Engineering
Med-X Research Institute
School of Biomedical Engineering
Shanghai Jiao Tong University
Evidence from human motor behaviors suggests that separate motor modules are used for the control of movement and posture in the central nervous system (CNS). Each contains private central programming and a corticospinal pathway carrying motor commands to the spinal alpha and gamma motoneurons (MNs). Abnormal motor behaviors, such as tremor in patients with Parkinson's disease (PD), demonstrate a similar modular feature. In this presentation, I will discuss a combined behavioral and computational approach to understanding the corticospinal computation of sensorimotor control for both normal and abnormal movements. A modular control model for movement and posture is proposed based on the dual spinal alpha-gamma sensorimotor system. In this study, we ask two fundamental questions. How can the alpha-gamma sensorimotor system implement modular control? And what is the computational role of propriospinal neurons (PNs) in the modular control of movements, both normal and abnormal? Simulated model behaviors capture the kinematic and EMG features of human reach-and-hold movements. Furthermore, the modular control model is able to predict pathological behaviors of action tremor in essential tremor (ET) patients and resting (or postural) tremor in PD patients. These results suggest a computational gating function of the PN network for transmitting and processing descending motor commands, both normal and abnormal, and support the hypothesis that modular control of posture and movement can be achieved with the dual alpha-gamma sensorimotor system.
Bio: Professor Ning Lan obtained the B.S. degree in Precision Instruments from Shanghai Jiao Tong University (SJTU) in 1982, and the Ph.D. degree in Biomedical Engineering from Case Western Reserve University (CWRU) in 1989. Before joining SJTU, he was on the faculty in Biokinesiology and Physical Therapy at the University of Southern California. Currently, he serves as a guest associate editor of Frontiers in Computational Neuroscience of the Nature Publishing Group, and is on the editorial boards of ISRN Computational Biology and Physical Medicine and Rehabilitation - International. He also serves as the Founding Deputy Director of The Strategic Alliance for Research and Development of Rehabilitation and Assistive Technologies for Medical Industries in China. He was one of the founding members of the Neural Engineering Committee of the Chinese Society of Neuroscience, and served as its founding deputy director from 1995 to 1999. From 1997 to 2001, he served as Assistant Editor of IEEE Transactions on Rehabilitation Engineering (now IEEE Transactions on Neural Systems and Rehabilitation Engineering), and as Associate Editor of the Chinese Journal of Rehabilitation Theory and Practice from 1997 to 1999. He organized the 1st, 2nd and 3rd International Conferences on Rehabilitation Medical Engineering (CRME) in Shanghai, China, in 2012, 2013 and 2014. His research interests are in neural electrical stimulation, neuromodulation for patients with Parkinson's disease, stroke and spinal cord injury, and neural and computational modeling of movement control.
2014-12-04
Nanoscale engineering mediating neural function and activity
Ratnesh Lal MAE, Bioengineering and CNME/IEM
http://lal.eng.ucsd.edu/
Coordinated activity of ion channels and receptors in brain cells control electrical and chemical signal transduction and their synaptic transmission mediating normal brain activity and pathologies. Current emphasis of the BRAIN Initiative has been to design enabling technology to understand ensemble brain activity. Defining nanoscale (< 10 nm) structural conformations of ion channel/receptors mediating brain activity (though essential for controlling intricate brain connectivity) is unappreciated and yet these nanostructures would ultimately be driving any remedial paradigm(s) resulting from the functional mapping initiative. Unfortunately, there aren't many techniques to image 1-10 nm biological structures in liquid. We have been developing an array-atomic force microscope (AFM) integrated with functional analytical tools (e.g., electrical conductance measurement, FRET, TIRF), each individual AFM consisting of an array of conducting cantilevered probes with self-sensing and actuation capabilities. The new AFM-array will enable 1) imaging the synaptic network at the scales of its organization, nano-to-macro scale, 2) measuring localized electrical and chemical activity, and 3) interfacing with animal and human subjects. This novel technology will allow for force controlled imaging of live neural cells at multiple locations simultaneously with independent imaging feedback. Integration of an ion sensing tip on the cantilevers will allow for localized and highly parallel electrical recording of synaptic activity. This technology will enhance our understanding of how synaptic networks mediate global neural communication.
2014-11-13
Advances in measurement of sleep
Conor Heneghan University College Dublin, and ResMed
http://www.resmed.com/
Despite the fact that we spend nearly one third of our lives asleep, surprisingly little was known about sleep until the 20th century. Now, sleep medicine is firmly established as a significant branch of medical practice, taking its roots strongly from the work of Nathaniel Kleitman and colleagues at the University of Chicago in the 1950s. The field progressed in the 1960s, with an increasing standardization of physiological signal recording that led to the current standard for sleep measurement—the polysomnogram (PSG). Recently, there has been continued interest in developing sleep measurement technologies that can provide useful information about sleep, over multiple nights, and with minimal interference to the subject. One technology that shows a lot of promise in this area is radio-frequency (RF) biomotion sensing of sleep. For the last several years, our research team has focused on producing a noncontact RF biomotion sensor, which is practical for use in home and lab-based sleep measurement. Our goal has been to simplify the process of sleep and respiration measurement, allowing continuous monitoring over multiple nights—permitting individuals to understand their own sleep patterns or enabling medical professionals to provide improved care and guidance to individuals suffering from a number of sleep and respiratory disorders. We have developed algorithms that can map the movement signal into useful information about sleep and respiration. In studies where the sensor and algorithm are compared with the gold-standard PSG measurements, the noncontact system agrees with the sleep/wake classification of the PSG more than 85% of the time. This is comparable with the best actigraphy systems. Moreover, since the system can measure respiratory effort, it can be used to identify apnea and hypopnea events with a good degree of accuracy. In a study of 74 subjects suspected of having sleep apnea, the noncontact sensor system was 90% sensitive and 92% specific in recognizing patients with and without sleep apnea, using the standard cutoff of an Apnea Hypopnea Index greater than 15 to define sleep apnea. The ongoing challenge is to further improve the accuracy and sensitivity of the technology and, ideally, to add in further information without compromising the convenience and noninvasiveness of the overall system from a user's point of view.
Reference:
Conor Heneghan, "Wireless Sleep Measurement: Sensing Sleep and Breathing Patterns Using Radio-Frequency Sensors," IEEE EMBS Pulse Magazine, September 21, 2014.
http://pulse.embs.org/september-2014/wireless-sleep-measurement/
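To make the screening criterion quoted in the abstract concrete, here is a tiny Python sketch of how an Apnea Hypopnea Index and the sensitivity/specificity figures would be computed; all event and patient counts below are hypothetical.

    def apnea_hypopnea_index(n_apneas, n_hypopneas, hours_of_sleep):
        """Respiratory events per hour of sleep; AHI > 15 was the study's cutoff."""
        return (n_apneas + n_hypopneas) / hours_of_sleep

    def sensitivity_specificity(tp, fn, tn, fp):
        """Fractions of apnea patients and non-patients correctly classified."""
        return tp / (tp + fn), tn / (tn + fp)

    print(apnea_hypopnea_index(n_apneas=40, n_hypopneas=85, hours_of_sleep=7.0))  # ~17.9 -> above cutoff
    print(sensitivity_specificity(tp=18, fn=2, tn=11, fp=1))                      # hypothetical counts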
Biography:
Conor Heneghan, PhD, is Chief Engineer with ResMed's Strategy and Ventures Group, and Adjunct Associate Professor at University College Dublin School of Electrical, Electronic and Communications Engineering. He received his PhD in Electrical Engineering from Columbia University, New York in 1995, and was co-founder of BiancaMed, a pioneer in non-contact sleep measurement which was acquired by ResMed in 2011. His research interests are biomedical signal processing and analysis, particularly focused in the areas of sleep, cardiovascular and respiratory disorders.
2014-10-30
Characterizing Neural Ensembles from High-Resolution Physiological Recordings
Joaquin Rapela Swartz Center for Computational Neuroscience, UC San Diego
http://sccn.ucsd.edu/~rapela
If we observe a fluid at the molecular level we see random motions, but if we look at it macroscopically we may see a smooth flow. An intriguing possibility is that by analyzing brain activity at a macroscopic level, i.e., at the level of neural ensembles, we may discover patterns not apparent at the single-neuron level that are as useful for understanding and predicting brain activity as velocity or temperature are for the motion of fluids. Several models have been developed to simulate the activity of ensembles of neurons, but only now, with the availability of high-resolution neural recordings, is it possible to accurately estimate the parameters of these models from physiological data, and to learn from these parameters how ensembles represent information in the brain. In this talk I will describe methods that we are developing to characterize neural ensembles from electrophysiological recordings, and comment on two applications of these methods that we are currently pursuing.
I will show how, starting from a model of a single neuron of a given type (e.g., Hodgkin-Huxley), it is possible to derive accurate dynamical models of ensembles of homogeneous neurons of that type. We call these models ensemble density models, or EDMs. EDMs are high-dimensional nonlinear dynamical models. To facilitate the estimation of state variables and parameters in large networks of EDMs from physiological data, we derived a method that significantly reduces the dimensionality of EDMs with minor degradation of approximation power. We are using a faster maximum-likelihood method for the estimation of connectivity parameters in networks of EDMs, and an MCMC algorithm that approximates the expected value, as well as higher moments, of both states and connectivity parameters, conditioned on observed data. I will outline two applications of these methods: 1) the study of the role of connectivity among neural ensembles in the control of the vocal articulators during speech production, using high-resolution ECoG recordings in humans; and 2) the estimation of ensemble receptive fields in sensory cortices.
We want to apply these tools to characterize diverse ensemble electrophysiological recordings. If you have this type of recordings and would like to analyze them at the ensemble level, please contact the speaker at rapela@ucsd.edu.
Reference: J. Rapela, M. Kostuk, P. Rowat, T. Mullen, K. Bouchard, and E. Chang, "Characterizing Neural Activity at the Ensemble Level," IEEE EMBS BRAIN Grand Challenges Conference, Washington DC, Nov. 13-14, 2014. Available at http://sccn.ucsd.edu/~rapela/cbam/brainGrandChallenges14.pdf
2014-10-16
Nanoscale Electronic Synapses for Brain-Inspired Computing
H.-S. Philip Wong Department of Electrical Engineering and Stanford SystemX Alliance
Stanford University
http://nano.stanford.edu/
Unlike classical enterprise computing that operates on structured, digital data, 21st-century information technology (IT) must process, understand, classify, and organize vast amounts of data in real time. Such applications will be dominated by machine-learning kernels operating on terabytes of active data with little data locality. At the same time, massively redundant sensor arrays sampling the world around us will give humans the perception of additional "senses," blurring the boundary between the biological, physical, and cyber worlds. The challenge is to manage the resulting data deluge; e.g., processing 10^14 floating-point operations per second using 1 W between the retina and the brain, or a neural map yielding data at 1 Tbit/sec. Processing such data in wearable devices clearly demands computation well beyond the state of the art.
As information technology becomes pervasive in society and ubiquitous in our lives, the desire for always-on, always-available, embedded-everywhere, and human-centric information systems calls for a different computation paradigm.
In this talk, I will describe the use of nanoscale electronic devices that emulate the functions of the biological synapse. The goal is to develop hardware technologies for brain-inspired computing and electronic emulation of the brain. Phase-change memory is employed to demonstrate the spike-timing-dependent plasticity (STDP) behavior of the biological synapse. A small array of such devices is connected in a recurrent Hopfield network to perform pattern recognition tasks and the tradeoff between variation tolerance and the speed/energy performance of the network is studied. The use of metal-oxide resistive switching memory (RRAM) presents another exciting opportunity. The stochastic nature of the physics of resistive switching enables RRAM to serve as analog weights in a neural network. It is possible to tune the RRAM to introduce randomness for hyper-dimensional computation for robust processing of perceptual data. I will describe on-going collaborative efforts to demonstrate in hardware small and medium-scale system applications using electronic synapses integrated with CMOS neurons.
References:
S. B. Eryilmaz, D. Kuzum, R. Jeyasingh, S. Kim, M. Brightsky, C. Lam, H.-S. P. Wong, "Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array," Frontiers in Neuroscience, 8:205 (2014). doi: 10.3389/fnins.2014.00205
D. Kuzum, S. Yu, H.-S. P. Wong, "Synaptic Electronics: Materials, Devices and Applications," Nanotechnology, 24. 382001, 2013. doi:10.1088/0957-4484/24/38/382001
S. Yu, B. Gao, Z. Fang, H. Yu, J. Kang, H.-S. P. Wong, "Stochastic Learning in Oxide Binary Synaptic Device for Neuromorphic Computing," Frontiers in Neuroscience, vol. 7, article 186, pp. 1–9, October 31, 2013. doi: 10.3389/fnins.2013.00186
S. Yu, B. Gao, Z. Fang, H. Yu, J. Kang, H.-S. P. Wong, "Stochastic Learning in Oxide Binary Synaptic Device for Neuromorphic Computing," Advanced Materials, Volume 25, Issue 12, pages 1774–1779, March 25, 2013.
D. Kuzum, R.G.D. Jeyasingh. S. Yu, H.-S. P. Wong, "Low-Energy Robust Neuromorphic Computation Using Synaptic Devices," IEEE Trans. Electron Devices, vol. 59, issue 12, pp. 3849–3894 (2012). DOI: 10.1109/TED.2012.2217146
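For readers unfamiliar with the spike-timing-dependent plasticity rule the phase-change devices emulate, a minimal pair-based software version follows (Python). Device-level details such as pulse shapes, conductance quantization, and variation are not captured, and the amplitudes and time constants are assumed illustrative values, not the device parameters.

    import numpy as np

    def stdp_weight_change(delta_t, a_plus=0.01, a_minus=0.012,
                           tau_plus=20.0, tau_minus=20.0):
        """Pair-based spike-timing-dependent plasticity.

        delta_t = t_post - t_pre in milliseconds.  Pre-before-post
        (delta_t > 0) potentiates; post-before-pre depresses.  The
        amplitudes and time constants are illustrative values only.
        """
        delta_t = np.asarray(delta_t, dtype=float)
        return np.where(delta_t >= 0,
                        a_plus * np.exp(-delta_t / tau_plus),
                        -a_minus * np.exp(delta_t / tau_minus))

    print(stdp_weight_change([-40.0, -10.0, 10.0, 40.0]))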
2014-10-02
CARL-SJR: A Socially Assistive Robot With Rich Tactile Sensory Interaction
Jeffrey L. Krichmar Department of Cognitive Sciences and Department of Computer Science
University of California, Irvine
http://www.socsci.uci.edu/~jkrichma/
Research studies show that children with Autism Spectrum Disorders (ASD) or Attention Deficit Hyperactivity Disorders (ADHD) respond well to robot artifacts, and suggest that robots fitting the goals of Sensory Integration Theory (SIT) might be a form of therapy for children with ASD or ADHD. SIT is intended to focus directly on the neurological processing of sensory information as a foundation for learning higher-level (motor or academic) skills. Treatment goals center on improving sensory processing to either (a) develop better sensory modulation as related to attention and behavioral control, or (b) integrate sensory information to form better perceptual schemas and practical abilities as a precursor for academic skills, social interactions, or more independent functioning. Toward these goals, we present a novel neuromorphic robot that interacts with users through touch sensing and visual signaling over its whole surface. Our robot, called the Cognitive Anteater Robotics Laboratory - Spiking Judgment Robot (CARL-SJR), has a convex, hemispheric shell containing a matrix of trackballs for sensing touch and LEDs for communicating with users. Currently CARL-SJR is in the prototype stage. It rides on a Roomba for mobility and incorporates a spiking neural network (SNN) modeling somatosensory cortex. We explore tactile sensory decoding through rate coding and temporal coding. We also compare the performance of the two coding schemes for classifying different tactile inputs from hand movements. Our evaluation of the network's ability to categorize hand movements shows that both rate and temporal coding performed well. These results could guide us in building a sophisticated spiking neural network to achieve treatment goals through learning, adapting, and shaping users' behaviors.
Joint work with Liam D. Bucci and Ting-Shuo Chou
2014-06-19
Impact of Stochastic Vesicle Variability on Spiking in the Peripheral Auditory System
Mark McDonnell Computational and Theoretical Neuroscience Laboratory
Institute for Telecommunications Research
University of South Australia
http://www.itr.unisa.edu.au/ctnl
Synaptic vesicle release is known to be governed by stochastic biophysical processes. This manifests as random 'noisy' variations in post-synaptic current, and stochastic post-synaptic spiking patterns. The probability of vesicle release can change over time, resulting in short-term plasticity effects such as depression and facilitation. Well-known phenomenological models characterise these effects in, for example, cortical pyramidal neurons.
However, the influence of stochastic synaptic dynamics on neuronal spiking is nowhere more stark than in the peripheral auditory system. For example, many auditory nerve fibers spike 'spontaneously' at high rates (100 spikes per second) in the absence of acoustical stimulation. Unlike cortical neurons, these nerve fibers receive synaptic input from ribbon synapses in inner hair cells, which exhibit time-continuous graded responses to sounds rather than discrete spiking. Intracellular calcium dynamics in inner hair cells is likely to strongly influence vesicle release.
In this talk I will describe preliminary work on introducing short-term depression and calcium channel noise into models of inner hair-cell synaptic dynamics. The objective of this work is to extend existing models so that they accurately capture both long-term and short-term spike correlations observed in experimental recordings from auditory nerve fibers.
Auditory nerve fibers send their spikes to cells in the cochlear nucleus, some of which also exhibit stochastic short-term plasticity. I will also briefly describe how, for such cells, the number of parallel incoming synapses interacts with short-term depression to cause varying phase shifts in post-synaptic spiking in response to periodically modulated pre-synaptic spiking.
2014-06-05
Noise-benefits in Backpropagation Training
Osonde Osoba Ming Hsieh Department of Electrical Engineering
University of Southern California
osondeos@usc.edu
The talk will present recent work that shows how careful noise injection can speed up the convergence of the popular backpropagation training algorithm for feedforward neural networks. This result is based on prior work that showed how careful noise injection speeds up the convergence of Expectation-Maximization (EM) algorithms for maximum-likelihood estimation with missing or corrupted data. The crucial link is the new fact that the backpropagation algorithm is a special case of a generalized EM algorithm. Other special cases of noise-boosted EM include the popular k-means clustering algorithm used in big-data processing and the Baum-Welch algorithm used to train hidden Markov models. The noise boosting also extends to speeding up the extensive training involved in using convolutional neural networks (CNNs) for image classification. The following link provides an implementation of the noise-boosted backpropagation training algorithm for CNNs: http://sail.usc.edu/~audhkhas/software/NCNN.zip
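The NEM theory referenced above prescribes a specific positivity condition on the injected noise; the sketch below does not implement that condition. It only shows one mechanical place where noise can enter backpropagation training: Gaussian perturbation of the targets of a small one-hidden-layer NumPy network, with the noise variance annealed to zero over training.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy regression data (hypothetical): y = sin(x) plus observation noise.
    X = rng.uniform(-3, 3, size=(256, 1))
    Y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

    # One-hidden-layer network trained by plain gradient descent.
    W1 = 0.5 * rng.standard_normal((1, 32)); b1 = np.zeros(32)
    W2 = 0.5 * rng.standard_normal((32, 1)); b2 = np.zeros(1)
    lr, epochs, noise_scale = 0.05, 500, 0.2

    for epoch in range(epochs):
        # Noise injection: perturb the targets, annealing the variance to zero.
        # (The NEM results select noise satisfying a likelihood condition;
        #  blind Gaussian noise is used here purely for illustration.)
        sigma = noise_scale * (1.0 - epoch / epochs)
        Y_noisy = Y + sigma * rng.standard_normal(Y.shape)

        H = np.tanh(X @ W1 + b1)                # forward pass
        pred = H @ W2 + b2
        err = pred - Y_noisy                    # gradient of squared error
        dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H ** 2)        # backpropagate through tanh
        dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

    mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)
    print("final clean-target MSE:", float(mse))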
Bio: Osonde Osoba is a postdoctoral researcher at the Signal and Image Processing Institute at the University of Southern California (USC). He is also an instructor at USC's Viterbi School of Engineering. He received his PhD in Electrical Engineering from USC in August 2013 under the advisement of Prof. Bart Kosko. His dissertation was on "Noise Benefits in Expectation-Maximization Algorithms." He has interned at RAND and Intel where he worked on stochastic optimization algorithms and machine learning. He was a Ming Hsieh Institute Ph.D. scholar, a National GEM fellow, and an Annenberg fellow.
2014-05-29
Towards an Understanding of the Neural Mechanisms Underlying Human Postural Control
Manuel Hernandez Poizner Laboratory, Institute for Neural Computation
http://inc.ucsd.edu/poizner/
Falls are a significant cause of mortality and serious injury in older adults and particularly in people with neurological disorders, such as Parkinson's disease. The ability to maintain balance and postural control is commonly evaluated using center of pressure (COP) data. Methods such as the Stabilogram Diffusion Analysis have examined the stochastic characteristics of the COP but require numerous, long duration trials for reliable measures. To further our understanding of the underlying dynamical processes in postural control, a new conceptual framework for studying human postural control using the COP velocity autocorrelation function is proposed and its results are compared to Stabilogram Diffusion Analysis. This work suggests a concise and reliable measure of postural control that may further our understanding of the underlying mechanisms behind balance dysfunction in neurological populations and provide a tool for quantifying future neurorehabilitative interventions aimed at improving balance.
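As a concrete version of the proposed measure, the Python sketch below computes the COP velocity and its normalized autocorrelation from a single quiet-standing trial; which features of that function are then summarized (e.g., the lag or depth of its first minimum) is not specified in the abstract, so the example stops at the autocorrelation itself.

    import numpy as np

    def cop_velocity_autocorrelation(cop, fs, max_lag_s=2.0):
        """Normalized autocorrelation of the centre-of-pressure velocity.

        cop : 1-D array of COP positions (e.g., anterior-posterior, in cm)
        fs  : sampling rate in Hz
        Returns (lags in seconds, autocorrelation for lags 0..max_lag_s).
        """
        v = np.diff(cop) * fs                    # finite-difference velocity
        v = v - v.mean()
        max_lag = int(max_lag_s * fs)
        full = np.correlate(v, v, mode="full")   # unnormalized autocorrelation
        mid = len(v) - 1
        acf = full[mid: mid + max_lag + 1] / full[mid]
        return np.arange(max_lag + 1) / fs, acf

    # Hypothetical 30-second quiet-standing trial sampled at 100 Hz:
    rng = np.random.default_rng(5)
    cop = 0.01 * np.cumsum(rng.standard_normal(3000))
    lags, acf = cop_velocity_autocorrelation(cop, fs=100)
    print(acf[:5])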
2014-05-22
MEG Comparisons of Shared Information Among Schizophrenic Patients, Their Unaffected Siblings and Normal Controls
Stephen E. Robinson Core MEG Facility, NIMH Bethesda, MD
http://kurage.nimh.nih.gov/meglab/
A brief introduction to non-linear dynamical measures such as entropy and mutual information will be given, followed by how these can be applied to MEG. Previous MEG studies, using a working memory task (n-back), have shown differences among schizophrenic patients, their unaffected siblings, and normal subjects in beta-band event-related desynchronization in dorsolateral prefrontal cortex and parietal cortex. This agrees closely with findings in fMRI. Symbolic mutual information (SMI) is a pair-wise measure of shared information between brain regions. Applying SMI to the same datasets in a 50-300 Hz bandpass shows that the most significant differences in shared information among the groups are found in rostral prefrontal cortex. Furthermore, these results appear to be independent of task or memory workload. Further studies are needed to determine the sensitivity and specificity of this measure, and to investigate cofactors such as medication and gender differences.
2014-05-15
Role of Mismatch in Neuromorphic Engineering
Sadique Sheik BioCircuits Institute, UC San Diego
http://biocircuits.ucsd.edu/
Neuromorphic analog integrated circuits built to mimic biological spiking neurons and synapses involve large numbers of transistors, capacitors, and other components. Inaccuracies in the fabrication lead to variability in the sizing of these integrated components and their electrical properties, resulting in mismatch, e.g. no two identically designed transistors are truly identical. Transistor mismatch directly impacts the collective dynamics of multiple identically designed neural elements integrated on neuromorphic chips. In this chalk talk I will discuss some of the implications of transistor mismatch and other fabrication induced component variability on neuromorphic engineering, and some of the strategies adopted to tackle such variability. I will further show that some computational models can actually exploit variability to enhance their performance. I will discuss one such model that I have been working on - unsupervised learning of spatiotemporal spike patterns. I will conclude by sharing my thoughts on the kind of computational models that we, as a community, should be working towards, in order to build robust cognitive systems.
2014-05-01
I Thought I Saw it Move: Illusions of Movement
Stuart Anstis UC San Diego, Department of Psychology
http://anstislab.ucsd.edu/
Motion perception has been called one of the most ancient and primitive forms of vision (Walls 1942). Animals depend upon it to catch their next meal, or to avoid being another animal's next meal. We use it every day when we drive: cars move ten times faster than we can run, yet our motion perception and reaction times have not speeded up to match, and this can lead to death. So it is important to know our perception's cans and can'ts.
I have developed a number of new motion illusions. These produce perceptual errors that throw light on the normal processes of motion perception. The illusions vary the Contrast, Context, Size, Object-Parsing, Ambiguity and Retinal Eccentricity of moving objects. In the Footsteps illusion, static background stripes alter the contrast of moving colored squares, which makes their apparent speed vary (think: driving in the fog). In the Flying Bugs illusion, a moving background alters the perceived direction in which circling bugs fly (think: moon appears to sail behind moving clouds). In the Zigzag illusion, drifting random dots appear to move in new directions when we walk toward the screen, showing that Size matters. In the Chopsticks illusion, sliding intersections that are circling clockwise appear to move counterclockwise; and our eyes are quite unable to track this circling movement. I shall also show ambiguous patterns of regularly spaced moving spots, which appear to re-group in real time even though the stimulus remains the same, so we can watch our own visual computations in action. Also, certain moving striped patterns are correctly seen in central vision, but dramatically change their perceived directions when seen eccentrically (out of the corner of your eye). This reveals that the fovea and peripheral retina handle visual motion quite differently. Finally, moving patterns can shift the perceived position of flashed targets, showing interactions in how we see position and motion.
2014-04-17
Passive Brain-Computer Interfaces for Automated Adaptation and Implicit Control in Human-Computer Interaction
Thorsten O. Zander Technische Universität Berlin
http://www.phypa.org/
Over the last three decades, several means of interaction with Brain-Computer Interfaces (BCIs) have been extensively investigated. While most research has aimed at the design of supportive systems for severely disabled persons, over the last decade a new trend has emerged towards applications for the general population. For users without disabilities a specific type of BCI, the passive Brain-Computer Interface (pBCI), has shown high potential for improving Human-Machine and Human-Computer Interaction (HCI).
With pBCIs a new type of interaction has emerged, based on implicit control. Implicit Interaction aims at controlling a computer system by behavioral or psychophysiological aspects of user state, independently of any intentionally communicated commands. This introduces a new type of HCI, which in contrast to most currently implemented forms of interaction does not require the user to explicitly communicate with the machine. Users can focus on understanding the current state of the system and developing strategies for optimally reaching the goal of the given interaction. Based on information extracted by a pBCI and the given context, the system can adapt automatically to the current strategies of the user. Principles of Implicit Interaction in pBCI and its applications to HCI are illustrated with results of an EEG-based study to guide simple cursor movements on a 2D grid to a target.
Biography:
Thorsten Zander is trained in mathematics with a focus on mathematical logic, and studied Brain-Computer Interfaces (BCI) in the group of Klaus-Robert Mueller at the Fraunhofer FIRST in Berlin. He currently leads Team PhyPA at the Department for Biological Psychology and Neuroergonomics at the Technical University of Berlin, introducing passive BCI and investigating applications of its means of interaction for healthy users. Among several research collaborations he worked extensively with Scott Makeig at the Swartz Center for Computational Neuroscience investigating cognitive processes underlying passive BCI, and more recently with Bernhard Schoelkopf on new methodologies for passive BCIs.
2014-03-13
Your Eyes Give You Away: Pupillary responses, EEG Dynamics and Applications for BCI
Paul Sajda Laboratory for Intelligent Imaging and Neural Computing
Columbia University
http://liinc.bme.columbia.edu
As we move through an environment, we are constantly making assessments, judgments, and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions -- our implicit "labeling" of the world. In this talk I will describe our work using physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3-D environment.
Specifically, we record electroencephalographic (EEG), saccadic, and pupillary data from subjects as they move through a small part of a 3-D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to those that are labelled. Finally, the system plots an efficient route so that subjects visit similar objects of interest.
We show that by exploiting the subjects' implicit labeling, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers' inference of subjects' implicit labeling. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3-D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user's interests.
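As a rough sketch of the graph-based, semi-supervised step described above, the function below spreads interest scores from viewed objects to unseen ones over a visual-similarity graph using the generic label-spreading recipe. The graph construction, score definitions, and parameter values are assumptions for illustration; this is not the authors' actual model.

```python
import numpy as np

def propagate_interest_scores(similarity, seed_scores, seed_mask, alpha=0.85, n_iter=50):
    """Spread BCI-derived interest scores over a visual-similarity graph.

    similarity: (n, n) symmetric, non-negative similarity matrix between objects.
    seed_scores: (n,) scores for objects the subject actually viewed (0 elsewhere).
    seed_mask: (n,) boolean, True where a seed score is available.
    """
    d = similarity.sum(axis=1)
    d[d == 0] = 1.0
    s = similarity / np.sqrt(np.outer(d, d))        # symmetric graph normalization
    f = seed_scores.astype(float).copy()
    y = seed_scores.astype(float)
    for _ in range(n_iter):
        f = alpha * s @ f + (1 - alpha) * y          # diffuse scores, pull toward seeds
    f[seed_mask] = seed_scores[seed_mask]            # keep observed scores fixed
    return f                                         # high values = likely "of interest"
```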
Biography:
Paul Sajda is Professor of Biomedical Engineering and Radiology at Columbia University and Director of the Laboratory for Intelligent Imaging and Neural Computing (LIINC). His research focuses on neural engineering, neuroimaging, computational neural modeling and machine learning applied to image understanding. Prior to Columbia he was Head of the Adaptive Image and Signal Processing Group at the David Sarnoff Research Center in Princeton, NJ. He received his B.S. in Electrical Engineering from MIT and his M.S. and Ph.D. in Bioengineering from the University of Pennsylvania. He is a recipient of the NSF CAREER Award, the Sarnoff Technical Achievement Award, and is a Fellow of the IEEE and the American Institute of Medical and Biological Engineering (AIMBE). He is also the Editor-in-Chief of the IEEE Transactions on Neural Systems and Rehabilitation Engineering and a member of the IEEE Technical Committee on Neuroengineering. He has been involved in several technology start-ups and is a co-founder and Chairman of the Board of Neuromatters, LLC, a neurotechnology research and development company.
2014-02-27
Integration of EEG Source Dynamics in and Across Studies
Nima Bigdely Shamlo Swartz Center for Computational Neuroscience
http://sccn.ucsd.edu
In this talk I present a set of methods that enable the calculation of EEG source dynamics at the subject level and the analysis of this information in and across studies. I explore different methods to extract better EEG measures from individual subjects: regression to reduce confounds originating from the temporal proximity of cognitive events, optimal low-pass filtering to calculate better ERPs, and collaborative averaging to obtain better measures from small numbers of trials. I also introduce two methods for combining source-based EEG information, calculated with ICA and equivalent dipole localization, across subjects in a study: Measure Projection Analysis (MPA) allows study-level analysis of measures, such as ERP and ERSP, that are associated with single brain areas, while Network Projection Analysis enables combining network measures, such as effective connectivity, associated with an ordered pair of brain areas.
2014-02-20
Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies
Alexander Terekhov
The brain sitting inside its bony cavity sends and receives myriads of sensory inputs and outputs. A problem that must be solved either in ontogeny or phylogeny is how to extract the particular characteristics within this "blooming buzzing confusion" that signal the existence and nature of physical space, with structured objects immersed in it, among them the agent's body. The idea that spatial knowledge must be extracted from the sensorimotor flow in order to underlie perception has been considered by a number of thinkers, including Helmholtz, Poincare, Nicod, Gibson, etc. However, little work has considered how this could actually be done by organisms without a priori knowledge of the nature of their sensors and effectors. Here we show how an agent with arbitrary sensors will naturally discover spatial knowledge from the undifferentiated sensorimotor flow. The method first involves tabulating sensorimotor contingencies, that is, the laws linking sensory and motor variables. Second, further laws are created linking these sensorimotor contingencies together. The method works without any prior knowledge about the structure of the agent's sensors, body, or of the world. We show that the extracted laws endow the agent with basic spatial knowledge, manifesting itself through perceptual shape constancy and the ability to do path integration. We further show that the ability of the agent to learn all spatial dimensions depends on the ability to move in all these dimensions, rather than on possessing a sensor that has that dimensionality. This latter result suggests, for example, that three dimensional space can be learned in spite of the fact that the retinas are two-dimensional. We conclude by showing how the acquired spatial knowledge paves the way to building the notion of object.
Joint work with J. Kevin O'Regan (ERC FEEL Project).
2014-01-30
Non-Equilibrium Thermodynamics
Katja Lindenberg Department of Chemistry and Biochemistry
http://hypatia.ucsd.edu/
A variety of simple model systems provide a theoretical testbed for a thorough characterization of the efficiency of thermodynamic systems operating at maximum power (i.e., away from equilibrium), and also for the characterization of fluctuations in small thermodynamic systems in non-equilibrium steady states. These models are particularly attractive because they can be explored analytically. Starting with idealized single quantum dot devices, we will present a variety of such systems in a range of operational modes. Our goal is to understand universal properties beyond the linear response regime.
2014-01-16
Learning And Energetics In Dynamical Systems
Tony Bell Redwood Center for Theoretical Neuroscience, UC Berkeley
http://redwood.berkeley.edu/wiki/Tony_Bell
In this "real" chalk talk I will present new results and work in progress on (1) likelihood-based machine learning in dynamical systems; (2) entropy production in dynamical systems; and (3) possible connections between the three hitherto separate domains of machine learning, dynamical systems and non-equilibrium statistical mechanics. I will also present a survey of the concepts that we need to integrate to create an ambitious synthesis of these fields.
2013-11-21
Towards Clinically Viable Neural Prosthetic Systems
Vikash Gilja Department of Electrical and Computer Engineering, UC San Diego
http://www.ece.ucsd.edu
Brain-machine interfaces (BMIs) translate neural activity into control signals for guiding prosthetic devices, such as computer cursors and robotic limbs, offering disabled patients greater interaction with the world. BMIs have recently demonstrated considerable promise in proof-of-concept animal experiments and in human clinical trials. However, a number of challenges for successful clinical translation remain, including system performance and robustness across time and behavioral contexts.
In this talk I will address these challenges by describing two classes of BMI experiments. For the first class of experiments, I will describe a study with rhesus monkeys and the recent translation of study results to a human participant. In these experiments we record from neurons in motor cortex using chronically implanted electrode arrays and focus on control algorithm design. Through real-time closed-loop BMI experiments we demonstrate methods that increase performance and improve robustness. In the second class of experiments, we develop and verify a set of novel wireless neural recording systems, enabling the study of neural activity for longer time periods and across more complex behaviors.
2013-11-14
Autodigestion: A Basis for Inflammation and Disease
Geert Schmid-Schönbein Department of Bioengineering, UC San Diego
http://microcirculation.ucsd.edu/
There is increasing evidence that markers for inflammation accompany virtually all diseases, including stroke and chronic neuronal and retinal degenerative diseases. Inflammation is fundamentally a tissue repair mechanism, and thus the question arises: what is the cause of tissue injury in conditions that lead to inflammation? I will discuss this fundamental question in the case of shock, sepsis and multi-organ failure. Shock kills hundreds of thousands of people each year in the US alone, and there is no treatment other than alleviation of symptoms. The markers for inflammation in shock are severe and in short order lead to cell and organ failure. The cause is currently unknown in spite of many ideas put forward, e.g. involvement of intestinal bacteria and their toxins, the secondary products they generate (e.g. cytokines, complement), or depletion of metabolites.
Even early observations and studies indicated that in critically ill patients the intestine plays a central role. Hippocrates stated: "Disease begins in the gut." You should ask yourself the question: how is it possible that you can digest, for example, a sausage whose skin is made of intestine, yet not digest your own intestine? How did nature solve this problem?
The powerful digestive enzymes synthesized by the pancreas are transported to and fully activated in the intestine as part of normal food digestion. They need to be compartmentalized inside the lumen of the intestine as a requirement for normal digestion. Containment of digestive enzymes in the lumen of the intestine is provided by the mucosal barrier. This barrier is made up of a layer of mucin and the intestinal epithelium, and usually has low permeability to digestive enzymes. But should the mucosal barrier break down in shock, the digestive enzymes leak into the wall of the intestine and start an autodigestion process, causing extensive tissue damage. The digestive enzymes also generate small-molecular-weight cytotoxic mediators, which together with the digestive enzymes are transported into the systemic circulation via the portal venous system, the intestinal lymphatics and even through the peritoneum. The mixture of digestive enzymes and their fragments causes cell and organ dysfunction even in remote organs, to the point of complete cell death and organ failure. We have demonstrated that blockade of digestive enzymes in the lumen of the intestine in experimental forms of shock serves to reduce breakdown of the mucosal barrier, autodigestion of the intestine, organ dysfunction and mortality.
2013-10-31
Real-Time Modeling, Classification, and 3D Visualization of Neuronal Source Dynamics and Connectivity using High-Density Wearable EEG
Tim Mullen Swartz Center for Computational Neuroscience, INC, UC San Diego
http://sccn.ucsd.edu/wiki/SIFT
Dynamic cortico-cortical interactions are central to neuronal information processing. The ability to monitor these interactions in real time may prove useful for Brain-Computer Interface (BCI) and other applications, providing information not obtainable from univariate measures such as bandpower and evoked potentials. Wearable (mobile, unobtrusive) EEG systems likewise play an important role in BCI applications, affording data collection in a wider range of environments. However, reliable real-time modeling of neuronal source dynamics in mobile settings faces challenges, including mitigating artifacts and maintaining fast computation and good modeling performance with limited amounts of data. Furthermore, prediction of mental and behavioral states from high-dimensional spatio-spectro-temporal connectivity parameters poses additional challenges. Here we describe recent efforts to address these challenges using novel developments in wearable hardware, signal processing, and machine learning. We hope this will ultimately contribute to the development of EEG as a mobile neuroimaging modality.
2013-10-17
Brain Computer Interface: An Embedded Signal Processing Perspective
Roozbeh Jafari University of Texas at Dallas
http://www.essp.utdallas.edu
Most clinical, wellness, and entertainment applications of BCI require wearable and portable devices. The enhanced wearability of the BCI system, along with the user's comfort and quality of experience play an important role in adopting this new technology for various applications. The next generation of BCI systems will benefit from cross-layered optimization techniques spanning from electrode and analog front-end (AFE) optimization to hardware architecture exploration, signal processing and BCI paradigm development all targeted towards enhancing the system usability. In this talk, we will highlight several techniques developed for electrode optimization and noise reduction using an AFE assisted feedback. We will discuss BCIBench, a benchmarking suite which includes a wide range of algorithms used for pre-processing, feature extraction and classification in BCI applications. We will provide insights into architectural components that can enhance the performance and reduce the power consumption of BCI systems. We will discuss several BCI signal processing techniques that can benefit from tight coupling with the AFE. We will present a number of novel BCI paradigms that enhance the transfer rate over classic paradigms. We will conclude the talk by highlighting the need for system-level and holistic approaches enhancing the performance and the usability of the next generation BCI systems.
Biography:
Roozbeh Jafari is an associate professor at UT-Dallas. He received his PhD in Computer Science (UCLA) and completed a postdoctoral fellowship at UC-Berkeley. His research interest lies in the area of wearable computer design and signal processing. His research has been funded by the NSF, NIH, DoD (TATRC), AFRL, AFOSR, DARPA, SRC and industry (Texas Instruments, Tektronix, Samsung & Telecom Italia). He has published over 100 papers in refereed journals and conferences. He has served as technical program committee chair for several flagship conferences in the area of Wireless Health and Wearable Computers, including ACM Wireless Health 2012, the International Conference on Body Sensor Networks 2011 and the International Conference on Body Area Networks 2011. He is an associate editor for the IEEE Sensors Journal and the IEEE Internet of Things Journal. He is the recipient of the NSF CAREER award (2012) and the RTAS 2011 best paper award.
2013-10-03
Connecting the dots on the brain initiative
Terry Sejnowski Salk Institute for Biological Studies
Institute for Neural Computation, UC San Diego
http://cnl.salk.edu/
2013-06-06
Nanophotonics technology and applications
Shaya Fainman UC San Diego Department of Electrical and Computer Engineering
http://emerald.ucsd.edu/
Various future system applications involving photonic technology rely on our ability to integrate it on a chip to augment and/or interact with other signals (e.g., electrical, chemical, biomedical, etc.). For example, future computing and communication systems will need integration of photonic circuits with electronics and thus require miniaturization of photonic materials, devices and subsystems. Another example involves integration of microfluidics with nanophotonics, where the former is used for particle manipulation, preparation and delivery, and the latter, in a large array format, for parallel detection of numerous biomedical reactions useful for healthcare applications. To advance nanophotonics technology we have established design, fabrication and testing tools. The design tools need to incorporate not only the electromagnetic equations, but also the material and quantum physics equations to include near-field interactions. These designs are integrated with device fabrication and characterization to validate the device concepts and optimize their performance. Our research emphasizes the construction of passive (e.g., engineered composite metamaterials, filters, etc.) and active (e.g., nanolasers) components on-chip, with the same lithographic tools as electronics. In this talk, we discuss some of the passive metamaterials and devices that have recently been demonstrated in our lab. These include our most recent results on a monolithically integrated short-pulse compressor realized on an SOI material platform, and the design, fabrication and testing of nanolasers constructed using metal-dielectric-semiconductor resonators confined in all three dimensions.
2013-05-23
Nervous Systems From The Bottom Up
Henry Abarbanel Department of Physics, UC San Diego and
Scripps Institution of Oceanography
Methods for transferring information from experiments to models have been given an exact statistical physics setting. Using this framework we analyzed data from experiments on individual neurons. We will discuss ideas for extending this to experiments on networks, now being designed for execution in the Margoliash laboratory at the University of Chicago.
2013-05-09
Extremum Seeking and Learning in Adversarial Networks
Miroslav Krstic Associate Vice Chancellor for Research
Director, Cymer Center for Control Systems and Dynamics
Daniel L. Alspach Endowed Chair in Dynamic Systems and Control
http://flyingv.ucsd.edu/
Extremum seeking (ES) is a method for real-time non-model-based optimization, though it can also be viewed as a form of data-based (black box) learning. ES was invented in 1922, but the past decade has been its golden age, both in terms of the development of theory and in terms of penetration into industry and into fields outside of control engineering. An extremum seeker is a dynamical system whose state is the parameter vector with which the optimization is being conducted. ES researchers work on designing such dynamical systems and on studying their convergence (typically in continuous time, using averaging theory). An extremum seeker uses only the measurement of the performance index (without knowing the functional dependence of the performance index on the parameter vector) and employs perturbation signals - either periodic or stochastic - in the process of learning (similar to "mutations" in genetic algorithms). After a historical overview, I will present recent ES designs that provably converge to Nash equilibria in noncooperative games. As I will illustrate, extremum seeking is a natural way to explain how E. coli or fish seek food - the former using stochastic perturbations and the latter using deterministic perturbations. In other words, ES reverse-engineers the feedback algorithms used by such organisms.
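A minimal discrete-time sketch of the classic perturbation-based scheme the abstract refers to is given below; the dither amplitude, frequency, and gain are arbitrary illustrative choices, and the performance index here is just a toy quadratic map.

```python
import numpy as np

def extremum_seek(J, theta0, a=0.1, omega=5.0, k=0.5, dt=0.01, T=200.0):
    """Discrete-time sketch of classic perturbation-based extremum seeking.

    J: black-box scalar performance index to be maximized.
    a, omega: amplitude and frequency of the sinusoidal dither.
    k: adaptation gain.  All constants are illustrative, not tuned values.
    """
    theta_hat = float(theta0)
    for n in range(int(T / dt)):
        t = n * dt
        theta = theta_hat + a * np.sin(omega * t)    # perturb the current estimate
        y = J(theta)                                 # measure the index only
        grad_est = y * np.sin(omega * t)             # demodulation ~ local gradient
        theta_hat += k * grad_est * dt               # slow gradient ascent
    return theta_hat

# Seeking the maximum of an unknown map J(theta) = 3 - (theta - 2)^2:
print(extremum_seek(lambda th: 3.0 - (th - 2.0) ** 2, theta0=0.0))   # converges near 2
```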
2013-04-25
Coupled Brainstem Sensorimotor Oscillators
David Kleinfeld Section of Neurobiology and Department of Physics, UCSD
http://physics.ucsd.edu/neurophysics/
*** NOTE SPECIAL TIME ***
Time: 2:30pm-3:30pm
Whisking and sniffing are predominant aspects of exploratory behaviour in rodents. Yet the neural mechanisms that generate and coordinate these and other orofacial motor patterns remain largely uncharacterized. We use anatomical, behavioural, electrophysiological and pharmacological tools to show that whisking and sniffing are coordinated by respiratory centres in the ventral medulla. We delineate a distinct region in the ventral medulla that provides rhythmic input to the facial motor neurons that drive protraction of the vibrissae. Neuronal output from this region is reset at each inspiration by direct input from the pre-Bötzinger complex, such that high-frequency sniffing has a one-to-one relationship with whisking, whereas basal respiration is accompanied by intervening whisks that occur between breaths. We conjecture that the respiratory nuclei, which project to other premotor regions for oral and facial control, function as a master clock for behaviours that coordinate with breathing. Work with Martin Deschenes and Jeffrey Moore.
2013-04-11
Cortical Dynamics Of Word Understanding
Eric Halgren Department of Neurosciences, and
Multimodal Imaging Laboratory, UCSD
http://mmil.ucsd.edu/
Despite 150 years of scientific investigation, fundamental issues in word understanding remain unresolved. For example: Is acousto-phonetic processing affected by the lexico-semantic context (i.e., does expecting a particular word bias how we transform a sound into phonemes)? Do written words have to be re-coded phonologically before lexical access (i.e., do we have to mentally sound-out a word before we can understand it)? Does lexical access precede semantic encoding (i.e., do we first have to know what word it is before we can access its meaning)? These questions critically concern the dynamics of neural information processing, which can be observed non-invasively with magnetoencephalography (MEG), as well as invasively with local field potential and single unit recordings in patients. I will argue that these data indicate that the answers to the questions posed above are: No, Maybe, and No.
2013-03-28
Cerebellar Prediction and Learning Mechanisms and Implications
Sascha du Lac Systems Neurobiology, Salk Institute
http://www.snl-d.salk.edu
Success in a complex world requires learning, prediction, and action. The cerebellum of humans and other vertebrate animals contains over half of the brain's neurons, which are devoted to optimizing prediction and action over rapid timescales (< 500 msec). Remarkably, this vast computational power influences the rest of the brain solely via convergence of cerebellar Purkinje cell inhibitory synapses onto a relatively tiny number of neurons in downstream cerebellar and vestibular nuclei. In this seminar, I will discuss surprising new findings from our laboratory and others about microcircuits and mechanisms responsible for dynamically adaptive cerebellar control of cognition, physiological regulation, and movement.
2013-03-14
Characterizing Neural Feature Selectivity And Invariance Using Natural Stimuli
Tatyana Sharpee Computational Neurobiology Laboratory
Helen McLoraine Developmental Chair in Neurobiology
Salk Institute for Biological Studies
http://cnl-t.salk.edu
In this talk I will describe a set of computational tools for characterizing the responses of high-level sensory neurons. The goal is to describe, in as simple a way as possible, how the responses of these neurons signal the appearance of conjunctions of different features in the environment. The focus will be on computational methods that are designed to work with stimuli derived from the natural sensory environment. Some of the new methods that I will discuss characterize neural feature selectivity while assuming that the neural responses exhibit a certain type of invariance, such as position invariance for visual neurons. Other methods do not require one to make an assumption of invariance, and instead can determine the type of invariance by analyzing the relationships between the multiple stimulus features that affect the neural responses. I will discuss the relative advantages and limitations of these computational tools and illustrate their performance using model neurons as well as recordings from the visual system.
2013-02-28
Locomotion, Perception, and Neurorobotic Models
Anthony Lewis Qualcomm Inc.
http://www.qualcomm.com
Behavior is an expression of the interaction between the body, the brain and the environment. Neurorobotics provides a tool that can be used to model this interaction. In the neurorobotic paradigm, a biologically plausible model acts through a robotic body to interact with the world.
In this talk I will explore several themes centered on locomotion: generation of locomotion using spiking neurons, learning to walk using global and local cost functions, and incorporation of vision, including stereopsis and optic flow, to guide locomotion. I will end with a presentation of a physical model of the lower limbs of a human including both mono-articular and bi-articular (acting on two joints) muscles, as well as load and position sensory feedback. This robot demonstrated how a relatively small network of spiking neurons and biologically realistic dynamics could yield a remarkably human-like gait.
2013-02-14
Predictive Modeling of Physiological Systems: From Single Cells to Whole Brains and Back
Joaquin Rapela Swartz Center for Computational Neuroscience, INC, UCSD
http://sccn.ucsd.edu/~rapela/
Most existing techniques to characterize physiological systems from input/output data use simplistic models and estimate their parameters from mathematically convenient, but behaviorally not very relevant, inputs. However, recent increases in computational power and advances in statistics now make possible new techniques that use more complex models and estimate their parameters from richer stimuli. In this talk I will describe two such techniques. I will first introduce the Extended Projection Pursuit Regression algorithm (ePPR, Rapela et al. 2010) for the nonlinear characterization of response properties of single cells from high-dimensional stimuli with naturalistic (and correlated) statistics. I will present new results showing that ePPR reveals, for the first time, quadrature pairs of inhibitory filters in the responses to natural images of cortical complex cells in cat primary visual cortex. Most techniques to characterize single cells are predictive (i.e., the quality of the estimated models is determined by how well they can predict cell responses). However, the majority of methods to characterize EEG data are NOT predictive, in spite of the rich behavioral data that could be predicted in EEG experiments. In the second part of this talk I will present the results of using a predictive technique, similar to ePPR, to characterize the brain dynamics of humans performing an audio-visual target-detection task. These results show a very high correlation between subjects' behavior (both error rates and reaction times) and the modulation of alpha activity (both amplitude and phase) accounted for by the predictive model.
This finding has interesting implications. Scientifically, it adds new supportive evidence to recent research on the link between alpha rhythms and behavior [Mathewson et al. 2011], and to recent theories relating alpha synchronization to top-down inhibitory control [Klimesch et al. 2007]. Methodologically, the non-linear, multivariate and predictive model used in this work opens a new way to analyze EEG data and contributes a strong example to recent applications of multivariate predictive models to EEG analysis [Pernet et al. 2011]. In addition, this finding has translational applications, since alpha power could be modulated as predicted by the model to improve subjects' behavior (using SSVEP or rTMS, as shown by [Mathewson et al. 2010] and [Hamidi et al. 2009], respectively).
Biography: Joaquin Rapela completed his undergraduate degree in Computer Science at the University of Buenos Aires, Argentina. After working at the IBM Almaden Research Center, San Jose, CA, as a Staff Software Engineer, he completed his PhD in Electrical Engineering at the University of Southern California, where he developed and applied signal processing tools to characterize responses of visual cells. He was jointly advised by Prof. Norberto Grzywacz (Neuroscience) and Prof. Jerry Mendel (Engineering). Since November 2010 Joaquin has been working at the Swartz Center for Computational Neuroscience, characterizing the brain dynamics of attention with EEG and those related to eye movements with EEG and eye tracking.
2013-01-17
Solving The Forward And Inverse Problem In EEG Source Analysis
Zeynep Akalin Acar Swartz Center for Computational Neuroscience, INC, UCSD
http://sccn.ucsd.edu/~zeynep/
Localization of brain activity from EEG measurements is called electric source imaging (ESI). ESI is important in both clinical and basic brain research. The solution of the scalp potentials for a specific dipole configuration is the forward problem of ESI. Complementarily, the inverse problem is the localization of the sources based on the measurements and the forward calculations. The three most important components of a successful source localization approach are: (a) an electric forward head model for the subject, (b) a ('source space') model of possible source locations, and (c) an inverse source localization method. In this talk, I will give brief definitions of forward and inverse EEG problem solutions and present our simulation studies based on realistic individual-subject forward head models, investigating source localization errors produced by inaccuracies introduced by the use of template head models, inaccurate skull conductivity estimates, imprecise electrode co-registration, and low electrode numbers. Results show that when individual subject MR head images are not available to construct subject-specific head models, accurate EEG source localization should employ a four- or five-layer BEM template head model incorporating an accurate skull conductivity estimate and warped to 64 or more accurately 3-D measured and co-registered electrode positions.
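To make the forward/inverse terminology concrete, here is a toy numerical sketch of the linear relationship between sources and scalp potentials and a simple regularized inverse. The lead-field matrix is random stand-in data rather than the output of a real BEM/FEM head model, so this illustrates only the algebra, not the modeling work described in the talk.

```python
import numpy as np

# Toy illustration of the linear relationship  v = L s + noise,  where L is a
# (channels x sources) lead-field matrix.  Here L is random stand-in data; in the
# work described above it comes from a realistic BEM/FEM head model.
rng = np.random.default_rng(0)
n_channels, n_sources = 64, 500
L = rng.standard_normal((n_channels, n_sources))

# Forward problem: given a source vector, predict scalp potentials.
s_true = np.zeros(n_sources)
s_true[[40, 300]] = [1.0, -0.7]                    # two active "dipoles"
v = L @ s_true + 0.01 * rng.standard_normal(n_channels)

# A simple inverse: Tikhonov-regularized minimum-norm estimate,
#   s_hat = L^T (L L^T + lambda I)^{-1} v
lam = 1e-2
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_channels), v)
print(np.argsort(np.abs(s_hat))[-5:])              # indices of the strongest estimates
```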
2012-11-29
Decisions, Decisions, Decisions!
Angela Yu Department of Cognitive Science, UCSD
http://www.cogsci.ucsd.edu/~ajyu
Decision theory is a powerful formal framework for understanding how noisy inputs can be translated into concrete actions. Using tools from Bayesian statistical inference and stochastic control theory, my work has shown that many behavioral and neural phenomena in perception, action, and cognition can be understood as rational decision-making by the brain at different timescales and levels of abstraction. In this talk, I will give an overview of my modeling and experimental work that uses decision-theoretic concepts to understand the formal link between neurophysiology and behavior in perception, attention, inhibitory control, and action planning.
2012-11-15
Neural Dynamics of BEAT Perception
John R. Iversen The Neurosciences Institute
http://www.nsi.edu/~iversen/
Our perceptions are jointly shaped by external stimuli and internal interpretation. The perceptual experience of a simple rhythm, for example, strongly depends upon its metrical interpretation (how one hears the basic beat or 'pulse' of the rhythm). This pulse is endogenously generated and has important consequences for perception, underlying a fundamental mode of temporal perception. The beat sets the origin and timescale for the perception of rhythm, and has strong cognitive advantages for recognition and recall of patterns. Interestingly, the internal pulse is not uniquely determined by the input stimulus, and instead can be altered at will, providing a model of the voluntary cognitive organization of perception. Where in the brain do the bottom-up and top-down influences in rhythm perception converge? Is it purely auditory, or does it involve other systems? I will present ongoing work aimed at understanding the neural mechanisms responsible for beat perception and metrical interpretation. In one experiment, we measured brain responses with magnetoencephalography as participants listened to a repeating rhythmic phrase. In separate trials, listeners were instructed to mentally impose different metrical organizations on the rhythm by hearing the downbeat at one of three different phases in the rhythm. The imagined beat could coincide with a note, or with a silent position (yielding a syncopated rhythm). Since the stimulus was unchanged, observed differences in brain activity between the conditions should relate to active rhythm interpretation. Two effects related to endogenous processes were observed. First, sound-evoked responses were increased when a note coincided with the imagined beat. This effect was observed in the beta range (20-30 Hz), consistent with earlier studies. Second, and in contrast, induced beta responses were decoupled from the stimulus and instead tracked the time of the imagined beat. The results demonstrate temporally precise rhythmic modulation of brain responses that reflects the active interpretation of a rhythm. In the discussion we will consider our work in light of 'motor theories' of perception that posit a kind of analysis by synthesis. In the case of rhythm there is converging evidence for premotor activity when listening to rhythms with a beat in the absence of overt movement, suggesting a role for 'covert action' in shaping our perception of timing in sound.
2012-11-08
Joint Development of Perception and Active Eye Movements
Bertram Shi Department of Electronic and Computer Engineering and Division of
Biomedical Engineering Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
http://www.ee.ust.hk/~eebert/
Rather than explicitly programming a robot, might it be possible to seed a robot with a minimal structure and allow it to learn how to behave intelligently in the environment, much in the same way a baby develops? As a first step towards such a system, we must have models of the development of perception, the robot's internal representation of the environment based on its sensory input, and of the development of behavior, the generation of intelligent actions based upon the perceived environment. Past work has studied these two problems in isolation. For example, it has been shown that a developmental algorithm based on sparse coding can account for the shape of receptive fields of visual neurons in the mammalian brain. Reinforcement learning has been used to model the development of behavior. However, this isolated viewpoint ignores the fact that behavior and sensory perception are mutually dependent. Sensory perception drives behavior, but behavior can also influence the development of sensory perception by altering the statistics of the sensory input. Thus, there is a "chicken-and-egg" problem as to which arises first. Indeed, it is likely that they develop simultaneously. But how should these two learning processes interact? What constraints do we need to put into place to ensure that the learning succeeds in generating intelligent behavior? I will describe joint work with Jochen Triesch at the Frankfurt Institute for Advanced Studies, which addresses these problems by modeling the joint development of visual perception and the control of eye movements. In particular, I will describe our work in modeling the interaction between the development of the neural representation of binocular disparity and the development of a binocular vergence eye-movement control policy to maintain fixation.
Bio: Bertram E. Shi received the B.S. and M.S. degrees in electrical engineering from Stanford University, Stanford, CA, USA in 1987 and 1988. He received the Ph.D. degree in electrical engineering from the University of California, Berkeley, CA, USA in 1994. He then joined the faculty of the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology, Kowloon, Hong Kong. He is currently a Professor in the ECE department and the Division of Biomedical Engineering. His research interests are in bio-inspired signal processing and robotics, neuromorphic engineering, computational neuroscience, machine vision, image processing, and hardware implementations of neural networks. Prof. Shi is an IEEE Fellow and has twice served as Distinguished Lecturer for the IEEE Circuits and Systems Society. He is an Associate Editor for the IEEE Transactions on Biomedical Circuits and Systems, as well as Frontiers in Neuromorphic Engineering.
2012-11-01
BCILAB and applications to EEG cognitive interfaces
Christian Kothe Swartz Center for Computational Neuroscience
Institute for Neural Computation, UCSD
http://sccn.ucsd.edu/wiki/BCILAB
Chalk Talk Video #1 https://www.youtube.com/watch?v=w8Z3b_aftco
Chalk Talk Video #2 https://www.youtube.com/watch?v=YUB0vxNmm2w
With an increasingly deep understanding of neuroscience, as well as of disciplines such as statistical inference and optimization, developing in parallel with rapid progress in sensor engineering and high-performance yet low-cost computation, comes the ability to interface the human nervous system with the world of machines. In this chalk talk I will discuss the BCILAB toolbox, a MATLAB toolbox for the rapid design, prototyping and evaluation of EEG-based brain-computer interfaces and other types of cognitive interfaces, which at present is one of the most comprehensive such systems in terms of the number of methods implemented. Some of its key design choices and features will be explained in detail, as will a small selection of state-of-the-art algorithms and applications enabled by those algorithms under favorable conditions. I conclude with a brief overview of the larger ecosystem in which BCILAB exists, including our new multi-modal data acquisition platform known as the lab streaming layer, and with an outlook on future directions, such as the expansion into online connectivity measures and motion analysis via the SIFT and MoBILAB toolboxes, respectively.
Host: Scott Makeig
2012-10-25
Laminar Cortical Dynamics Of Visual Perception, Attention, Recognition, And Consciousness
Steve Grossberg Wang Professor of Cognitive and Neural Systems
Center for Adaptive Systems, Center for Computational Neuroscience and
Neural Technology, and Departments of Mathematics, Psychology, and
Biomedical Engineering
Boston University, Boston, MA 02215
steve@bu.edu
http://cns.bu.edu/~steve
Time:
Seminar: 12:30pm-1:30pm
Chalk talk/Q&A session: 1:30pm-2pm
Sponsor: Institute for Neural Computation Chalk Talk Series, and Temporal Dynamics of Learning Center Seminar Series
There has been a great deal of theoretical progress in clarifying how brains give rise to minds. This progress is illustrated by two new computational paradigms: Complementary Computing clarifies the nature of global brain specialization, whereas Laminar Computing clarifies why all neocortical circuits use variants of a shared layered architecture. Recent models of 3D vision and figure-ground separation, speech perception, and cognitive working memory and unitization all use variants of this laminar design. The talk will outline functional roles of identified cells in visual cortex that help the brain to see. It will propose functional links that occur during category learning between brain processes of consciousness, learning, expectation, attention, resonance, and synchrony, along with supportive behavioral and neurobiological data. The talk will suggest how a hierarchy of laminar cortical regions interacts with specific and nonspecific thalamic regions during category learning using spiking dynamics, STDP, local field potentials, and synchronous oscillations. It will then propose how the brain learns to bind multiple views of an object into a view-invariant object category while scanning a scene with eye movements. In particular, how does the brain avoid erroneously binding views of different objects together during unsupervised learning, and how do the eyes scan multiple views of an object even before we know what the object is? This analysis predicts how processes of spatial attention, object attention, category learning, figure-ground separation, and predictive remapping in cortical areas V1, V2, V3A, V4, ITp, ITa, PPC, LIP, and PFC interact during invariant object category learning.
Hosts: Gary Cottrell and Gert Cauwenberghs
2012-10-18
Synthesizing Cognition In Neuromorphic VLSI Systems
Emre Neftci Integrated Systems Neuroengineering Laboratory, and
Institute for Neural Computation
UCSD
The hallmark of cognitive behavior is the ability to make economically advantageous choices based not only on immediately available data, but also on the longer time-scale context in which the choice is embedded. In this chalk talk, I will present a method for specifying such behaviors on a physical substrate of inherently imprecise and noisy neuromorphic VLSI circuits. The method casts the target behavior as a "soft" state-machine that is configured on an abstract, computational layer, composed of subnets of spiking neurons. The neuronal subnets are recurrently connected and thereby able to support reliable processing through active gain, signal restoration, and multistability. The desired states and transitions of the high-level behavior can be easily programmed into the computational layer by introducing only sparse connections between some neurons of the various subnets. This abstract layer is realized on the hardware substrate of silicon neuron circuits using a mapping between the parameters of the layer's model neurons, and the bias voltages of the underlying analog-digital electronic circuits. The configuration method is applied to a real-time CMOS VLSI neuromorphic system that performs task-dependent classification of motion patterns contained in the spike-event data generated by a silicon retina.
2012-06-21
2012-06-07
Affective neuroscience: meta-analytic findings
Tarik S Bel-Bahar Swartz Center for Computational Neuroscience
http://sccn.ucsd.edu
We will begin with a brief review of major psychological models of emotion and their implications for cognitive-affective neuroscience. We will then move on to the primary findings from multiple recent brain imaging meta-analyses related to emotion, reward, emotional faces, and pain/empathy. The last half of the talk will consist of an open-ended discussion of the implications of the meta-analytic findings for research and theory.
2012-05-24
How To Build A Synapse From Molecules, Membranes, And Monte Carlo Methods
Tom Bartol Computational Neurobiology Laboratory
The Salk Institute for Biological Studies
http://cnl.salk.edu
Biochemical signaling pathways are integral to the information storage, transmission, and transformation roles played by neurons in the nervous system. Far from behaving as well-mixed bags of biochemical soup, the intra- and inter-cellular environments in and around neurons are highly organized reaction-diffusion systems, with some subcellular specializations consisting of just a few copies each of the various molecular species they contain. For example, glutamatergic synapses at dendritic spines in area CA1 hippocampal pyramidal cells contain perhaps 100 AMPA receptors, 20 NMDA receptors, 10 CaMKII complexes, and 5 free Ca++ ions in the spine head. Much experimental data has been gathered about the neuronal signaling pathways involved in processes such as synaptic plasticity, especially recently, thanks to new molecular probes and advanced imaging techniques. Yet fitting these observations into a clear and consistent picture that is more than just a cartoon, and that can provide biophysically accurate predictions of function, has proven difficult due to the complexity of the interacting pieces and their relationships. Gone are the days when one could do a simple thought experiment based on the known quantities and imagine the possibilities with any degree of accuracy. This is especially true of biological reaction-diffusion systems where the number of discrete interacting particles is small, the spatial relationships are highly organized, and the reaction pathways are non-linear and stochastic. Here I will present how biophysically accurate computational experiments performed on cell signaling pathways can be a powerful way to study such systems and can help formulate and test new hypotheses in conjunction with bench experiments. MCell is a Monte Carlo simulator designed for the purpose of simulating exactly these sorts of cell signaling systems. I will introduce fundamental concepts of cell signaling processes in the organized and compact spaces of synapses, and the insights that can be gained through building realistic models of neurotransmission.
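As a pocket-sized illustration of why discrete stochastic simulation matters at these copy numbers, the sketch below runs a well-mixed Gillespie simulation of Ca++ binding to a small pool of sites. This is not MCell's spatial, particle-based algorithm; the rate constants and copy numbers are arbitrary illustrative values.

```python
import numpy as np

def gillespie_binding(n_ca, n_sites, k_on, k_off, t_end):
    """Well-mixed Gillespie simulation of Ca++ binding to a small pool of sites."""
    rng = np.random.default_rng()
    t, bound = 0.0, 0
    trajectory = [(t, bound)]
    while t < t_end:
        rate_bind = k_on * n_ca * (n_sites - bound)    # propensity of a binding event
        rate_unbind = k_off * bound                    # propensity of an unbinding event
        total = rate_bind + rate_unbind
        if total == 0:
            break
        t += rng.exponential(1.0 / total)              # waiting time to the next event
        if rng.random() < rate_bind / total:           # choose which reaction fires
            bound, n_ca = bound + 1, n_ca - 1
        else:
            bound, n_ca = bound - 1, n_ca + 1
        trajectory.append((t, bound))
    return trajectory

# Copy numbers on the order of those quoted above; rate constants are arbitrary.
print(gillespie_binding(n_ca=5, n_sites=20, k_on=0.05, k_off=0.1, t_end=50.0)[-1])
```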
2012-05-10
The "B" of BCIs: How Cognitive Neuroscience Matters With P300 and Other BCIs
Brendan Allison Laboratory of Brain-Computer Interfaces
TU Graz, Austria
http://bci.tugraz.at/
In a (classically defined) brain-computer interface (BCI), users must perform voluntary mental tasks that each entail distinct patterns of brain activity. Hence, for cognitive neuroscientists, ongoing challenges include identifying, modifying, and testing these mental tasks. This talk will discuss this general challenge and then describe specific examples with one common type of BCIs called the P300 BCI.
2012-04-26
Spatial Processing and Map Learning in the Entorhino-Hippocampal Circuit
Stefan Leutgeb Section of Neurobiology
Division of Biological Sciences
http://biology.ucsd.edu/faculty/sleutgeb.html
My laboratory is interested in identifying neuronal mechanisms for long-term memory storage at the systems level. Because specialized hippocampal circuitry is necessary for many forms of memory, we investigate the computations that are performed in a local circuitry that consists of entorhinal inputs to hippocampus and hippocampal outputs to entorhinal cortex. In particular, our research asks which mechanisms generate hippocampal spatial firing patterns and how spatial firing patterns contribute to spatial memory. The input layers of the medial entorhinal cortex to hippocampus contain many cell types with precise spatial firing patterns, including cells with grid-like spatial firing patterns (i.e., grid cells). We found that silencing the neuronal activity in the medial septal area abolishes theta oscillations and grid-like firing patterns in entorhinal cortex. Even though precise spatial and temporal firing patterns in entorhinal cortex and hippocampus are disrupted, we found that the spatial firing patterns of hippocampal cells are partially retained after septal inactivation. We therefore asked whether septal input to entorhinal cortex is particularly important for generating new spatial maps of environments. We find that the formation of new spatial maps is disrupted to a substantially larger extent than the retention of familiar maps. These findings have important implications for understanding how neurodegenerative processes in the entorhinal cortex can result in a failure to appropriately organize neuronal activity and synaptic plasticity, and thus in the memory problems that are characteristic of Alzheimer's disease.
2012-04-12
2012-03-29
Spatiotemporal Modeling Of Cortical Source Dynamics And Interactions During Epileptic Seizure
Tim Mullen Swartz Center for Computational Neuroscience
Institute for Neural Computation, UCSD
http://www.antillipsi.net/
Mapping the dynamics and spatial topography of brain source processes involved in initiating and propagating seizure activity is critical for effective epilepsy diagnosis, intervention, and treatment. In this report we analyze neuronal dynamics before and during epileptic seizures using adaptive multivariate autoregressive (VAR) models applied to maximally independent (ICA) sources of intracranial EEG (iEEG, ECoG) data recorded from subdural electrodes implanted in a human patient for evaluation of surgery for epilepsy. We examine the spatial distribution on the cortical surface of causal sources and sinks of ictal activity using a novel combination of multivariate Granger causality and graph-theoretic metrics, and distributed multi-scale source localization using Sparse Bayesian Learning. Evidence from this analysis reveals multiple distinct ictal stages corresponding to shifts in inter-component spatiotemporal dynamics and connectivity structure in or near clinically identified epileptic foci before, during, and following seizures.
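For orientation, the sketch below computes a time-domain Granger-causality value between two signals from ordinary least-squares autoregressive fits, comparing the residual variance of a restricted model (the target's own past) with that of a full model that adds the other signal's past. The study itself uses adaptive multivariate models on ICA sources; this pairwise version is only a minimal illustration.

```python
import numpy as np

def granger_y_to_x(x, y, order=5):
    """Time-domain Granger causality from y to x via least-squares AR/VAR fits.

    Returns ln(var_restricted / var_full): values > 0 mean the past of y helps
    predict x beyond x's own past.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    rows = range(order, len(x))
    target = np.array([x[t] for t in rows])
    past_x = np.array([x[t - order:t][::-1] for t in rows])   # x's own history
    past_y = np.array([y[t - order:t][::-1] for t in rows])   # the other signal's history

    def residual_var(regressors):
        design = np.column_stack([np.ones(len(regressors)), regressors])
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ coef)

    var_restricted = residual_var(past_x)                      # x's past only
    var_full = residual_var(np.hstack([past_x, past_y]))       # x's and y's past
    return float(np.log(var_restricted / var_full))
```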
2012-03-15
Show Me Your Poker Face: Recognizing Emotional Expressions
Jessie Peissig Department of Psychology, California State University, Fullerton
http://psych.fullerton.edu/jpeissig/
I will discuss a database of faces showing genuine emotional expressions that we have collected and are working on validating. I will also present two studies that have used those faces, including one study comparing males and females and a second study looking at poker players.
Host: Gary Cottrell
2012-03-01
Movement Matters: Investigating Eye Movements And Dyspraxia In Autism
Leanne Chukoskie Computational Neurobiology Laboratory, Salk Institute
http://www.snl.salk.edu/~leanne/
The literature on looking behavior of individuals with autism is extensive, as is the literature on spatial attention differences in autism. Yet, we lack an understanding of the way in which lower level visual, motor and attentional mechanisms contribute to the biases in looking behavior often observed in individuals with autism. Similarly, although there is evidence for deficits in overall motor coordination in autism, this work has not been extended to include eye movement. To our knowledge, there have been no attempts to compare motor control of eye movement with gross motor coordination and ability to perform skilled gestures (praxis). These functions are of particular developmental importance, as early sensory and motor abilities provide a scaffold for higher level skills such as social communication. If eye movements are inaccurate or slow, social information is lost along with the opportunity to learn from that particular social situation.
Using a battery of tasks, we studied the interactions among eye movement, visual motor integration, visual perception and both fine and gross motor skills. We examined associations between various aspects of the tasks to test whether atypical looking behavior observed in natural settings might be affected by fundamental visual motor deficits. We tested children with autism spectrum disorders (ASD) and typically developing age- and performance IQ-matched school-aged children who were recruited from an existing sample of children enrolled in studies of neural and cognitive development.
I will describe the significant group differences we found in several tasks as well as correlations in performance across eye and body movement, as well as in perceptual tasks. Taken together, these results suggest a fresh perspective that may explain some of the difficulties observed with eye contact and visual search often found in individuals with ASD.
2012-02-16
Control of dynamics of excitable networks
Jianxia Cui BioCircuits Institute, UCSD
http://biocircuits.ucsd.edu/
The spatiotemporal dynamics of neuronal systems remains a challenging and important topic in theoretical neuroscience. To understand complex dynamics, it is necessary to start from controllable systems, such as excitable chemical systems and small neuronal networks. In my talk, I will begin with the control of spatiotemporal dynamics of photosensitive Belousov-Zhabotinsky (BZ) systems. Due to their amenability to experimental control and theoretical analyses, photosensitive BZ systems have been serving as ideal model systems in advancing our understanding of complex networks. I will then cover the dynamics of two-neuron networks consisting of one spiking biological neuron and one computational model neuron coupled via dynamic clamp. I will introduce phase response curves (PRCs) that are used to analyze and predict dynamics of these small networks. Finally, I will propose an experimental design to map the synaptic connections among different types of neurons in real neuronal networks on neocortical slices, based on measured spatiotemporal dynamics. In the proposed study, two-photon laser scanning microscopy will be used to record cellular calcium dynamics of the networks, which will be controlled by 2-photon photostimulation uncaging techniques.
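As a small illustration of the PRC idea mentioned above, the sketch below estimates an empirical phase response curve from recorded spike times and the times of brief perturbations, using the unperturbed median period as the baseline. The exact estimation procedure used in the dynamic-clamp experiments may differ.

```python
import numpy as np

def phase_response_curve(spike_times, perturbation_times):
    """Empirical PRC from the spike times of a periodically firing neuron.

    For each brief perturbation, estimate the phase at which it arrived and the
    advance (>0) or delay (<0) of the following spike, relative to the median
    unperturbed period.
    """
    spike_times = np.asarray(spike_times, float)
    T0 = np.median(np.diff(spike_times))          # baseline period estimate
    phases, shifts = [], []
    for tp in perturbation_times:
        i = np.searchsorted(spike_times, tp) - 1  # spike preceding the perturbation
        if i < 0 or i + 1 >= len(spike_times):
            continue
        phases.append((tp - spike_times[i]) / T0)                          # stimulus phase
        shifts.append((T0 - (spike_times[i + 1] - spike_times[i])) / T0)   # phase shift
    return np.array(phases), np.array(shifts)
```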
Host: Gabriel Silva
2012-01-26
Probing Epilepsy In Human Cortex With Delayed Differential Equations
Claudia Lainscsek CNL, Salk Institute, and
Institute for Neural Computation, UCSD
http://cnl.salk.edu/
Time series analysis with nonlinear delay differential equations (DDEs) is a very powerful tool since it reveals spectral as well as topological properties of the underlying dynamical system. Here DDEs are used to identify different regimes in ECoG (electrocorticography) data. Electrocorticography is the practice of using electrodes placed directly on the exposed surface of the brain to record electrical activity from the cerebral cortex. ECoG is currently considered to be the "gold standard" for defining epileptogenic zones in clinical practice. A general form for the DDEs relates the derivative at a data point to previous data points of the signal. The linear terms of such a DDE correspond to the main frequencies in the signal. For n independent frequencies in the signal, 2n − 1 linear terms are needed. The nonlinear terms in the DDE are related to nonlinear couplings between the harmonic signal parts. DDEs can also be re-written as functions of dynamical higher order data correlations. These dynamical higher order data correlations can be seen as generalizations of Nth order data moment functions, such as the auto-correlation (2nd order moment) and the bi-correlation (3rd order moment). Comparing both versions of higher order data correlations can reveal useful information when analyzing non-linear data. The DDE framework can therefore be seen as a time-domain analysis tool akin to Fourier analysis that is highly robust against noise contamination and computationally fast.
In multichannel epilepsy ECoG data the nonlinear parts of the signal are of special interest. A simple nonlinear two-term DDE can be used to reliably identify artifacts as well as seizures through a large model error, and the two can be clearly distinguished by applying ICA to the DDE outputs. Such an analysis can also reveal the seizure onset channels of each seizure. The DDE outputs further show the three different stages present while a seizure is happening, as well as post-seizure states.
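For readers who want a concrete sense of the approach, below is a minimal sketch (not the authors' code) of fitting a simple two-term nonlinear DDE to a single channel by least squares. The delays, the derivative estimate, and the choice of terms are illustrative assumptions; the resulting model error is the kind of feature described above.

```python
# Minimal illustrative sketch: fit dx/dt(t) ~ a1*x(t - tau1) + a2*x(t - tau1)*x(t - tau2)
# to one ECoG channel by least squares; delays and model form are assumptions.
import numpy as np

def fit_two_term_dde(x, fs, tau1, tau2):
    """x: 1-D signal, fs: sampling rate in Hz, tau1/tau2: delays in samples."""
    dxdt = np.gradient(x) * fs                      # numerical derivative of the signal
    lag = max(tau1, tau2)
    x1 = x[lag - tau1 : len(x) - tau1]              # x(t - tau1)
    x2 = x[lag - tau2 : len(x) - tau2]              # x(t - tau2)
    y = dxdt[lag:]
    A = np.column_stack([x1, x1 * x2])              # one linear term, one nonlinear term
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    model_error = np.mean((y - A @ coeffs) ** 2)    # large error flags artifacts/seizures
    return coeffs, model_error
```

Sliding such a fit over short windows of each channel yields coefficient and error time series that could then be passed to ICA, in the spirit of the analysis described above.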
Host: Peter Rowat
2012-01-19
Sparse Non-Linear Denoising Of fMRI Data
Lars Kai Hansen Director, THOR Center for Neuroinformatics
Head of Section Cognitive Systems
DTU Informatics, Technical University of Denmark
http://www.imm.dtu.dk/~lkh
Location:
Swartz Center for Computational Neuroscience
SDSC East Building, EB185, UCSD
We investigate non-linear denoising of functional brain images by kernel principal component analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as "the pre-image problem". Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed. In many applications, including functional magnetic resonance imaging (fMRI) data, it is of interest to denoise a sparse signal. To meet this objective we investigate sparse pre-image reconstruction by a Lasso-type regularization. We find that sparse estimation provides better brain state decoding accuracy and a more reproducible pre-image. These two important metrics are combined in an evaluation framework which allows us to optimize both the degree of sparsity and the non-linearity of the kernel embedding. The latter result provides evidence of signal manifold non-linearity in the specific fMRI case study.
TJ Abrahamsen, LK Hansen. Sparse non-linear denoising: Generalization performance and pattern reproducibility in functional MRI. Pattern Recognition Letters 32(15):2080–2085 (2011).
PM Rasmussen, TJ Abrahamsen, KH Madsen, LK Hansen. Nonlinear denoising and analysis of neuroimages with kernel principal component analysis and pre-image estimation. NeuroImage, in minor revision (2012).
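As a rough orientation to the method (not the authors' implementation), the sketch below uses scikit-learn's kernel PCA with its built-in ridge-regression pre-image; in the work described above that pre-image step is instead solved with a Lasso-type (sparse) regularization. Data shapes, kernel, and parameters are illustrative assumptions.

```python
# Illustrative sketch: kernel PCA denoising with a standard (non-sparse) pre-image.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5000))        # e.g., 200 scans x 5000 voxels (toy data)

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-4,
                 fit_inverse_transform=True, alpha=1.0)
Z = kpca.fit_transform(X)                   # project scans into kernel feature space
X_denoised = kpca.inverse_transform(Z)      # ridge pre-image; a Lasso-type solver would
                                            # replace this step to obtain sparse pre-images
```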
Host: Scott Makeig
2011-12-08
Cognitive information dynamics
Mikhail Rabinovich UCSD BioCircuits Institute
http://biocircuits.ucsd.edu/rabin/
"Cognitive information dynamics"
The analysis of the temporal evolution of brain information is crucially important for the understanding of higher cognitive mechanisms in normal and pathological states. From the perspective of information dynamics, we will discuss working memory capacity, binding phenomena and some other functions of brain activity. In contrast with the classical description of information theory, brain information dynamics deals with problems such as the stability/instability of information flows, their quality, the timing of sequential processing, the top-down cognitive control of perceptual information, and information creation. In this framework, different types of information flow instabilities correspond to different cognitive disorders. On the other hand, the robustness of cognitive activity is related to the control of the information flow stability. We discuss these problems using experimental, computational and theoretical approaches, and we argue that cognitive activity is better understood considering information flows in the phase space (in contrast to physical–brain space) of the corresponding dynamical model. In conclusion we will consider some engineering applications.
2011-11-17
Learning-Dependent Modification Of Auditory Responses Across Forebrain Networks
Tim Gentner UCSD Dept. of Psychology, and Neurosciences Graduate Program
http://gentnerlab.ucsd.edu/
Sensory systems are preferentially biased to process natural signals that are most likely to carry relevant information. These biases are achieved through the hierarchical representation of increasingly high-dimensional stimulus features, and the learning-dependent association of specific features with specific behavioral goals. Surprisingly little is known about these processes at either the circuit or cellular level in the auditory system. I will discuss the coding of natural vocalizations across multiple auditory forebrain regions in a species of songbird, the European starling. I will propose a canonical cortical circuit, modified by learning, that combines behaviorally relevant and irrelevant signals to produce behaviorally informative representations in single neurons. At higher levels in the auditory system, acoustic features of natural signals that inform learned behavioral goals are coded with increased fidelity in the population correlation structure.
2011-11-03
Feedback Model Of Visual Perceptual Learning
Samat Moldakarimov Computational Neurobiology Laboratory
Salk Institute for Biological Studies
http://cnl.salk.edu
Perception of visual stimuli improves with practice. Specificity of the improvements for stimulus features suggested an early cortical site of neural adjustments, where receptive fields are small. However, neural changes in the primary visual cortex (V1) that may underlie visual perceptual learning are still unclear. Unlike perceptual learning in other sensory modalities, stimulus preferences of V1 neurons did not alter due to perceptual learning: V1 neurons responded preferentially to the same stimuli as before learning. Reports on size changes of receptive fields in V1 neurons were also controversial: one study reported that the receptive fields of V1 neurons did not alter due to learning, but another study found smaller receptive fields after learning.
Previously suggested models of visual perceptual learning based on plasticity in recurrent connections among V1 neurons failed to explain the observed stability of stimulus preference in V1 neurons, and also could not resolve the contradictions between the two studies. Here we present a model of visual perceptual learning in which interaction between V1 and higher cortical areas is a critical feature. We show in the model that learning results in changes in V1 neurons due to stronger feedback inputs from higher cortical areas. Perceptual learning in our model occurs without altering stimulus preferences of V1 neurons, as was observed in experiments. The model also resolves controversies observed in visual perceptual learning experiments and makes testable predictions.
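The logic can be caricatured in a few lines: if feedback from a higher area adds to a fixed feedforward tuning curve, learning can change V1 responses (by scaling the feedback) without moving each neuron's preferred stimulus. The toy sketch below only illustrates that logic and is not the authors' model; all numbers are arbitrary.

```python
# Toy illustration: stronger feedback changes V1 responses but not stimulus preferences.
import numpy as np

n_v1, n_high = 32, 8
pref = np.linspace(0, np.pi, n_v1, endpoint=False)            # fixed preferred orientations

def v1_response(theta, fb_gain, W_fb, r_high):
    feedforward = np.exp(np.cos(2 * (theta - pref)) / 0.3)     # fixed feedforward tuning
    feedback = fb_gain * (W_fb @ r_high)                       # stimulus-independent top-down drive
    return feedforward + feedback

rng = np.random.default_rng(1)
W_fb, r_high = rng.random((n_v1, n_high)), rng.random(n_high)
thetas = np.linspace(0, np.pi, 90)
before = np.stack([v1_response(t, 1.0, W_fb, r_high) for t in thetas])   # pre-learning
after = np.stack([v1_response(t, 3.0, W_fb, r_high) for t in thetas])    # stronger feedback
print(np.array_equal(before.argmax(0), after.argmax(0)))    # True: preferences unchanged
print(np.allclose(before, after))                           # False: responses did change
```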
2011-10-20
EEG In An Immersive Virtual Environment With Free Movement: Object Recognition And Theta Auto-Correlation
Joe Snider Poizner Laboratory, Temporal Dynamics of Learning Center, Institute for Neural Computation
People navigate novel, complex environments on a daily basis, and they are able to quickly and efficiently form representations that allow for accurate navigation and interaction. In this study we are particularly interested in the full behavior of subjects navigating a novel environment. To perform the experiment, the subjects donned virtual reality gear (a headset, an inertial orientation monitor, and real-time optical tracking) and a 64-channel EEG cap. They entered a virtual room containing a rich set of objects, which matched the dimensions of the real room (~15'x20') in which they were freely moving about. The experiment was done in two sessions over two consecutive days. On day one, after entering the virtual environment, subjects first freely explored the environment unsupervised for 10 minutes with no instruction except "explore the environment." Then, for five subsequent trials, opaque bubbles were placed around 39 objects in the room and the subjects walked up to one bubble at a time (indicated by turning green) and popped it by touching it with their hand to see the object hidden underneath. In part as a cover task, they indicated on a variable slider the "interest" they had in the object. On day two, there was no free exploration, but the subjects were presented with bubbles to pop, and after popping each bubble and seeing the object, the subjects indicated how certain they were that the object was the same one that had been there on day one by adjusting a slider. Of the 39 total objects, a random 13 were changed by rearranging their positions.
Behaviorally, subjects correctly identified 70-96% of the object changes. Strikingly, during the walking itself, we observed correlations of the theta wave recorded over midline frontal, central and parietal areas with the allocentric position of the subject in the room. These EEG signals may represent a high-level combination of hippocampal navigation-related cells with parietal cortex-related signals. These navigation-related correlations from the first day were then related to behavior on the second day: stronger spatial correlations on day one corresponded to better memory performance on day two.
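A hedged sketch of the kind of analysis implied here: band-pass an EEG channel into the theta range, take its amplitude envelope, and correlate that with one coordinate of the tracked allocentric position. Band edges, sampling rate, and variable names are assumptions, not details from the study.

```python
# Illustrative sketch: correlate theta-band amplitude with tracked position.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256.0                                                     # assumed EEG sampling rate (Hz)
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")   # 4-8 Hz theta band

def theta_position_corr(eeg_channel, position_coord):
    """Both inputs are 1-D arrays resampled to the same rate and length."""
    theta = filtfilt(b, a, eeg_channel)
    envelope = np.abs(hilbert(theta))                          # instantaneous theta amplitude
    return np.corrcoef(envelope, position_coord)[0, 1]
```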
Host: Howard Poizner
2011-10-06
Sparse Coding Networks And Compressed Sensing In Neural Systems
Christopher Rozell Georgia Institute of Technology
http://users.ece.gatech.edu/~crozell/
Many recent results in the signal processing community have shown that signal models based on low-dimensional geometric structure such as sparsity (or manifolds) can be very powerful for many applications. For example, it is clear now that a whole host of inverse problems can be solved more effectively by taking advantage of this structure, with the recent example of compressed sensing (i.e., recovering signals from highly undersampled incoherent measurements) gaining significant attention. Interestingly, neural coding hypotheses based on these same sparse signal models have demonstrated an ability to account for observations such as receptive field properties in sensory systems. In this talk I will discuss our previous work on implementing sparse coding models in biophysically plausible architectures. We will show that beyond simply accounting for receptive field structure, these networks can account for observed response properties of V1 cells. Specifically, I will highlight our recent results showing that these models can account for a wide variety of non-classical receptive field effects reported in V1. I will also highlight our preliminary results and ongoing work drawing connections between neural computation and the results of compressed sensing. In particular, we will briefly discuss our contributions to the compressed sensing literature that can be used in conjunction with sparse coding networks to model two distinct systems: communication bottlenecks in sensory pathways (e.g., the optic nerve) and recurrent networks for high-capacity sequence memory.
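As a concrete (and heavily simplified) picture of a dynamical sparse coding network of the sort discussed, the sketch below runs leaky-integrator units with lateral inhibition toward a soft-thresholded sparse code of an input. The dictionary, step size, and threshold are illustrative assumptions rather than details of the speaker's models.

```python
# Illustrative locally competitive sparse coding sketch:
# minimize 0.5*||x - D a||^2 + lam*||a||_1 with neuron-like dynamics.
import numpy as np

def sparse_code(x, D, lam=0.1, dt=0.01, n_steps=500):
    """x: (m,) input; D: (m, n) dictionary with unit-norm columns."""
    n = D.shape[1]
    G = D.T @ D - np.eye(n)            # lateral inhibition between overlapping features
    b = D.T @ x                        # feedforward drive
    u = np.zeros(n)                    # internal (membrane-like) states
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # thresholded activations
        u += dt * (b - u - G @ a)      # leaky integration with competition
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```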
Biography: Dr. Christopher Rozell is an Assistant Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. Dr. Rozell received a B.S.E. in Computer Engineering and a B.F.A. in Performing Arts Technology (Music Technology) in 2000 from the University of Michigan. He attended graduate school at Rice University where he was a Texas Instruments Distinguished Graduate Fellow, receiving the M.S. and Ph.D. in Electrical Engineering in 2002 and 2007, respectively. He spent the summer of 2002 as a researcher at MIT Lincoln Laboratory, and following graduate school was a postdoctoral research fellow in the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley. Dr. Rozell joined the Georgia Tech faculty in July 2008, where he is affiliated with the Laboratory for Neuroengineering and the Center for Signal and Image Processing. His current research interests include constrained sensing systems, sparse representations, statistical signal processing, and computational neuroscience.
Host: Todd Coleman, tpcoleman@ucsd.edu
2011-09-22
A Team Decision Theory Approach To The Design Of Brain-Machine Interfaces
Todd Coleman Department of Bioengineering, UCSD
http://coleman.ucsd.edu/
In this presentation, we espouse an interpretation of brain-machine interfaces as two agents cooperating to achieve a common goal: a bi-directionally noisy coupling between the user and the external device. With this viewpoint, we address three key questions that are of crucial importance to elicit superior performance:
- what feedback should be delivered to the user;
- how the user should react to the feedback and its intended objective to imagine the subsequent desired control command;
- how the external device should sequentially map its recorded neural signals to a control action.
We discuss designing the protocol of interaction between the human and the external device through the lens of team decision theory, decentralized control theory, and feedback information theory. As an exemplar application, we consider three brain-machine interfaces. We formulate, solve, and implement team decision problems pertaining to (i) neural control of a robotic arm, (ii) exact neural specification of a smooth path in two dimensions, and (iii) transfer of expertise in game strategy from a brain to an artificial intelligence algorithm without the subject volitionally performing - or imagining - motor outputs. Throughout the talk, we emphasize the need for not only a solid theoretical foundation but also a solution with form-factor properties that allow it to be easily implemented by a human. Lastly, we remark on how this viewpoint is applicable to general human-machine interface systems and to more general networks beyond simply one human and one computer.
Biography: Todd P. Coleman received the B.S. degrees in electrical engineering (summa cum laude) and computer engineering (summa cum laude) from the University of Michigan, Ann Arbor, in 2000, along with the M.S. and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology (MIT), Cambridge, in 2002 and 2005, respectively. During the 2005-2006 academic year, he was a postdoctoral scholar at MIT and Massachusetts General Hospital in computational neuroscience. From July 2006 to June 2011, he was an Assistant Professor in the ECE Department and Neuroscience Program at UIUC. Since July 1, 2011, he has been an Associate Professor of Bioengineering in the Jacobs School of Engineering and affiliated with the Institute for Neural Computation at UCSD.
His core research interests include applied probability (within information theory, control theory, and statistics) as well as neuroscience. He applies these methods to understanding causal influences in (neuronal/communication/social) networks, designing brain-machine interfaces from a team decision theory viewpoint, and designing novel non-invasive and invasive flexible electronics systems to probe and interrogate brain function.
In Fall 2008, he was a co-recipient of the University of Illinois College of Engineering's Grainger Award in Emerging Technologies for development of a novel, practical timing-based technology. Beginning Fall 2009, Coleman has served as a co-Principal Investigator on a 5-year NSF IGERT interdisciplinary training grant for graduate students, titled "Neuro-engineering: A Unified Educational Program for Systems Engineering and Neuroscience". Coleman also has been serving on the DARPA ISAT study group for a 3-year term, beginning Fall 2009.
2011-08-05
Even closer towards a theory of learning and levels, and why we need such a theory
Tony Bell Redwood Center for Theoretical Neuroscience, UC Berkeley, and Temporal Dynamics of Learning Center, UC San Diego
Host: Terry Sejnowski
2011-06-09
On-Line Visuomotor Control In Parkinson's Disease
Jamie Lukos Poizner Lab
Posterior parietal cortices are known to be critical for online visuomotor control, but the role of basal ganglia-cortical loops is poorly understood. To investigate this issue, we are studying patients with Parkinson’s disease (PD), on and off dopaminergic therapy, while they reach for and grasp a virtual rectangular object whose orientation occasionally rapidly changes during the reach. Our previous studies have led us to hypothesize that PD subjects will be most impaired in making corrective responses to the object perturbation when they cannot see their hand early in the movement, and that increasing tonic levels of dopamine will not reverse these impairments. Subjects grasped the virtual objects using two three-degree-of-freedom force-feedback robots (PHANToMs) that provided haptic interaction and feedback. Hand movements, eye movements and EEG were simultaneously recorded. In 25% of trials, the object was rotated during the reach and subjects had to adjust the size of their hand opening (aperture) online (perturbed trials). Moreover, on half of the trials, visual feedback of the hand was blocked from movement onset to 2/3rds of the reach. Preliminary results from 3 PD patients and 3 controls indicate that controls successfully adapted their grasp much more often than either PD off-meds or PD on-meds (71.7 vs. 48.4 and 45.1% correct grasps, respectively). For successful grasps, Fig. 1A shows individual trajectories of the thumb and index finger for one control and one PD patient. In the unperturbed trials, the control showed clear modulation of grip aperture over the course of the reach, but aperture modulation was nearly absent for the PD patient on or off medications. During the perturbed trials, the control subject generated a smooth correction throughout the reach. In contrast, the PD patient generated a segmented corrective response, as if the adaptation was a separate event from hand transport. As expected, the control subject’s reach velocity was higher than the PD patient’s in all conditions. Fig. 1B indicates that blocking visual feedback of the hand greatly impaired the PD patient when off medication. The patient’s corrective response to the perturbation occurred more often after vision of the hand was restored. Patterns of eye-hand coordination indicate that, unlike controls, PD subjects look at their hands throughout the reach, thus operating in a mode of visual guidance rather than predictive control. These initial data indicate that PD patients show marked motor control deficits in adapting to sudden environmental perturbations; that these deficits become particularly pronounced when PD patients cannot see their moving limb; and that dopamine repletion may partially remediate corrective response control to environmental perturbations when vision of the limb is removed. The association of the cortical EEG with these eye and hand dynamics is currently being analyzed.
Host: Gert Cauwenberghs
2011-05-19
Mapping multisensory representations of peripersonal space
Ruey-Song Huang
This talk will present our recent progress in mapping multisensory representations of peripersonal space using fMRI, covering both technical developments and scientific findings. Recently, we have developed wearable techniques for high-density and/or wide-range tactile stimulation in the MRI scanner. Sixty-four channels (expandable to 128) of computer-controlled air puffs can be delivered via plastic tubes and nozzles embedded in an air suit, including a face mask, turtleneck, gloves, and pants. These wearable techniques open the possibility of presenting more complex tactile stimuli with programmable spatial-temporal patterns on the body surface, e.g., a 2-D tactile display or tactile apparent motion.
Multiple two-condition block-design scans revealed a high-level somatotopic homunculus consisting of the parietal face, lip, finger, and shoulder areas in the superior parietal lobule. Retinotopic mapping using a phase-encoded design and wide-field visual stimuli (masked videos or looming objects) further revealed aligned visual-tactile maps in the same areas. A region of lower visual field representation in the post-central sulcus partially overlaps with the parietal finger area, which is anterior and lateral to the parietal face/lip areas. Another region of lower visual field representation, superior and medial to the parietal face area, partially overlaps with the parietal shoulder area. However, regions of upper visual field representation were restricted to the parietal face area. We suggest that aligned multisensory homunculi may play important roles in combining visual and tactile information to facilitate movements in the peripersonal space (e.g., eating involves hand-to-mouth coordination in the lower visual field).
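For reference, below is a hedged sketch of a generic phase-encoded mapping analysis of the sort used in such studies: each voxel's response phase at the stimulus cycling frequency maps to a stimulated location on the body or in the visual field. The number of stimulus cycles and the signal-to-noise measure are illustrative assumptions.

```python
# Illustrative phase-encoded mapping sketch: phase and relative amplitude at the
# stimulus frequency for every voxel in one run.
import numpy as np

def phase_encoded_map(voxel_ts, n_cycles=8):
    """voxel_ts: (n_voxels, n_timepoints) array; n_cycles: stimulus cycles per run."""
    spectrum = np.fft.rfft(voxel_ts, axis=1)
    stim = spectrum[:, n_cycles]                      # component at the stimulus frequency
    phase = np.angle(stim)                            # preferred location around the cycle
    noise = np.mean(np.abs(np.delete(spectrum, [0, n_cycles], axis=1)), axis=1)
    return phase, np.abs(stim) / noise                # phase map and a crude SNR map
```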
Host: Gert Cauwenberghs
2011-05-05
Wireless non-contact EEG
Yu Mike Chi
EEG technology has remained an indispensable tool for brain research as a result of its simplicity and low cost. Despite continual advancements towards decoding EEG signals for a multitude of applications, including brain-computer interfaces, medical diagnostics and consumer applications, widespread adoption of EEG technology has yet to occur. Conventional EEG sensors have always necessitated extensive preparation with wet electrodes and even scalp preparation, precluding their use outside of laboratory conditions. In light of this limitation, dry electrodes, which do not require conductive gels, and non-contact electrodes, which can operate through hair, have been studied as enablers of practical, mobile EEG platforms. This talk will focus on a review of dry electrodes and the development of a new type of non-contact electrode. Previous attempts at building non-contact electrodes have been hampered by the limitations of the standard amplifiers available on the market. In this work, we have designed a fully custom integrated sensor front-end specifically to bypass many of the noise and accuracy problems encountered thus far.
http://www.isn.ucsd.edu/pubs/rbme10.pdf
Host: Gert Cauwenberghs
2011-04-07
Modeling natural facial behavior
Marian Stewart Bartlett
Computational Face Group
Machine Perception Lab
INC and Calit2, UCSD
http://mplab.ucsd.edu/~marni/
This talk reviews recent research in my lab modeling natural facial expression with automated systems. Automated systems enable new research into expression dynamics that was previously infeasible with manual coding, or which would have required application of electrodes to the face, which can influence facial behavior. The talk first describes projects on measurement of dynamic coupling of facial behavior to measure spontaneous mimicry, as well as detection of deception. We show that facial mimicry correlates with the ability to detect when a person is lying. This had long been hypothesized by embodied theories of cognition but never previously shown. These findings were made possible by the use of novel computer vision techniques that allowed us to obtain rich quantitative information about facial dynamics. The talk next describes development of interventions for children with autism. The interventions employ computer vision systems to train facial expression production, provide practice in facial mimicry, and immediate feedback on the child's facial expressions. Finally, if time permits, I will review our work on children's facial behavior during problem solving. Clustering techniques are employed to demonstrate differences in expression dynamics between older and younger children during problem solving.
Host: Gert Cauwenberghs
2011-03-24
Multimodal Imaging Of Resting-State Functional Connectivity
Tom Liu Center for Functional MRI
Department of Radiology, UCSD
http://fmri.ucsd.edu/people/ttliu.html
In the absence of an explicit task, the "resting" brain exhibits large spontaneous fluctuations that exhibit coherence within multiple functional networks. To date, our knowledge of these resting-state networks has come primarily from measurements of fluctuations in the blood oxygenation level dependent (BOLD) signal used in functional magnetic resonance imaging (fMRI). However, because the BOLD signal is a complex function of both neural and vascular factors, the interpretation of changes in resting-state BOLD connectivity is not always straightforward. For example, we have found that caffeine significantly reduces resting-state BOLD connectivity in multiple networks, but it is not clear whether this reduction reflects a true decrease in neural connectivity versus a secondary effect of the caffeine-related decrease in cerebral blood flow. To resolve this question, we are using simultaneously acquired EEG and fMRI measures, as well as a linked set of MEG measures, to determine the extent to which fMRI measures reflect underlying changes in neuroelectric connectivity. In this talk, I will describe approaches for assessing connectivity using the various modalities and present preliminary results comparing the multimodal measures.
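For orientation, one common way to quantify resting-state connectivity (whatever the modality) is a seed-based correlation map; the minimal sketch below assumes preprocessed, nuisance-regressed time series and is not a description of the specific pipelines used in this work.

```python
# Illustrative seed-based connectivity sketch: Pearson correlation of every voxel
# (or sensor) time series with the mean time series of a seed region.
import numpy as np

def seed_connectivity(ts, seed_idx):
    """ts: (n_signals, n_timepoints); seed_idx: indices of the seed signals."""
    seed = ts[seed_idx].mean(axis=0)
    ts_z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
    seed_z = (seed - seed.mean()) / seed.std()
    return ts_z @ seed_z / ts.shape[1]   # correlation of each signal with the seed
```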
Host: Gert Cauwenberghs
2011-03-10
Determining Functional Connections Among Neurons Based Upon Their Activity Patterns
Bill Kristan Section of Neurobiology Division of Biological Sciences, UCSD
http://www.biology.ucsd.edu/labs/kristan/
My major research interest is finding neuronal circuits that underlie behavior, using the nervous system of the medicinal leech. We use electrophysiological recordings and voltage-sensitive dye imaging to determine which leech neurons are active during several leech behaviors: swimming, crawling, shortening, and local bending. We are now using these recordings to identify all the neurons and to predict the connectivity among them. We use a variety of correlation techniques to predict connections between pairs of neurons, and intracellular recordings to test our predictions. These techniques should be useful for all kinds of multi-unit recordings, including calcium imaging and multi-unit arrays.
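One simple example of the kind of correlation technique mentioned (an illustration, not necessarily the lab's actual method) is the peak of the normalized cross-correlogram between two activity traces, whose lag can hint at the direction of a putative connection; the window and threshold below are arbitrary.

```python
# Illustrative pairwise cross-correlation sketch for predicting candidate connections.
import numpy as np
from scipy.signal import correlate

def candidate_connection(trace_a, trace_b, max_lag=50, threshold=0.3):
    """trace_a, trace_b: equal-length 1-D activity traces (e.g., imaging signals)."""
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    xcorr = correlate(a, b, mode="full") / len(a)     # normalized cross-correlogram
    mid = len(a) - 1                                  # index of zero lag
    window = xcorr[mid - max_lag : mid + max_lag + 1]
    lag = int(np.argmax(np.abs(window))) - max_lag
    peak = window[lag + max_lag]
    return abs(peak) > threshold, lag, peak           # candidate link; lag sign hints at direction
```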
Host: Gert Cauwenberghs
2011-02-24
Engineering advances in mapping functional connectivity in cellular networks
Marius Buibas Silva Laboratory, Department of Bioengineering, UCSD
We have developed a theoretical framework for estimating causal functional connectivity in neuronal cellular networks from experimental data; the framework employs both parametric and non-parametric approaches and is implemented on parallel graphics processing units (GPUs). This talk will discuss the theoretical methods, experimental requirements, and performance of this framework. Additionally, I will present control-theoretic tools to measure network-level stability, observability, and controllability, with implications for understanding disease and the actions of remedies on network dynamics. Finally, I will discuss the problem of uniqueness of, or degeneracy in, functional connectivity estimates, with implications for the interpretability of experimental data.
Host: Gert Cauwenberghs
2011-02-03
Active Noise Control and Biomedical Signal Processing
Muhammad Tahir Akhtar The University of Electro-Communications, Chofu-shi, Tokyo, Japan
Sabbatical Visiting Scholar, INC
Abstract: Dr. Akhtar will present an overview of recent research in adaptive filtering for single-channel and multi-channel active noise control (ANC), and extensions to biomedical signal processing. We consider the following problems in ANC: 1) effect of measurement noise in single-channel ANC systems, 2) online secondary path modeling, 3) online acoustic feedback path modeling and neutralization, 4) ANC for impulse-like noise sources, and 5) effect of uncorrelated disturbance at the error microphone. The talk will focus on our recent results mitigating uncorrelated disturbance in ANC systems.
He will also present recent results extending these signal processing techniques to electroencephalography (EEG), mainly for artifact removal using independent component analysis (ICA) and blind source separation (BSS). Our focus is on ICA and wavelet based approaches for de-noising EEG signals, and on maintaining continuity in BSS for long EEG recordings. Our current research at INC is directed to further extending the effectiveness and efficiency of these algorithms for EEG and biomedical signal processing.
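As background for the adaptive-filtering theme, here is a minimal least-mean-squares (LMS) sketch, the basic building block behind the filtered-x LMS algorithms used in active noise control; the actual ANC systems discussed also model the secondary acoustic path, which is omitted here, and all parameters are illustrative.

```python
# Minimal LMS adaptive filter sketch (secondary-path modeling omitted).
import numpy as np

def lms(reference, desired, n_taps=32, mu=0.01):
    """Adapt FIR weights so the filtered reference tracks the desired signal."""
    w = np.zeros(n_taps)
    error = np.zeros(len(desired))
    for n in range(n_taps, len(desired)):
        x = reference[n - n_taps : n][::-1]     # most recent n_taps reference samples
        y = w @ x                               # filter output (noise estimate)
        error[n] = desired[n] - y               # residual after cancellation
        w += 2 * mu * error[n] * x              # LMS weight update
    return w, error
```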
Biography: Muhammad Tahir Akhtar received the B.S. degree in electrical engineering from the University of Engineering and Technology Taxila, Pakistan, in 1997, the M.S. degree in systems engineering from Quaid-i-Azam University, Islamabad, Pakistan, in 1999, and the Ph.D. in electronic engineering from Tohoku University, Sendai, Japan, in 2004. From 2004 to 2005, he was a COE postdoctoral fellow at the Department of Electronic Engineering, Tohoku University.
Currently he is an Assistant Professor at the Center for Frontier Science and Engineering (CFSE), The University of Electro-Communications, Tokyo, Japan, a Special Visiting Researcher at the Center for Research and Development of Educational Technology (CRADLE), Tokyo Institute of Technology, Tokyo, Japan, and a Sabbatical Visiting Scholar at INC. His research interests include active noise control, adaptive signal processing, blind source separation and biomedical signal processing. Dr. Akhtar won the Best Student Paper Award at the IEEE 2004 Midwest Symposium on Circuits and Systems, Hiroshima, Japan.
Host: Gert Cauwenberghs
2010-06-17
the musical brain
Carol Lynne Krumhansl Cornell University
The talk presents research showing that the musical brain contains information from the very abstract to the very concrete. An empirical test of a recent music-theoretic proposal concerning musical tension demonstrates that the cognitive representation of musical structure includes hierarchical trees similar to those proposed for language, and that deeply theorized properties of music link to cognitive processes. At the other extreme, studies on music recognition suggest a great deal of surface information is encoded in memory. Very short excerpts of popular music can be identified with artist, title, and release date. Even when an excerpt is not identified, emotion and style judgments are consistent. This suggests that musical memory is extremely detailed, has an extraordinarily large capacity, and also contains schematic information for identifying emotional content and style.
Bio: Carol Lynne Krumhansl received a B.A. and an M.A. in mathematics from Wellesley College and Brown University, respectively. In 1978, she received a Ph.D. in mathematical psychology from Stanford University, primarily under the supervision of Roger Shepard. Since 1980, she has been on the faculty of Cornell University, where her research has focused on music cognition. The major strand of her research is the cognition of tonality, the primary organizing principle of Western music. She is the author of Cognitive Foundations of Musical Pitch. Other research has included studies of musical rhythm and timbre, dance, musical performance, emotion, contemporary proposals in music theory, and the neuroscience of music. She is on sabbatical leave in San Diego for the academic year 2010 - 2011.
Host: Howard Poizner
2010-06-10
from sleep to consciousness in Drosophila
Ralph J. Greenspan Kavli Institute for Brain and Mind, UCSD
The cognitive potential of the fruit fly Drosophila melanogaster has been extensively probed in recent years and, as a result, our estimation of its sophistication has grown considerably. How do they do it? Do these invertebrates accomplish such feats by an altogether different mechanism than we do? Our research addresses these questions from the standpoint of probing brain states in the fruit fly from the deepest sleep to the highest state of alertness, using a combination of genetic, physiological, and behavioral approaches.
At the molecular level, the fruit fly shares many features of sleep regulation with mammals, of which the dopaminergic and EGFR signal transduction systems are prominent. In the realm of higher arousal, the fruit fly displays many of the key elements of attention: orientation, expectancy, stimulus discrimination and suppression, and sustainability. Finally, they share a critical physiological feature with attention and consciousness states in humans: an increased degree of coherence (phase-locking) among multiple brain regions during the attention-related task.
While it is not productive to spend too much time worrying about whether fruit flies are conscious, they may possess some of the same requisite, underlying mechanisms, and thus are worthy of further study in this direction.
2010-05-27
this is your brain on politics
Darren Schreiber Department
In political science, we have long had low levels of explanatory power with conventional models. Accounting for just a quarter of the variance is usually a tremendous accomplishment and often requires many independent variables and sophisticated statistical techniques. Two dogmas of the discipline, the behaviorist approach and rational choice theory, preclude biological explanations. In this talk, however, I will review a variety of results that show how some of the central phenomena of interest in the field can be accounted for using work based in genetics and neuroscience. I'll discuss work on race, political sophistication, voter turnout, and partisanship. And I will show how we can use fMRI to predict your political party affiliation with shocking accuracy, along with evidence of the biological basis of egalitarianism.
2010-05-13
building brains
Steve Furber Computer Science Department, University of Manchester
Computer technology has advanced spectacularly since the first program was executed by the Manchester 'Baby' machine on June 21, 1948, but if this progress is to be sustained there are major challenges ahead in the areas of transistor predictability and reliability and in the exploitation of massively-parallel computing resources. Biology has solved both of these problems, but we don't understand how those solutions function at the level of information processing. Two questions arise from this line of thinking:
* Can massively-parallel computers be used to accelerate our understanding of brain function?
* Can our growing understanding of brain function point the way to more efficient, fault-tolerant computation?
While these questions remain so far unanswered, they suggest a line of investigation that has been recognized under the Grand Challenge of 'Building Brains'.
Bio: Dr. Furber received his B.A. degree in Mathematics in 1974 and his Ph.D. in Aerodynamics in 1980 from the University of Cambridge, England. From 1980 to 1990 he worked in the hardware development group within the R&D department at Acorn Computers Ltd, and was a principal designer of the BBC Microcomputer and the ARM 32-bit RISC microprocessor, both of which earned Acorn Computers a Queen's Award for Technology. Upon moving to the University of Manchester in 1990 he established the Amulet research group which has interests in asynchronous logic design, power-efficient computing, and neural systems engineering where the major activity is the SpiNNaker project. This project's focus is on building a massively-parallel chip multiprocessor system for modeling large systems of spiking neurons in real time. The ultimate goal is to build a machine that incorporates a million ARM processors linked together by a communications system that can achieve the very high levels of connectivity observed in biological neural systems. Such a machine would be capable of modeling a billion neurons in real time (which is still only around 1% of the human brain).
Host: Qualcomm CTO Dr. Roberto Padovani
2010-04-29
virtual grasping in Alzheimer's disease
Joe Snider, Dongpyo Lee, Deborah Harrington, Howard Poizner Institute for Neural Computation, UCSD
We will present data from an ongoing study into the nature of the neural and behavioral deficits of patients with Parkinson's disease (PD). We have hypothesized that PD motor deficits are of two distinct types, one due to loss of gain resulting in small and slow movements, and the other due to loss of precise, differentiated basal ganglia function resulting in poorly coordinated movement. We further hypothesized that dopamine replacement therapy may remediate the former but not the latter type of deficit. We tested this hypothesis using a novel paradigm in which subjects used two haptic robotic devices to reach to and grasp virtual objects. The objects had different dynamic properties and spatial orientations relative to gravity. 21 PD patients, on and off dopamine medication, and 24 age-matched controls have been tested. PD patients off medication showed significantly reduced peak velocities during the reach. In addition, they inappropriately timed and coordinated the opening of their fingers during the reach with the transport and changes in orientation of their arm. After touching the object, subjects had to switch their action from translating the hand to lifting the object, and that switch was significantly delayed in PD patients. During the lift, PD patients were unable to maintain the specified lift trajectory, a task requiring coordination of the entire hand-arm system. Dopamine replacement therapy significantly increased patients' peak reach velocities and the squeeze forces used, but minimally ameliorated their coordination deficits. Thus, repletion of dopamine in the degenerated basal ganglia is not sufficient to restore patterns of neuronal firing required to support coordinated sensorimotor processing.
In a second phase of the study, these same subjects on and off dopamine medication performed a finger sequencing task during fMRI. In collaboration with Deborah Harrington's group, we will be correlating disease-related patterns of brain activity with the behavioral deficits shown in the task described above.
2010-04-15
brain-computer interaction
Thorsten Zander Technische Universitaet Berlin, and INC, SCCN
The introduction of modern methods from machine learning to the field of brain-computer interfaces (BCIs) has reduced the typically high level of effort required to use a BCI-based system, thereby increasing its range of usability, efficiency, and joy of use. I will present our work on the first hybrid BCI, combining gaze control with BCI, and the first passive BCI, which removes the need for focused volitional control, incorporated into a game-based human-machine system. The results show that BCI-based technology is capable of detecting covert aspects of user state, i.e., aspects not detectable from external measures of the user's behavior, for the optimization of human-machine systems. In particular, our work on passive BCI with SCCN investigated a covert aspect of user state by detecting bluffing in a game context. These results and their impact on cognitive neuroscience research and human-machine interactive systems demonstrate that BCI technology can be used beneficially beyond applications for neural prostheses, inspiring a broadening of the initially restricted definition and purposes of BCI.
Baernreuther B., Zander, Reissland, Kothe, Jatzev, Gaertner, Makeig S.: Access to covert aspects of user intentions: Detecting bluffing in a game context with a passive BCI. Fourth International BCI Meeting, Carmel, CA, June 2010.
Pfurtscheller, Allison, Bauernfeind, Brunner, Solis-Escalantes, Scherer Zander: The Hybrid BCI. Frontiers in Neuroprosthetics, 2010.
Zander T.O., Gaertner M., Kothe C., Vilimek R.: Combining Eye Gaze Input with a Brain-Computer Interface for Touchless Human-Computer Interaction. International Journal of Human-Computer Interaction, in press.
Zander T.O., Kothe C., Jatzev S., Gaertner M.: Enhancing Human-Computer Interaction with input from active and passive Brain-Computer Interfaces. In Tan, Nijholt (Eds.): Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction, in press.
2010-04-01
ERG and electrophysiology of the retina
Gabriel Silva Jacobs Faculty Fellows Professor of Bioengineering, Departments of Bioengineering and Ophthalmology, UCSD
Electroretinography (ERG) is a non-invasive method that allows measuring the global electrophysiological response of the neural sensory retina. It can be used both for studying neurophysiology and for characterizing and diagnosing diseases associated with neural retinal dysfunctions. Depending on the specific method used, the ERG can provide information on different cell types in the retina as population averages or more restricted geometric localizations. This talk will introduce some of the methods involved, focusing on neurobiological and engineering considerations, and will discuss the use of the ERG to computationally isolate the full time course of the pure photoreceptor neuron population response from the full field ERG.
2010-03-25
wireless EEG BCI
Yijun Wang, Yu-Te Wang and Tzyy-Ping Jung Swartz Center for Computational Neuroscience, INC, UCSD
Transitioning brain-computer interfaces (BCI) from laboratory demonstration to real-life applications poses severe challenges to the BCI community [1][2]. With advances in biomedical sciences and electronic technologies, the development of mobile and online BCI has received increasing attention in the past decade. To implement a mobile BCI with online processing, a mobile terminal such as a mobile phone or a PDA presents an ideal platform for data transmission, signal processing, and feedback presentation. In this chalk talk we present an online BCI based on a mobile and wireless EEG acquisition module and a cell phone, and discuss implications of this BCI platform technology as an enabling technology for interactive cognitive neuroscience and clinical applications in neuroengineering.
[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, no. 2, pp. 767-791, 2002.
[2] Y. Wang, X. Gao, B. Hong, and S. Gao, "Practical designs of brain-computer interfaces based on the modulation of EEG rhythms", in B. Graimann, G. Pfurtscheller (Eds.) Invasive and Non-Invasive Brain-Computer Interfaces, Springer, The Frontiers Collection, 2009.
2010-02-18
Towards Neocortical Vision In Silicon
Gert Cauwenberghs Institute for Neural Computation, UCSD
We are embarking on an exciting journey in our continued and renewed efforts, with the DARPA Neovision2 program, towards reverse engineering the visual system in silicon. I will share the visions and plans of our team that spans the two coasts and the spectrum between neuroscience and neuroengineering. I will also briefly present a scalable approach to realizing locally dense and globally sparse connectivity in large-scale reconfigurable neuromorphic systems, towards a real-time and low-power silicon model of neocortical vision with over a million neurons and a billion synapses.
2010-02-04
motor cortex dynamics
Terry Sejnowski Institute for Neural Computation, UCSD
Although many neurons in the primary motor cortex (M1) project directly to the spinal cord, how they control movements is not yet understood. Some M1 neurons represent intrinsic dynamical variables such as muscle tensions, whereas other neurons code for extrinsic kinematic variables such as movement trajectories. Hiro Tanaka and I have reconciled these observations by showing that the equations of motion governing reaching simplify in spatial coordinates. The performance of human-machine interfaces might be improved by computing joint torques from neural activity in M1 using a spatial reference frame.
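For readers who want the starting point of this kind of argument, the textbook rigid-body equations (an illustrative reference form, not necessarily the exact formulation in the work described) read as follows in joint and in spatial (hand) coordinates:

```latex
% Joint-space dynamics of the arm (q: joint angles, tau: joint torques)
\tau = M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q)

% The same dynamics written at the hand in spatial coordinates x, with Jacobian J(q)
F = \Lambda(x)\,\ddot{x} + \mu(x,\dot{x})\,\dot{x} + p(x), \qquad \tau = J(q)^{\top} F
```

In this notation, the suggestion above roughly amounts to decoding in the spatial (x) frame and then mapping the result back to joint torques through the Jacobian transpose.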