


Poster Session

AUDITION
MOLECULAR / CELLULAR LEVEL
CIRCUITS
SPATIAL NAVIGATION
MEMORY
VISION
SENSORI-MOTOR / DECISION MAKING
COGNITION AND SLEEP
MODELING / THEORY
NEUROMORPHIC / TECHNOLOGY

 

AUDITION


Audio-Vocal Feedback in Bat Biosonar as a Model for Embodied AI
Rolf Mueller,
Virginia Tech
Abstract: Bat biosonar has been one of the earliest model systems in neuroethology, mostly due to the prominence of the auditory system in the animals' brains and the straightforward nature of a number of sonar-sensing tasks such as target ranging or direction finding. However, the most admirable capabilities of bat biosonar are unlikely to be explained in terms of the simple sensory paradigms that have accounted for the vast majority of neuroscience experiments in bats. A notable example of these challenging tasks is navigation in complex natural environments that bats with sophisticated biosonar systems have mastered. When navigating in dense vegetation, for example, bats have to base their motor control on echoes that originate from many reflecting facets that are referred to as "clutter echoes" in sonar engineering. The individual waveforms of clutter echoes are exceedingly hard to predict and hence have to be regarded as random. Nevertheless, the bats must be able to obtain detailed and reliable information on their environments from these input signals. The biosonar systems that are able to deliver this sensory performance are characterized by a high degree of integration of functions across the computational and the physical domain. Notable biosonar functions in the physical domain include the time-frequency shapes of the emitted biosonar pulses and the time-variant geometries of the emitting and receiving structures ("noseleaves" and pinnae). All these physical domain features are under neural control, most likely in the form of feedback loops that rely on the received echoes as input. Hence, the biosonar system of bats could be understood as an "embodied AI" system that derives a substantial portion of its performance from synergies due to a controlled, dynamic integration of the physical and computational domains. Understanding these synergies requires insights into the stimulus ensembles of demanding biosonar tasks and how they shape the function of the involved feedback circuits. To obtain these insights, we are exploring the sensory world of bats that navigate in dense vegetation with a biomimetic sonar system and use AI data analytics to find meaningful variations in these data sets that can inform the design of neurophysiological experiments.

Auditory-Related Function and Connectivity of the Human Pulvinar
Alexis Simons,
Washington University in St. Louis
Abstract: The pulvinar nucleus of the thalamus has been widely theorized to play a crucial role in visual attention and information integration between brain regions. The pulvinar is known to be widely connected to auditory processing brain regions in primates, but a lack of electrophysiological sampling of the human pulvinar has limited our current understanding of its role in auditory processing. To build on our current understanding, we analyzed the electrophysiology of the human pulvinar in eleven adult neurosurgical patients. As part of their standard treatment procedure, all patients were implanted with stereoelectroencephalography (sEEG) electrodes that included contacts located in the pulvinar and brain regions involved in auditory processing, such as the superior temporal gyrus and Heschl’s gyrus. To evaluate auditory-related pulvinar function, participants underwent a passive auditory language task while auditory evoked potentials (AEPs) were measured using the implanted sEEG electrodes. All participants understood English and had no history of hearing impairments. Each trial consisted of a 0.7-second auditory stimulus followed by a 1-second silent (baseline) period. Auditory stimuli consisted of 64 words and 64 non-words delivered through speakers. Task-related spectral perturbations were observed at pulvinar contacts, with relatively inferior pulvinar channels showing a statistically significant increase in broadband gamma (70-170 Hz) amplitude during the task (Wilcoxon rank-sum test). Morphological differences between word and non-word stimulus responses were observed across pulvinar channels. Participants also underwent electrical stimulation mapping during which single pulse electrical stimulation (SPES) was delivered to the pulvinar, and cortico-cortical evoked potentials (CCEPs) were measured at all other electrode contacts. Statistically significant responses (evaluated by comparing pre- and post-stimulation signal root mean square) were detected across known auditory processing brain regions, including Heschl’s and the superior temporal gyri. This result aligns with our hypothesis that the pulvinar may play a crucial yet understudied role in auditory perception.
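
As an illustrative aside, the core spectral comparison described here is easy to sketch: band-pass each sEEG channel in the broadband-gamma range (70-170 Hz), take the Hilbert envelope, and compare stimulus-window against baseline-window amplitudes with a Wilcoxon rank-sum test. The sampling rate, trial count, and data below are placeholders, not details from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import ranksums

def broadband_gamma_amplitude(x, fs, band=(70.0, 170.0)):
    """Hilbert envelope of the band-passed signal for one channel (1-D array)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    return np.abs(hilbert(filtfilt(b, a, x)))

fs = 1000.0                                        # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
trials = rng.standard_normal((64, int(1.7 * fs)))  # 64 trials: 0.7 s stimulus + 1.0 s baseline

env = np.array([broadband_gamma_amplitude(tr, fs) for tr in trials])
stim = env[:, : int(0.7 * fs)].mean(axis=1)        # mean gamma amplitude, stimulus window
base = env[:, int(0.7 * fs):].mean(axis=1)         # mean gamma amplitude, baseline window

stat, p = ranksums(stim, base)                     # Wilcoxon rank-sum test, as in the abstract
print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")
```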




MOLECULAR / CELLULAR LEVEL


Machine Learning-Aided Antibody Discovery for the Mu-Opioid Receptor
Rob Meijers,
Institute for Protein Innovation, Harvard Institutes of Medicine
Abstract: The Mu-opioid receptor (MOR) is the primary target of fentanyl and other addictive opioids at the center of the opioid crisis. Affinity reagents that could serve as long-acting medical countermeasures or tools to study MOR biology are difficult to obtain, because MOR is a membrane-embedded G-protein–coupled receptor that is challenging to produce and exposes limited epitope space for antibody recognition. In collaboration with the labs of Christoph Stein (Charité Berlin) and Markus Weber (Zuse Institute), and supported by the National Science Foundation, we generated antibody candidates for MOR using a minimalistic antibody display library combined with machine learning protocols. The library’s diversity is restricted to the third complementarity-determining region (CDR3) of the heavy chain, greatly reducing the complexity of the repertoire. Using next-generation deep sequencing, we obtain a simple, shorthand description for each antibody candidate, enabling the application of straightforward pattern recognition protocols to identify MOR-specific binding features across the repertoire. We have demonstrated that this approach allows efficient discovery of many antibody candidates for a range of cell surface receptors (Kothiwal et al. 2025). Here, we applied this protocol to MOR and obtained several antibody candidates currently under evaluation by our German collaborators. Notably, one is a bystander antibody that does not interfere with MOR function, making it ideal both for studying MOR physiology and as a carrier to deliver secondary payloads for targeted modulation of MOR activity.
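
As a loose illustration of the kind of "straightforward pattern recognition" on CDR3 shorthand descriptions that the abstract alludes to, the sketch below featurizes CDR3 sequences as 3-mer counts and fits a logistic-regression classifier. The sequences, labels, and feature choice are invented placeholders, not the study's protocol.

```python
from itertools import product
import numpy as np
from sklearn.linear_model import LogisticRegression

AA = "ACDEFGHIKLMNPQRSTVWY"
KMERS = ["".join(p) for p in product(AA, repeat=3)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def kmer_counts(seq, k=3):
    """Count overlapping k-mers of an amino-acid sequence."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - k + 1):
        v[INDEX[seq[i:i + k]]] += 1
    return v

# Placeholder CDR3 sequences and binder/non-binder labels (purely illustrative).
cdr3 = ["CARDYGGNSFDYW", "CARGGWELLRGVFDIW", "CAKDRGYSSGWYFDLW", "CARERGYTFDYW"]
labels = np.array([1, 1, 0, 0])

X = np.vstack([kmer_counts(s) for s in cdr3])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```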

Identifying Manifold Degeneracy and Estimating Confidence for Parameters of Compartmental Neuron Models with Hodgkin-Huxley Type Conductances
Namazifard Saina,
Baylor College of Medicine
Abstract: Much work has been devoted to fitting the biophysical properties of neurons in compartmental models with Hodgkin-Huxley type conductances. Yet, little is known about how reliable model parameters are or about their possible degeneracy. For example, when characterizing a membrane conductance through voltage-clamp (VC) experiments, one would like to know if the data will constrain the parameters and how reliable their estimates are. Similarly, when studying the responses of a neuron with multiple conductances in current clamp (CC), one would like to know how robust the model is to changes in peak conductances. Such degeneracy is linked to biological robustness and is key in understanding the constraints posed by conductance distributions on dendritic computation. To address these issues, a compartmental model with Hodgkin-Huxley (HH) type conductances was used. We studied synthetic and experimental VC data of the H-type conductance (gH) that is widely expressed in neuronal dendrites. We also studied the original HH model in VC and CC. Finally, we considered a stomatogastric ganglion neuron model in CC. The ordinary differential equation solutions, parameters, and their sensitivities were simultaneously estimated using collocation methods and automatic differentiation. This allowed us to solve the non-linear least squares (NLLS) problem associated with each model. Iterative tracing of parameter degeneracy manifolds was performed based on the singular value decomposition of the NLLS residual Jacobian. We have also introduced a new objective function that allows us to identify parameter degeneracy manifolds for membrane potential trajectories that are periodic at steady state and only differ in relative spike timings. This objective function is based on membrane potential characteristics, including the shape of the repeating spiking pattern, dominant frequencies and mean value, allowing us to effectively eliminate the effect of phase shifts.
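
The SVD-based degeneracy analysis has a compact numerical core, sketched below on a deliberately degenerate toy model (y = a·b·exp(-c·t), where only the product a·b is identifiable) rather than a conductance model; the collocation fitting, automatic differentiation, and the authors' periodic-trajectory objective are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 5.0, 200)
y_obs = 2.0 * 1.0 * np.exp(-1.5 * t)          # "data" generated with a=2, b=1, c=1.5

def residuals(p):
    a, b, c = p
    return a * b * np.exp(-c * t) - y_obs

fit = least_squares(residuals, x0=[1.0, 1.5, 1.0])
U, s, Vt = np.linalg.svd(fit.jac, full_matrices=False)
print("singular values:", s)                  # a near-zero value flags a degenerate direction

# Iterative manifold tracing: small steps along the flattest right-singular
# direction leave the least-squares cost essentially unchanged.
p = fit.x.copy()
for _ in range(5):
    p = p + 0.05 * Vt[-1]
    print("params:", np.round(p, 3), "cost:", 0.5 * np.sum(residuals(p) ** 2))
```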

Deep Conservation of Amacrine Cell Diversity Throughout Vertebrate Evolution
Dario Tommasini,
UC Berkeley
Abstract: Amacrine cells (ACs) are the most heterogeneous class of inhibitory neurons in the vertebrate retina, exhibiting morphological and functional complexity comparable to that of cortical interneurons. However, little is known about the nature of variation and specialization among AC types across the vertebrate phylogeny. Here, we integrate single-cell and single-nucleus transcriptomic atlases from 20 vertebrate species to reconstruct the evolutionary history of AC diversity. Through multi-species co-clustering, we identify 42 orthologous AC types (oACs), many of which exhibit a one-to-one correspondence across tetrapods and, in several cases, across all vertebrates. While deeply conserved in core molecular identity, AC types vary in abundance and specialization across species, likely reflecting adaptations to distinct visual ecologies. AC diversity scales with that of retinal ganglion cells (RGCs), indicative of co-evolution. Finally, we provide evidence that glycinergic ACs diverged early in vertebrate evolution, followed by a bifurcation between RGCs and GABAergic ACs, supporting a model in which these two classes share a common ancestral precursor. Together, these findings establish a unified evolutionary framework for understanding the diversity, development, and function of ACs across vertebrates.
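
As a simplified stand-in for the multi-species co-clustering used here, one can match cell types across two species by correlating their average expression over shared one-to-one orthologs and solving an assignment problem; the sketch below uses random placeholder profiles and is only meant to illustrate that matching step.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n_types, n_genes = 12, 500                                           # placeholder sizes
species_a = rng.standard_normal((n_types, n_genes))                  # type x ortholog mean expression
perm = rng.permutation(n_types)                                      # hidden correspondence
species_b = species_a[perm] + 0.3 * rng.standard_normal((n_types, n_genes))

corr = np.corrcoef(species_a, species_b)[:n_types, n_types:]         # cross-species correlations
row, col = linear_sum_assignment(-corr)                              # maximize total correlation
print("recovered matching correct:", np.array_equal(perm[col], row))
```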

Multiscale dynamics of neuronal electricity: how the effects of ion channel currents propagate at the neuronal membrane interface
Karthik Shekhar,
University of California, Berkeley
Abstract: Bioelectricity has until now been treated as a purely “electrical” phenomenon, and quantitative frameworks typically neglect the effects of temperature, membrane mechanics, and ionic reorganization. Classical mechanistic models, most notably the equivalent circuit framework pioneered by Hodgkin and Huxley, have been central to our understanding of bioelectricity. However, these models treat the living membrane as composed of lumped electrical elements, missing key physics that is central to the underlying biology: the localized (nanoscale) nature of ion transport through specialized channels and pumps; diffuse charge reorganization at membrane interfaces; and the deformable nature of the lipid bilayer, which enables coupling of electrical and mechanical effects.

In this talk, I will present recent and ongoing work, rooted in theory and computation, exploring the electrochemical response of neuronal membranes with diverse geometries under localized ionic currents and applied electric fields (Row et al., Physical Review Research, 2025; Farhadi et al., Physical Review E, 2025; Fernandes et al., arxiv:2508.14001, 2025). I will describe how dielectric mismatch, capacitive effects, and membrane geometry can shape long-range signal propagation and complex spatiotemporal dynamics, and can drive mechanically induced instabilities resulting from ion channel currents. The underlying work is rooted in the Poisson-Nernst-Planck framework, combined with the electromechanical analysis of curved membranes containing ion channels. By combining analytical theory with multiscale simulations, we hope to develop a mechanistic picture of the coupled electrochemical and mechanical behavior of neuronal membranes at the nano- and microscale. These insights open new avenues for predicting, interpreting, and potentially controlling membrane dynamics in both natural and engineered excitable systems.
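
For readers unfamiliar with the framework, the standard Poisson-Nernst-Planck (PNP) system referred to above couples ion transport to the electric potential; in our notation (which may differ from the cited papers),

```latex
% Nernst-Planck transport of species i (concentration c_i, valence z_i, diffusivity D_i),
% coupled to the potential phi through Poisson's equation with permittivity epsilon.
\frac{\partial c_i}{\partial t}
  = \nabla \cdot \left[ D_i \left( \nabla c_i + \frac{z_i e}{k_B T}\, c_i \nabla \phi \right) \right],
\qquad
-\,\nabla \cdot \left( \varepsilon \nabla \phi \right) = \sum_i z_i e\, c_i .
```

Here e is the elementary charge and k_B T the thermal energy; the membrane mechanics and channel currents enter through boundary conditions on the potential, the ion fluxes, and the bilayer shape, which are not shown.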




CIRCUITS

Maturation of Dynamic Propagations and Neuromodulatory Substrates in Nonhuman Primates
Ting Xu,
Child Mind Institute
Abstract: The brain’s spatiotemporal dynamics are crucial for cognition and are shaped by neuromodulatory systems during development (Meyer-Baese, 2022). While recent work has identified primate-specific dynamics, their maturation in relation to underlying molecular systems remains poorly understood. In this study, we analyzed resting-state fMRI data from a developmental macaque cohort alongside neurotransmitter receptor maps derived from post-mortem autoradiography. We aim to characterize spatiotemporal propagations in the nonhuman primate (NHP) brain, examine their developmental changes during macaque childhood, and investigate how neuromodulatory systems contribute to the maturation of functional dynamics in NHPs.

We analyzed resting-state fMRI data from two macaque cohorts shared by the PRIMatE Data Exchange (PRIME-DE): a discovery set (N = 103; age range: 0.85–4.42 years) and a replication set (N = 346; age range: 1.08–2.71 years), both from the University of Wisconsin-Madison. Using Complex Principal Component Analysis (CPCA), we extracted recurring spatiotemporal propagations and quantified their dominance and regional characteristics across development. Neurotransmitter receptor density maps were 3D reconstructed from autoradiography studies (Funck, 2022) and aligned to a macaque brain template. We then correlated the fMRI dynamic propagation maps with receptor density patterns to probe their molecular underpinnings.
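
A minimal sketch of the CPCA step, under the common convention of building analytic signals with a Hilbert transform and taking the SVD of the resulting complex time-by-region matrix (this implementation detail is our assumption, not a statement of the authors' exact pipeline); the data below are random placeholders.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
ts = rng.standard_normal((400, 90))           # time points x regions (placeholder BOLD data)
ts = (ts - ts.mean(axis=0)) / ts.std(axis=0)  # z-score each region

analytic = hilbert(ts, axis=0)                # complex analytic signal per region
U, s, Vh = np.linalg.svd(analytic, full_matrices=False)

explained = s**2 / np.sum(s**2)               # variance explained by each propagation pattern
amp = np.abs(Vh[0])                           # regional amplitude of the dominant pattern
phase = np.angle(Vh[0])                       # regional phase ~ relative propagation timing
print(explained[:3], amp.shape, phase.shape)
```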

Three primary propagation patterns, replicated across cohorts, explained over 50% of fMRI variance. The dominance of the principal pattern (Pattern 1) significantly declined with age (ρ=-0.34, p < 0.001). Developmentally, we observed decreasing propagation amplitudes in visual and default-mode networks and increasing amplitudes in prefrontal and somatosensory regions. Critically, the molecular association of Pattern 1 shifted significantly with age, transitioning from an initial alignment with inhibitory GABAA receptors to a mature alignment with excitatory glutamate (AMPA, Kainate) receptors.

Our findings reveal a key maturational principle of primate brain dynamics: a functional transition from inhibitory to excitatory systems governance. This highlights a dynamic rebalancing of excitation-inhibition as a core mechanism of neural circuit refinement during development. This work provides a novel framework for linking molecular and systems-level maturation and offers crucial insights into the neurodevelopmental basis of primate brain function.

Channel-Wise Attention Masks for Brain Connectome Fingerprinting
Haiwen Wang, Rutgers University
Abstract: Recent human neuroscience studies suggest that individual brains have patterns of inter-regional coordination that are as unique as an individual fingerprint. In this paper we take a data-driven, learning-based approach to investigate the fingerprinting property, identify persistent data patterns, and estimate them from large-scale resting-state fMRI datasets. We first confirm that the fingerprint is robust under a variety of correlation-based similarity measures that emphasize vertex correspondence. We develop a lightweight channel-wise attention model to learn a sparse personalized fingerprint mask that extracts the elements in a connectivity pattern that contribute the most to the fingerprinting property. We show that multiplying a connectivity matrix by an individual fingerprint mask improves fingerprinting accuracy across a broad family of similarity measures.
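
The matching step behind "fingerprinting accuracy" can be sketched compactly: identify each subject by correlating their session-2 connectivity vector with all session-1 vectors, optionally re-weighted element-wise by a mask. The hand-made random mask and synthetic data below only illustrate the mechanics; they do not reproduce the learned channel-wise attention model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sub, n_roi = 20, 50
iu = np.triu_indices(n_roi, k=1)                          # upper-triangle edge indices

base = rng.standard_normal((n_sub, iu[0].size))           # subject-specific structure
sess1 = base + 0.5 * rng.standard_normal(base.shape)      # two noisy "sessions"
sess2 = base + 0.5 * rng.standard_normal(base.shape)

def identify(query, database, mask=None):
    """Index of the best-matching database subject for each query row (Pearson r)."""
    if mask is not None:
        query, database = query * mask, database * mask
    q = (query - query.mean(1, keepdims=True)) / query.std(1, keepdims=True)
    d = (database - database.mean(1, keepdims=True)) / database.std(1, keepdims=True)
    return (q @ d.T / q.shape[1]).argmax(axis=1)

mask = (rng.random(iu[0].size) > 0.5).astype(float)       # placeholder sparse mask
acc = np.mean(identify(sess2, sess1, mask) == np.arange(n_sub))
print("identification accuracy:", acc)
```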


SPATIAL NAVIGATION


Vector-based navigation inspired by directional place cells
Harrison Espino,
University of California Irvine
Abstract: We introduce a navigation algorithm inspired by directional sensitivity observed in CA1 place cells of the rat hippocampus. These cells exhibit directional polarization characterized by vector fields converging to specific locations in the environment, known as ConSinks. By sampling from a population of such cells at varying orientations, an optimal vector of travel towards a goal can be determined. Our proposed algorithm aims to emulate this mechanism for learning goal-directed navigation tasks. We employ a novel learning rule that integrates environmental reward signals with an eligibility trace to determine the update eligibility of a cell's directional sensitivity. Compared to state-of-the-art Reinforcement Learning algorithms, our approach demonstrates superior performance and speed in learning to navigate towards goals in obstacle-filled environments. Additionally, our algorithm reproduces behavior analogous to experimental observations: the mean ConSink location dynamically shifts toward a new goal shortly after it is introduced.
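
A toy version of the ConSink readout can make the idea concrete: assume learning has already pulled each cell's ConSink toward the goal (as the abstract reports for the mean ConSink after a goal change), and read out a travel vector as the average of unit vectors from the current position toward each ConSink. The eligibility-trace learning rule itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells = 64
goal = np.array([8.0, 8.0])
consinks = goal + rng.normal(0.0, 0.5, size=(n_cells, 2))   # assumed post-learning ConSink locations

def population_travel_vector(pos):
    """Average of unit vectors from the current position toward each ConSink."""
    to_sink = consinks - pos
    dirs = to_sink / np.linalg.norm(to_sink, axis=1, keepdims=True)
    vec = dirs.mean(axis=0)
    return vec / np.linalg.norm(vec)

pos = np.array([2.0, 2.0])
for _ in range(50):
    pos = pos + 0.2 * population_travel_vector(pos)
    # The reward x eligibility-trace update of each cell's directional
    # sensitivity would act here; it is omitted in this sketch.
print("final position (should be near the goal):", np.round(pos, 2))
```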

Dendritic Dynamics for Compartment-Specific Learning
G. William Chapman,
Sandia National Laboratories
Abstract: Spatial navigation requires the formation of coherent, map-like representations of the environment while simultaneously tracking current location, typically in a largely unsupervised manner. Extensive evidence from mammalian neocortex and subcortical structures suggests that primary sensory areas, intermediate regions such as the retrosplenial cortex (RSC), and the hippocampal formation interact bidirectionally to support flexible transformations between egocentric and allocentric codes. Building on the theory of predictive learning, we propose a biologically-plausible learning rule in which proximal inputs drive spiking activity, while distal dendritic inputs modulate burst activity.

Learning then occurs on proximal dendrites based on a three-factor rule which combines the postsynaptic activity, presynaptic trace, and a longer-term trace of mean burst rate. This mechanism enables distal-guided learning of proximal synaptic weights. Implemented in an anatomically constrained sensory–RSC–hippocampal loop, this rule supports learning sensory-driven observations that align with internally generated expectations through contrastive learning of the feedforward firing rate with feedback expectations on distal dendrites.
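
The form of the three-factor update can be sketched in a few lines, under our own simplifying assumptions about the traces and the burst signal; this is an illustration of the rule's structure, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pre, T, dt = 20, 1000, 1.0                   # inputs, steps, step size (ms)
w = 0.1 * rng.random(n_pre)                    # proximal synaptic weights
pre_trace = np.zeros(n_pre)                    # fast presynaptic trace
burst_trace = 0.0                              # slow trace of mean burst rate
eta, tau_pre, tau_burst = 1e-3, 20.0, 200.0

for _ in range(T):
    pre_spikes = (rng.random(n_pre) < 0.05).astype(float)      # placeholder input spikes
    pre_trace += dt * (-pre_trace / tau_pre) + pre_spikes
    post_spike = float(w @ pre_trace > 1.0)                    # proximal drive -> spiking
    burst = float(post_spike and rng.random() < 0.3)           # distal input -> burst (placeholder)
    burst_trace += dt * (burst - burst_trace) / tau_burst
    # Three-factor rule: postsynaptic activity x presynaptic trace x burst-rate deviation.
    w += eta * post_spike * pre_trace * (burst - burst_trace)

print(np.round(w[:5], 4))
```
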
We train this network in a simulated virtual environment and evaluate learned tuning curves with respect to behavioral variables. We find that single-compartment dendritic models qualitatively resemble experimental findings: allocentric and egocentric representations emerge in the expected order along the processing hierarchy, reflecting a population-level transformation from raw sensory input to egocentric codes. Population-level analyses show similar results, with increasing allocentric decodability along the processing hierarchy. However, these models suffer from catastrophic interference when exposed to novel environments and support only unidirectional transformations from sensory to allocentric representations.

Finally, we discuss how coordinate transforms at the level of individual neurons may enable separation of encoding and retrieval populations to mitigate catastrophic interference and enable bidirectional transformations. We then extend the model to include neurons with multiple dendritic compartments. This branching architecture enables coordinate frame transformations to be learned at the level of individual neurons rather than relying on distributed population codes, by approximating matrix-matrix multiplication of inputs.

Distinct Physiological Properties Differentiate Rodent Area 29E From Other Parahippocampal Areas
Bharath Krishnan,
Johns Hopkins University
Abstract: Brodmann area 29e has been considered by different anatomists as a part of retrosplenial cortex, presubiculum (PR), or parasubiculum (PA). Despite being recognized as a distinct anatomical subdivision, its physiological and functional properties remain largely uncharacterized in rodents. We performed simultaneous tetrode recordings in five areas – area 29e, medial entorhinal cortex (MEC), postrhinal cortex (PO), dorsal PR, and PA – as rats (n=5) freely navigated two different environments: (i) a circular track in a planetarium-style virtual reality “dome” with visual landmarks and (ii) a square open arena. Across all recorded regions, power in the theta (6-10 Hz) and mid-gamma (50-90 Hz) bands of local field potentials (LFP) was higher compared to background 1/f activity. 29e and PR had lower theta band power compared to other regions, but there were no regional differences in power in the mid-gamma band. Single neurons in 29e exhibited lower firing rates compared to other regions. Compared to other brain areas, 29e neurons had lower theta rhythmicity in spike autocorrelograms and weaker coupling to LFP theta. However, 29e neurons showed greater rhythmicity in the gamma band. When the landmark array was rotated, 29e neurons with weaker spike-theta coupling were more strongly locked to landmarks than neurons with stronger coupling. Similarly, 29e neurons that responded more strongly to changes in the luminance contrast of landmarks had weaker theta coupling. The population of contrast-responsive cells was more strongly locked to visual landmarks when compared to 29e neurons without significant contrast responses. In the open arena, among spatially modulated 29e neurons, egocentric-HD and nondirectionally modulated cells showed significantly weaker theta coupling than allocentric-HD cells. 29e egocentric-HD and nondirectionally modulated neurons exhibited stronger landmark locking than 29e allocentric-HD neurons. The relative lack of theta-band LFP and theta-rhythmic spiking, together with prominent gamma oscillations and gamma-rhythmic firing, supports a visual processing role for 29e that is distinct from neighboring parahippocampal regions. Given the strong projections from area 29e to dorsal presubiculum, these findings identify 29e as a functionally distinct parahippocampal area that could serve as a route by which visual landmarks anchor an animal’s internal sense of direction to the external world.

Increased processing time preserves spatial information across reference frames: The role of posterior medial cortex
Liz Chrastil,
University of California Irvine
Abstract: Long-standing models of spatial processing posit that posterior medial cortex (PMC) regions such as the retrosplenial cortex (RSC) are crucial for translating spatial knowledge across viewpoint-dependent (egocentric) and observer-independent (allocentric) frames. In such models, changes to the relative utility of the egocentric or allocentric frame should result in either lower precision to maintain response speed or slower responding to maintain precision, but this prediction has not been directly tested. Inspired by previous work on task switching, we developed a novel spatial memory stay-switch paradigm where participants recalled information about spatial relationships from either a first-person (egocentric) or a top-down (allocentric) frame of reference. Participants learned the layout of a 100 m x 150 m virtual town environment by navigating to eight distinct stores within the town. In the spatial memory task, participants were told they would complete randomized trials from two spatial memory tasks. First-person trials showed a viewpoint from a virtual desert with only a single target store visible in front of the participant and a text prompt for a target store; participants turned to face the target store before submitting their response. Top-down trials showed an aerial view of the virtual desert (circle) with an icon at the center of the screen indicating the participant’s location relative to a store directly above it on the screen (establishing a heading comparable to the first-person trials) and a text prompt for a target store; participants rotated a pointer to the direction of the target store before submitting their response. Critically, approximately half of the trials stayed in the same format as the preceding trial (both top-down or both first-person) while the other half switched (top-down -> first-person and vice versa). Data from 27 neurotypical adults were analyzed to compare cognitive costs associated with implicit reference frame switching. Memory precision was measured as the absolute angular error between the correct target heading and the heading indicated by participants, and processing time was measured using the elapsed time between trial onset and participants’ responses. Results indicate that participants incurred a significant processing time cost for switch trials while memory precision did not differ across trial formats or switching. These results provide direct support for translator models of reference frame switching. Ongoing neuroimaging work is being conducted to further test whether RSC or other PMC regions indeed support reference frame switching.

Seconds-timescale Cholinergic Modulation of CA1 facilitates Path Integration
Zhuoyang Ye,
Max Planck Florida Institute for Neuroscience
Abstract: Spatial navigation is vital for survival, enabling foraging and threat avoidance. Navigation relies on path integration to track distance or time, a process critically dependent on the hippocampus. The hippocampus receives strong cholinergic input from the medial septum (MS), yet the precise cellular mechanisms by which this input shapes path integration remain largely unknown. We combined in vivo dual-color two-photon imaging with optogenetics in head-fixed mice performing one-dimensional path integration in a virtual reality environment. Our imaging data revealed that cholinergic axons from the MS to the hippocampal CA1 exhibited a rapid increase in activity at the onset of integration, followed by a decay over several seconds. Optogenetic activation of MS cholinergic neurons triggers a spatially confined acetylcholine release along the axons on a matching timescale. Furthermore, optogenetic inactivation of the MS cholinergic projections to the CA1 selectively impaired integration accuracy when applied at the onset of integration, but had no effect during the reward-approach period. Collectively, these findings demonstrate that MS cholinergic signaling in the CA1 is temporally precise and spatially restricted, and is indispensable for accurate path integration.


MEMORY


Thalamic Input Modulates Hippocampal Ripple Timing During Closed-Loop Sleep Stimulation: Insights from a Biophysical Model
Mingxiao Wei,
University of California, San Diego
Abstract: Sleep supports memory consolidation through coordinated interactions among brain rhythms, including slow oscillations (SOs), spindles, and sharp wave-ripples (SWRs) in the cortex, thalamus, and hippocampus. Empirical studies across species have shown that the spectral power, density, and temporal coupling of these rhythms positively correlate with improved memory consolidation. Recent studies have demonstrated that closed-loop auditory stimulation (CLAS), precisely timed to the SO phase, can increase power in the SO and spindle bands and can induce SWRs with minimal latency, thereby improving memory performance through enhanced triple-phase locking of the three rhythms. In principle, this permits experimental control over the timing of SWRs relative to the phase of the SO. However, relatively little attention has been given to the precise temporal ordering of these events or the circuit-level mechanisms mediating these effects.
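
The phase-targeting step at the heart of CLAS can be sketched offline: band-pass the EEG in the slow-oscillation range, estimate instantaneous phase with a Hilbert transform, and flag samples near a target phase. A real closed-loop system must predict the phase causally in real time; the sampling rate, band edges, and target phase below are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200.0                                      # assumed sampling rate (Hz)
t = np.arange(0.0, 30.0, 1.0 / fs)
rng = np.random.default_rng(6)
eeg = np.sin(2 * np.pi * 0.8 * t) + 0.3 * rng.standard_normal(t.size)   # toy SO-like signal

b, a = butter(2, [0.4, 1.25], btype="bandpass", fs=fs)
phase = np.angle(hilbert(filtfilt(b, a, eeg)))  # instantaneous SO phase (radians)

target_phase = 0.0                              # assumed target (e.g., near the up-state)
near_target = np.abs(np.angle(np.exp(1j * (phase - target_phase)))) < 0.1
print("candidate stimulation times (s):", np.round(t[near_target][:5], 2))
```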

To address this, we employed a large-scale biophysical model of the hippocampal-thalamic-cortical loop, incorporating spiking neuron populations in the cortex, thalamus (including the reticular, medial dorsal, and reuniens nuclei), and hippocampal subfields CA1 and CA3. Neural dynamics are modeled using detailed Hodgkin-Huxley neuron models in the thalamus and cortex, and computationally efficient integrate-and-fire models for the hippocampus. The model can reproduce SO, ripple, and SWR events during slow-wave sleep.

Critically, we introduced a reuniens nucleus-to-CA1 projection, an anatomically supported but underexplored pathway. Simulating CLAS by targeting thalamic relay neurons at specific SO phases reproduced key experimental findings: an immediate increase in ripple power following stimulation, followed by sustained suppression. Removal of the thalamus-to-CA1 connection abolished this pattern, highlighting its necessity in coordinating SO-SWR dynamics. Our results suggest that reuniens-mediated input modulates hippocampal responses to sensory stimulation during sleep. Beyond explaining recent experimental results, this model provides a platform for predicting how changes in connectivity within the hippocampal-thalamic-cortical loop affect sleep dynamics and memory consolidation outcomes.

Linking structural brain changes to age-related slow-wave sleep alterations: mechanistic insights from a whole-brain thalamocortical model of human sleep
Maria Gabriela Navas Zuloaga, University of California, San Diego
Abstract: Slow-wave sleep (SWS), a dominant brain state during non-rapid eye movement (NREM) sleep, is essential for memory consolidation. Its hallmark, the slow oscillation (SO, <1 Hz), reflects alternating active (“up”) and silent (“down”) states in the thalamocortical network. Sleep EEG recordings have shown an association between aging and disrupted SO properties, such as reduced amplitude and density. However, the link between age-related structural brain changes, like cortical thinning and white matter loss, and these electrophysiological alterations remains unclear. To address this, we developed a multi-scale, whole-brain thalamocortical network model of SWS with realistic human cortical connectivity and modified it to simulate aging. The model includes 10,242 cortical columns spanning one hemisphere, each comprising six layers with populations of spiking pyramidal (PY) and inhibitory (IN) neurons. It also features a thalamic module with biophysically grounded thalamocortical and reticular neurons. Long-range cortical connectivity is derived from diffusion MRI tractography data from the Human Connectome Project, with distance-dependent synaptic delays. A myelination-based hierarchical organization of connections determines laminar connectivity. We simulated aging as a progressive weakening or loss of synaptic connections. The model reproduced empirically observed age-related SO alterations, including reduced amplitude, density, and slope, alongside increased duration. Importantly, these effects were best captured when selectively degrading PY-PY, but not PY-IN, connections, pointing to a shift in excitation-inhibition balance as a driver of altered sleep dynamics in aging. These results provide mechanistic insight into how structural brain aging degrades the neural substrate of sleep and highlight circuit-level targets for interventions to protect sleep quality and memory across the lifespan.

Hippocampal indexing alters the stability landscape of synaptic weight space allowing life-long learning
Ryan Golden,
University of California, San Diego
Abstract: Systems-level consolidation holds that the hippocampus rapidly encodes new information during wakefulness, and that coordinated cortico-hippocampal replay during subsequent sleep transfers and stabilizes those traces in cortex. This idea captures key learning principles, but exactly how replay reshapes the synaptic-weight landscape - creating new representations while preserving old ones - remains unclear. To address this, we used a biophysically realistic network model to probe the effects of slow-wave sleep (SWS) on synaptic-weight space. We show that previously learned memories are stable attractors in that space, and that hippocampus-driven interactions between sharp-wave ripples and cortical slow waves push the system into new attractor states that jointly encode old and new memories. As a result, replay allows recently acquired information to be incorporated without degrading prior memories. Our results offer a novel mechanistic - and conveniently “geometric” - framework for understanding how sleep-driven replay sculpts synaptic weights during consolidation.


VISION


Factoring Multi-Channel LFPs into Interpretable Components using Locally-Competitive Sparse Coding
Garrett Kenyon,
Los Alamos National Laboratory
Abstract: Technology now exists to record local field potentials (LFPs) from multiple sites simultaneously across large areas of the brain. In order to interpret such recordings, it is necessary to first reduce their dimensionality. Typically, dimension reduction is accomplished via extensive spatial and/or temporal averaging but critical stimulus information encoded in the fine-grained structure of LFP may be lost. Here, we present a technique based on sparse coding using the Locally Competitive Algorithm (LCA) for resolving multichannel LFP data into a tractable number (256) of generative components that retain non-trivial structure across spatiotemporal scales. To train our model, we use a 150 min subset of the Allen Brain Observatory Neuropixels Visual Coding LFP dataset, segmented into ~160,000 non-overlapping 33 ms intervals sampled at 1250 Hz (32 time steps) across 22 channels separated by 20 μm within the mouse primary visual cortex (VISp). An LCA model with 256 nonconvolutional feature patches, each spanning the entire spatiotemporal LFP block (32 x 22), was then optimized for sparse reconstruction. A representative example of a sparse reconstruction of an LFP block exhibits substantial denoising, an inherent property of locally competitive sparse coding. Learned features moreover factor the LFP data into functional ensembles, several examples of which are shown (Fig 1, right). As is apparent from inspection, these learned features exhibit non-trivial spatiotemporal structure that may in turn encode non-trivial information about visual stimuli, a hypothesis we are currently investigating.
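
The LCA inference dynamics themselves are compact; the sketch below runs them with a random (unlearned) dictionary on one flattened 32 x 22 block of placeholder data, so it illustrates only the leaky-integration-plus-soft-threshold competition, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 32 * 22, 256                           # flattened block size, number of features
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary elements
x = rng.standard_normal(n)                    # one flattened LFP block (placeholder)

lam, tau, dt, n_steps = 0.5, 10.0, 1.0, 300
G = D.T @ D - np.eye(k)                       # lateral competition (Gram matrix minus identity)
drive = D.T @ x
u = np.zeros(k)

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

for _ in range(n_steps):
    a = soft_threshold(u, lam)
    u += (dt / tau) * (drive - u - G @ a)     # LCA membrane dynamics

a = soft_threshold(u, lam)
err = np.linalg.norm(x - D @ a) / np.linalg.norm(x)
print("active coefficients:", np.count_nonzero(a), "relative reconstruction error:", round(err, 3))
```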

Functional mapping of neurons in primary visual cortex using computational barcodes
Isabel Fernandez,
University of Maryland, College Park
Abstract: Visual perception relies on a hierarchy of visual processing that transforms raw visual input into meaningful representations. A fundamental question we address here is how to understand the contribution of individual neurons in the visual system to this process. Traditionally, functional descriptions of visual neurons have been based on the visual feature(s) that the neuron best responds to, typically summarized by tuning curves or receptive fields. However, such characterizations of neural function can be limiting in more natural visual contexts, where receptive-field-based models often struggle to effectively predict neuron responses, even for neurons at an early cortical stage of processing: the primary visual cortex (V1). Recently, models based on deep neural networks (DNN) have been shown to better predict V1 responses and likewise provide a more faithful representation of their role in visual processing. However, they cannot provide an interpretable description of a given neuron’s function, due to their depth and complexity. Here we present a variant of a DNN-based model that provides an interpretable framework based on the concept of computational barcodes, which are unique identifiers of each neuron based on the computation that it performs, such that neurons with the same computational role in visual processing have the same barcode and, likewise, neurons that share computations (e.g., within a neural circuit) have similar barcodes. To evaluate this approach, we used synthetic data produced by a large-scale spiking neural network model of V1 with biologically realistic connectivity and cell types that can reproduce many known classical and extra-classical properties. We show that, for certain models with biological constraints, this barcode approach can distinguish different cell types (layer 4 versus 2/3, excitatory versus inhibitory) based on unsupervised clustering in a “barcode space”. We further validate this framework using macaque V1 recordings. We propose computational barcodes as a new alternative to feature-based descriptions of neural function that can be used to understand visual neuron function at a population level over hierarchical visual computations. This approach provides a path towards a comprehensive understanding of the architecture of the visual cortex directly tied to visual function in natural visual contexts.

The structure of multi-area population activity in the macaque visual cortex
Anna Jasper,
Albert Einstein College of Medicine
Abstract: Most brain functions require coordination of neuronal population activity, which is distributed both within and across different networks or brain areas. Decades of work have provided a rich understanding of how moment-by-moment fluctuations in neuronal activity are shared among neurons within a single brain area. In contrast, how activity is structured across multiple (more than two) brain areas is essentially unknown. We recorded neuronal population spiking activity simultaneously in three early and midlevel cortical areas (V1, V2 and V3d) of the primate visual system. We analyzed the measured responses with Group Factor analysis (GFA), a linear dimensionality reduction approach which decomposes the responses into population activity patterns that are shared among neurons within one area, and across two or more areas. This analysis revealed that the structure of multi-area population activity has several notable features. First, much of the activity within each area was shared among neurons in that area, but not with neurons in other visual areas. Thus, much cortical activity is ‘private’ to each area, although the areas we sampled are richly inter-connected. Second, each pairing of visual areas shared distinct population activity fluctuations or, equivalently, interacted through distinct communication subspaces. This finding indicates that communication subspaces allow for distinct signal sharing between an area and its different partners. Finally, we found that some population activity was shared across all three areas, though most of these multi-network activity patterns were expressed unequally across each area. This work thus provides the first description of the structure of cortical population activity that is distributed across multiple brain areas, revealing the degree to which cortical areas operate independently, communicate selectively with each other, and coordinate their activity to form larger multi-area ensembles.

Spatiotemporal localization of category-specific gamma responses in the fusiform gyrus
Ziwei Li,
Washington University in St Louis
Abstract: Previous studies have identified brain regions involved in the processing of specific visual stimuli, such as the fusiform face area, fusiform body area, and visual word form area. Others found overlapping anatomical regions that respond to different categories of visual stimuli, suggesting a distributed network rather than only a specialized brain region for visual processing. We hypothesize that domain-specific and domain-generalizing regions exist in the fusiform gyrus and that they are characterized by distinct latencies. Broadband gamma (BBG) activity recorded using SEEG provides a direct measure of local neuronal firing and can be used to localize task-related cortical activity. Thirty human subjects implanted with SEEG electrodes participated in a rapid serial visual presentation study in which stimuli were randomly selected from ten categories (faces, bodies, scenes, objects, scrambled objects, line drawings, character strings, words, digits, and pseudowords). Each image was presented for 200 ms with an 800 ms interstimulus interval. Visual attention was assured through a 1-back task. We compared the coherence of BBG between pre- and post-stimulus durations to identify statistically significant electrode locations. Electrode locations were transformed into MNI space and statistically significant electrode locations were mapped onto a 1×1×1 mm rastered response matrix (Gaussian kernel σ = 2 mm). Spatial similarity between stimulus categories was computed as the dot product of Gaussian-blurred matrices, forming a response similarity matrix. Hierarchical clustering revealed three functional clusters in the left fusiform gyrus: graphical, lexical, and numerical. Domain-specific (responsive to a single cluster) regions are located more anteriorly, laterally, and inferiorly compared to domain-general regions (responsive to multiple clusters). Regions that respond to multiple stimulus groups have higher broadband high-frequency responses during the first 100 ms than those that process specific stimulus groups.
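
The spatial-similarity step described here (Gaussian-blurred electrode maps, dot-product similarity, hierarchical clustering) can be sketched directly; the electrode coordinates, grid size, and category list below are placeholders, not the study's data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(8)
categories = ["faces", "words", "digits", "objects"]     # placeholder subset of the 10 categories
grid = (60, 60, 60)                                      # 1 mm voxels (placeholder volume)

maps = []
for _ in categories:
    vol = np.zeros(grid)
    xyz = rng.integers(5, 55, size=(15, 3))              # "significant electrode" voxels
    vol[xyz[:, 0], xyz[:, 1], xyz[:, 2]] = 1.0
    maps.append(gaussian_filter(vol, sigma=2.0).ravel()) # sigma = 2 mm blur, as in the abstract
maps = np.vstack(maps)

similarity = maps @ maps.T                               # response similarity matrix (dot products)
norm = np.sqrt(np.diag(similarity))
distance = 1.0 - similarity / np.outer(norm, norm)       # cosine distance for clustering
Z = linkage(distance[np.triu_indices(len(categories), k=1)], method="average")
print(fcluster(Z, t=2, criterion="maxclust"))            # cluster labels per category
```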

Cell-type specific novelty encoding in cortical circuits
Hannah Choi,
Georgia Institute of Technology
Abstract: Understanding the cell-type-specific responses of cortical neurons during sensory tasks is crucial for unraveling the neural mechanisms underlying perception and behavior. Recent experimental studies have demonstrated distinct response patterns among excitatory neurons, vasoactive intestinal peptide-expressing (VIP) interneurons, and somatostatin-expressing (SST) interneurons in the mouse primary visual cortex (V1) during visual change detection. Notably, excitatory and VIP neurons exhibit stronger responses to novel stimuli, while SST neurons respond more to familiar stimuli. Additionally, VIP neurons show a ramping activity during unexpected stimulus omissions, and both VIP and excitatory neurons increase their responses when a stimulus change occurs. In this study, we develop a computational model based on predictive coding theory that replicates these experimental findings by assigning key algorithmic nodes to specific neural populations while accounting for their cell-type properties. Our model integrates both perceptual objectives—prediction error minimization and energy efficiency—and a behavioral objective via reinforcement learning (RL). Training with perceptual objectives alone reproduces the absolute novelty effects in excitatory, VIP, and SST neurons as well as the omission response in VIP cells, but fails to capture the contextual novelty effects in excitatory and VIP neurons. Incorporating the RL objective and reward-based modulation is necessary to capture the increased responses of these neurons during stimulus change. Finally, by testing a baseline model, we show that while response adaptation and Hebbian learning could capture contextual novelty effects, they fail to replicate absolute novelty and omission responses. Our findings suggest that a combination of predictive coding, energy efficiency, and reinforcement learning is necessary to explain the complex cell-type-specific responses in mouse V1. This work provides a possible computational mechanism through which these objectives interact during novelty encoding while suggesting a biologically plausible mapping between the algorithmic nodes of predictive coding and individual interneuron populations in the cortical microcircuit.

Synchronous spiking in the corticothalamic circuit
Nicholas Priebe, UT Austin
Abstract: Responses of neurons in sensory areas are variable. Understanding how this variability is correlated between neurons both within and across brain areas, and as a function of the relative tuning properties of these neurons, has vast implications for the study of sensory coding. We have previously shown that in primary visual cortex (V1), correlated activity at specific timescales across the neocortical population causes variable spiking responses in neurons within the neocortical circuit [1]. Here we ask whether the correlated activity, or synchrony, in the neocortex is inherited from thalamic inputs or emerges de novo in the neocortical circuit. We explore the physiological basis for cortical correlated activity by recording simultaneously from populations of neurons in the lateral geniculate nucleus (LGN) of the thalamus and its target, V1.

To compare the synchrony in the LGN and V1, we recorded the spiking responses of tens of neurons simultaneously with neuropixels probes in the LGN of awake mice. We find synchrony is present in LGN, but is weaker than in V1. We separated LGN spikes fired in the burst versus tonic modes and found that spikes fired in the burst mode were more synchronous than tonic spikes.
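
Spike-train synchrony of the kind discussed here is commonly quantified with a cross-correlogram; the binned version sketched below is a generic illustration, not necessarily the exact measure used in this study, and the spike times are synthetic.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=50.0, t_max_ms=10000.0):
    """Counts of coincidences between two spike trains at a range of time lags."""
    edges = np.arange(0.0, t_max_ms + bin_ms, bin_ms)
    a, _ = np.histogram(spikes_a, edges)
    b, _ = np.histogram(spikes_b, edges)
    lags = np.arange(-int(max_lag_ms / bin_ms), int(max_lag_ms / bin_ms) + 1)
    ccg = np.array([np.sum(a * np.roll(b, l)) for l in lags])
    return lags * bin_ms, ccg

rng = np.random.default_rng(9)
shared = np.sort(rng.uniform(0, 10000, 200))                           # shared-event times (ms)
lgn = np.sort(np.concatenate([shared, rng.uniform(0, 10000, 300)]))
v1 = np.sort(np.concatenate([shared + 3.0, rng.uniform(0, 10000, 300)]))
lags, ccg = cross_correlogram(lgn, v1)
print("peak at lag (ms):", lags[np.argmax(ccg)])
```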

Synchronous spiking could exist independently for the LGN and V1, or be correlated across brain regions. From our simultaneous recordings we find that synchrony is shared between LGN and V1, and the strength of this synchrony depends on the receptive field similarity of neurons in each region.

LGN and V1 are tightly coupled in a bidirectional circuit. Synchrony in V1 could be inherited from LGN projections. In addition, V1 may drive synchrony in LGN through corticothalamic projections. To determine whether corticothalamic feedback contributes to LGN synchrony, we suppressed V1 excitatory activity by stimulating parvalbumin-positive interneurons optogenetically, and uncovered a profound reduction in the magnitude and timescale of LGN synchrony, and a suppression of LGN bursts. Our results indicate that corticothalamic drive critically sculpts LGN synchronous activity.

By recording simultaneously from LGN and V1 populations, we have shown that synchronous spiking activity is shared across these regions, with strength depending on tuning similarity. Synchrony in the visual pathway does not emerge in an exclusively feedforward manner, as silencing corticothalamic projections reduces the strength of LGN synchrony. Our results inform an understanding of the physiological basis for variable neural responses by showing how synchronous activity is shared across visual areas in the awake mouse.


SENSORI-MOTOR / DECISION MAKING


Data-Driven Feature Extraction and Stability of ECoG Speech and Hand Motor Decoding in an ALS Patient
Dean J. Krusienski, Virginia Commonwealth University
Abstract: Recent studies have shown significant promise toward the development of speech neuroprostheses using intracranial signals. These advances have made chronic neural recording in clinical populations both feasible and informative for long-term decoding studies. As an extension of our intracranial data collected under the CRCNS award, we examine approximately 2.5 years of neural data from a clinical trial involving an individual with progressive ALS, performing established speech and hand grasp tasks. The participant was implanted with two electrocorticographic (ECoG) arrays over the sensorimotor cortex, targeting regions associated with speech and upper-limb motor control. Such rare longitudinal ECoG data from an ALS patient provides the unique opportunity to evaluate signal and decoding stability over time, as well as train data-hungry deep-learning models for data-driven feature characterization. Preliminary analysis indicates that bandpower in the high beta (21-30 Hz) range provides more stable decoding performance over time compared to the conventional high gamma (70-170 Hz) for both hand grasp and speech activity detection. To further explore the relevant feature space, we employed the well-established EEGNet deep-learning architecture for neural signals. Individual EEGNet models were trained to classify action versus idle states using raw ECoG data from grasp and speech tasks, both independently and jointly. The learned convolutional filters were examined to identify interpretable spectral and spatial features associated with each modality. The resulting filters revealed some expected spatio-temporal patterns including activity resembling local motor potentials over hand areas for the hand movements and broadband gamma over more diffuse regions for speech production. Additionally, unexpected patterns were observed that warrant further investigation, such as low-frequency ventral motor activations for hand movements and distinct gamma-range spectral peaks for the combined models. These findings aim to elucidate the shared and distinct neural substrates underlying speech and grasp tasks, while providing new insights into the spatial and spectral characteristics of relevant features for neural decoding.
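
The two bandpower features compared above are straightforward to compute; the sketch below takes Welch power spectra per channel and trial and averages power in the high-beta (21-30 Hz) and high-gamma (70-170 Hz) ranges. Sampling rate, trial structure, and data are placeholders, and the EEGNet models are not reproduced.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                            # assumed sampling rate (Hz)
rng = np.random.default_rng(10)
trials = rng.standard_normal((40, 16, int(2 * fs)))    # trials x channels x samples (placeholder)

def bandpower(x, fs, band):
    """Mean Welch power within a frequency band, computed along the last axis."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs))
    sel = (f >= band[0]) & (f <= band[1])
    return pxx[..., sel].mean(axis=-1)

beta = bandpower(trials, fs, (21.0, 30.0))             # trials x channels
gamma = bandpower(trials, fs, (70.0, 170.0))
features = np.concatenate([beta, gamma], axis=1)       # per-trial feature vector for a classifier
print(features.shape)
```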

Contributions of the subthalamic nucleus to reward-biased perceptual decision making in monkeys
Long Ding,
University of Pennsylvania
Abstract: The subthalamic nucleus (STN) is a part of the indirect and hyperdirect pathways in the basal ganglia (BG) and has been implicated in movement control, impulsivity, and decision-making. We recently demonstrated that, for perceptual decisions, the STN includes at least three subpopulations of neurons with different decision-related activity patterns. Here we show that, for decisions that depend on both noisy sensory evidence and reward expectations, many STN neurons are sensitive to both evidence and reward-related factors. Within a drift-diffusion framework, the three STN subpopulations show different relationships to model components reflecting formation of the decision variable, dynamics of the decision bound, and non-decision-related processes. The subpopulations also differ in their representations of quantities related to decision evaluation, such as expected accuracy and reward. These results suggest that the STN plays multiple roles in decision formation and evaluation to guide complex decisions that combine multiple sources of information.
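
For orientation, the drift-diffusion framing referred to above can be sketched as evidence accumulation with a reward-related starting-point bias and a collapsing bound; all parameter values below are illustrative, not fitted quantities from the study.

```python
import numpy as np

def ddm_trial(drift, bias=0.0, bound0=1.0, collapse=0.5, dt=0.001, sigma=1.0,
              rng=np.random.default_rng(11)):
    """One simulated trial: returns (choice, decision time)."""
    x, t = bias, 0.0
    while True:
        bound = bound0 / (1.0 + collapse * t)          # collapsing decision bound
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt

choices, rts = zip(*(ddm_trial(drift=1.5, bias=0.2) for _ in range(200)))
print("P(choose +):", np.mean(np.array(choices) == 1), "mean RT (s):", round(np.mean(rts), 3))
```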

Small adjustments in mechanotransduction currents drive receptive field sizes of single afferents and discrimination of monofilaments by populations
Gregory J. Gerling,
Systems & Information Engineering, University of Virginia
Abstract: A diverse population of mechanosensitive afferents in the skin informs our sense of touch. While empirical measurements can be made from single afferent units using microneurography, the derivation of a population response, and connections between disparate measurement modalities, require computational approaches. Prior models have tended to be stimulus-dependent and, at least in part data-driven, which can hinder their ability to predict neural responses to untrained stimuli. Moreover, most prior models have required precise knowledge of contact relative to receptive field center. This is unrealistic, in that a neuron does not know the relative location of the stimulus, but only responds to a spatial pattern of stress and/or strain near its receptive field. This talk will describe a biophysical model designed to predict the firing responses of both single-unit mechanoreceptive afferents, and populations of afferents, in response to thin, low-force monofilaments indented into the human finger. As input, this effort takes a high-resolution imaging approach using 3D digital image correlation to measure the skin surface. Then, by making small adjustments in the parameters of the modeled biophysical neuron, it can mediate peak and steady-state firing properties and, notably, the size of receptive fields, highlighting how these factors are interrelated. Moreover, in varying these factors in concert with the density of the population of afferents, the model can differentiate the monofilament stimuli based on temporal patterns in the recruitment of receptive fields.

Goals shape dynamics of attention and selection for value-based decision-making
Amitai Shenhav,
UC Berkeley
Abstract: Humans can flexibly adjust how they make decisions to arbitrary goals. However, most theories in decision-making focus on predicting one specific choice type (i.e., choosing the best option). Here, we link decision-making and cognitive-control research to test a theory that accounts for flexible adjustments of choice mechanisms to different goals and demands. Our biologically inspired model specifies how different features translate into evidence for the current goal, and how evidence is mapped onto different output structures. We tested the model in an eye-tracking study in which participants were asked to choose one out of four consumer products or to appraise the entire set, each with respect to positive or negative value. The results confirmed our preregistered hypotheses that response time (RT) should decrease with the overall value of a set of options in choose-best but increase in choose-worst trials. As predicted, this interaction was absent in appraisal RT, which instead exhibited an inverted-U-shaped pattern. Furthermore, the amount of attention devoted to an option was positively related to its value in choose-best, negatively related in choose-worst trials, and unrelated when participants appraised entire sets of products. Time-resolved analyses of eye movements revealed strategic goal-dependent search processes, as attention is increasingly focused on goal-congruent options in choice but remains more uniformly distributed in appraisal. Our findings suggest that cognitive control shapes choice and search dynamics by flexibly adjusting them to current goals and demands.

Disinhibition versus Feed-Forward Suppression: Divergent Impacts on Odor Discrimination in an RNN Model of Locust Antennal Lobe
Shruti Joshi,
University of California, San Diego
Abstract: In the insect antennal lobe (AL), inhibitory circuits play an important role in shaping olfactory representations, yet the distinct contributions of different inhibitory pathways to circuit stability and function remain unclear. Here, we investigate how feed-forward (LN-PN) and recurrent (LN-LN) inhibition differentially regulate olfactory processing using a biologically constrained recurrent neural network (RNN) model of the locust AL (830 PNs, 300 LNs) trained on in vivo data.

Our model recapitulated key physiological features, including diverse temporal responses in projection neurons (PNs). Many PNs responded to odor onset, while others responded to odor offset. These offset responses were generated intrinsically by the recurrent inhibitory network, persisting even when absent from receptor neuron inputs. Global manipulations of inhibition revealed a remarkable stability in the mean PN firing rate, which was maintained by an antagonistic balance between direct LN-PN inhibition and LN-mediated disinhibition.

This firing rate stability, however, masked trade-offs in population coding. We found that the strength of inhibition directly modulated coding dimensionality. Weak inhibition compressed neural activity into a low-dimensional space that faithfully tracked receptor input and preserved stimulus decodability. In contrast, strong inhibition expanded the coding space but impaired odor discrimination accuracy. Pathway-specific manipulations isolated the functionally distinct roles of these inhibitory motifs. The temporal structure of odor responses was highly sensitive to LN-LN connectivity; perturbing mutual inhibition between LNs diminished offset responses while amplifying onset responses. Enhancing LN-LN inhibition expanded response dimensionality but destroyed stimulus information. Feed-forward LN-PN inhibition exerted more direct control over network activity, as amplifying this pathway suppressed both firing rates and dimensionality.
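
One common way to quantify the "coding dimensionality" discussed in this paragraph is the participation ratio of the population covariance, PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues); the abstract does not specify its measure, so the sketch below is only a generic illustration on random placeholder responses.

```python
import numpy as np

def participation_ratio(responses):
    """responses: samples x neurons; returns the participation-ratio dimensionality."""
    cov = np.cov(responses, rowvar=False)
    eig = np.linalg.eigvalsh(cov)
    return eig.sum() ** 2 / np.sum(eig ** 2)

rng = np.random.default_rng(12)
low_d = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))   # ~5-dimensional activity
full_d = rng.standard_normal((200, 100))                                # unstructured activity
print(round(participation_ratio(low_d), 1), round(participation_ratio(full_d), 1))
```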

Together, these results dissociate the roles of key inhibitory pathways, indicating that recurrent LN-LN connections primarily shape the temporal dynamics and dimensionality of the olfactory code, while feed-forward LN-PN connections regulate overall network gain and stability.

Beyond the hyperdirect pathway: the hidden role of Arkypallidal neurons in stopping actions
Catalina Vich Llompart,
Universitat de les Illes Balears
Abstract: Being able to stop or adjust an action at the right moment, in response to an external stimulus, is a critical capacity for everyday life, from braking at a red light to holding back an impulsive response. This ability, known as reactive inhibitory control, depends on a set of interconnected brain regions collectively called the cortico-basal ganglia-thalamic (CBGT) network. For decades, it was believed that stopping actions relied mainly on a single route, the so-called hyperdirect pathway, which carries “stop signals” from the cortex straight into deeper brain structures. However, new discoveries have called this model into question. In particular, the external globus pallidus (GPe), traditionally thought of as a simple relay station within the indirect pathway, has been shown to play a much richer role than previously thought. We use a biologically inspired computational model of spiking neurons to test how these new findings reshape our understanding of inhibitory control. Our model includes a population of cells in the GPe called arkypallidal neurons (or Arky cells), which send signals back “upward” to the striatum, the main input hub of the basal ganglia. These ascending signals counterbalance the usual “downward” signals sent by another GPe cell type, the prototypical neurons. By simulating the effects of stop signal inputs to different experimentally-identified targets in the basal ganglia (the subthalamic nucleus, STN; the striatal indirect spiny projection neurons, iSPNs; and the Arky cells), we show that the effectiveness of stopping depends strongly on the contribution of Arky cells. In particular, we show that the subthalamic nucleus supports inhibition by transmitting signals through the GPe, and, when the Arky pathway is disrupted in combination with STN and/or iSPN activation, the ability to stop is markedly reduced.
These results suggest that reactive inhibitory control is not a simple one-way process, but instead relies on a delicate balance of ascending and descending signals within the CBGT network. In particular, Arky neurons act as key regulators, influencing how different populations of striatal neurons compete with each other when a stop signal arrives.
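
A deliberately simplified caricature of this pathway logic (not the authors' spiking CBGT model, and with made-up parameters) can show how an ascending Arky-to-striatum signal changes the effect of a stop input on the striatal "go" drive:

    import numpy as np

    # Leaky-integrator toy: a stop signal excites STN and Arky cells; Arky cells
    # feed back onto the striatal "go" population. All parameters are hypothetical.
    dt, T, tau, stop_on = 1.0, 400.0, 20.0, 200.0   # ms

    def run(arky_gain):
        go, arky, stn = 0.0, 0.0, 0.0
        for step in range(int(T / dt)):
            stop = 1.0 if step * dt >= stop_on else 0.0
            stn += dt * (-stn + stop) / tau
            arky += dt * (-arky + stop + 0.5 * stn) / tau      # STN -> GPe (Arky) relay
            go = max(0.0, go + dt * (-go + 1.0 - arky_gain * arky) / tau)
        return go

    print("go drive with intact Arky feedback:", round(run(arky_gain=2.0), 3))
    print("go drive with Arky pathway removed:", round(run(arky_gain=0.0), 3))

In this toy, the stop input suppresses the go drive only when the ascending Arky pathway is intact; removing it leaves the go drive essentially unchanged.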

Functional reorganization of motor cortex subnetworks during naturalistic and trained behaviors
Noa Shmueli
, Tel Aviv University
Abstract: Goal-directed behavior relies on coordinated neural activity encoding multiple aspects of action selection, including monitoring of action outcomes. In mice, the anterior lateral motor (ALM) cortex is critical for planning and executing memory-guided licking behaviors. Although ALM has been extensively studied in trained animals, it remains unclear how its subnetworks are organized during naturalistic, untrained behaviors and how they reorganize with learning. While Hebbian learning predicts like-to-like connectivity, tuning and learning dynamics are highly heterogeneous, and the diversity of functional coupling among co-tuned subpopulations is poorly understood. We used two-photon calcium imaging of ~100,000 layer 2/3 excitatory neurons, combined with clustering and dimensionality reduction, to examine how ALM subnetworks encode target location, movement timing, and reward outcome in naïve mice. Unsupervised clustering revealed distinct neuronal groups with stereotyped tuning and stronger within-cluster functional connectivity during spontaneous activity (outside the task). Removing shared low-dimensional components uncovered enhanced within-cluster coupling and inhibitory interactions between oppositely tuned neurons, suggesting recurrent interactions masked by global signals. Functional coupling was strongest in a subpopulation encoding action outcomes. Longitudinal imaging during learning of a memory-guided decision-making task revealed both stable and learning-dependent changes, indicating partial network reorganization. A subset of co-tuned neurons maintained sustained post-choice selectivity – likely encoding action outcomes or choice memory – and this population expanded with learning and developed stronger functional connectivity. These results suggest that ALM subnetworks encode a structured representation of movement and outcome that is dynamically refined by experience, linking local recurrent interactions to the emergence of goal-directed behavior.
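
One common way to remove shared low-dimensional components before measuring within-cluster coupling is to project out the top principal components and correlate the residuals. The sketch below uses synthetic data and a placeholder number of shared components; it is not the authors' pipeline.

    import numpy as np
    from sklearn.decomposition import PCA

    def residualize(activity, n_shared=3):
        """Project out the top shared components from a (time x neurons)
        activity matrix and return the residuals."""
        activity = activity - activity.mean(axis=0)
        pca = PCA(n_components=n_shared).fit(activity)
        shared = pca.transform(activity) @ pca.components_
        return activity - shared

    def within_cluster_coupling(residuals, labels):
        """Mean pairwise correlation among neurons sharing a cluster label."""
        corr = np.corrcoef(residuals.T)
        vals = []
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            if len(idx) > 1:
                sub = corr[np.ix_(idx, idx)]
                vals.append(sub[np.triu_indices(len(idx), k=1)].mean())
        return np.mean(vals)

    rng = np.random.default_rng(1)
    activity = rng.normal(size=(1000, 60))        # placeholder data
    labels = rng.integers(0, 4, size=60)          # placeholder cluster labels
    print(within_cluster_coupling(residualize(activity), labels))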

Octopamine and tyramine dynamics predict learning rate phenotypes during associative conditioning in honey bees
Brian Smith
, Arizona State University
Abstract: Biogenic amines are fundamental for physiological homeostasis and behavioral control in both vertebrates and invertebrates. Monoamine neurotransmitters released in target brain regions conjointly regulate adaptive learning and plasticity. However, our understanding of these multi-analyte mechanisms remains nascent, in part due to limitations in measurement technology. Here, during associative conditioning in honey bees, we concurrently tracked sub-second fluctuations in octopamine, tyramine, dopamine, and serotonin in the antennal lobe, where plasticity influences odorant representations. By repeatedly pairing an odorant with subsequent sucrose delivery, we observed individual differences in the conditioned response to odor, which occurred after a variable number of pairings (learners) or not at all (non-learners). The distinction between learners and non-learners was reflected in neurotransmitter responses across experimental conditions. Remarkably, the speed of learning – the number of pairings prior to a proboscis extension reflex – could be predicted from monoamine opponent signaling (octopamine–tyramine), both from the first presentation of the odorant alone, prior to any pairing with sucrose, and from the first conditioned response to the odorant, coming after a number of sucrose pairings. These results suggest monoamine signaling phenotypes may relate directly to the widely reported, socially relevant genetic differences in honey bee learning. Future analyses will need to combine empirical measurement of multiple biogenic amines with evaluation of neural plasticity and computational modeling to more fully reveal how opponent signaling works in the brain.
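
As a purely illustrative sketch of how an opponent signal could serve as a predictor (synthetic data and hypothetical variable names, not the study's analysis), one could classify learners from the octopamine-minus-tyramine response to the first odor presentation:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical illustration: opponent signal (OA - TA) on the first odor
    # presentation used to classify learners vs. non-learners.
    rng = np.random.default_rng(2)
    n_bees = 80
    opponent = rng.normal(size=n_bees)                          # OA - TA response (a.u.)
    learner = (opponent + 0.5 * rng.normal(size=n_bees)) > 0    # synthetic labels

    clf = LogisticRegression().fit(opponent.reshape(-1, 1), learner)
    print("training accuracy:", clf.score(opponent.reshape(-1, 1), learner))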

A Species-Specific Spinal Circuit for Coordinating Antagonist Motor Pools
Timothy Cope,
Georgia Institute of Technology
Abstract: Everyday motor tasks could not be accomplished efficiently without the coordinated activation of muscles that generate opposing forces at skeletal joints, i.e., muscle antagonists. Current understanding of the underlying spinal circuits in mammals has long been driven by detailed examination of felines. However, the neural control of movement must be adapted to allometric, biomechanical, and behavioral differences among species. Here, we explore a short-latency spinal reflex pathway that promotes co-activation of antagonist motor pools in rats but not cats. In anesthetized adult rats, every motoneuron in the tibialis anterior – extensor digitorum longus (TA-EDL, ankle flexors) or medial gastrocnemius (MG, ankle extensor) motor pools produces an excitatory post-synaptic potential (EPSP) in response to stretch of the corresponding antagonist muscle. The pathway is oligosynaptic, with EPSP latencies that average 5.53 ± 2.19 ms (n=65 motoneurons, MNs) in the extensor-to-flexor path and 4.06 ± 1.48 ms (n=42 MNs) in the flexor-to-extensor path. The average amplitudes of the antagonist EPSPs were similar (0.59 ± 0.3 mV and 0.60 ± 0.4 mV), suggestive of strong pathways, ones in which muscle stretch overcomes anesthesia to drive multiple premotor interneurons. Comparisons of antagonist EPSPs elicited by quick muscle stretch versus vibration demonstrate pathway activation by group Ia afferents together with group II and/or Ib afferents. Incorporating this novel spinal circuitry into a model of rodent biomechanics preserves locomotor behavior and enhances co-activation of ankle flexors and extensors prior to stance, potentially addressing ankle torque differences between non-cursorial (rats) and cursorial (cats) animals. This approach provides insight into how neural systems adapt to biomechanical and behavioral variations across species.

Mesoscale dynamics of action-outcome representations across the cortex in naturalistic goal-directed behavior
Tal Chamilevsky
, Tel Aviv University
Abstract: Goal-directed behaviors involve selecting actions and adjusting responses based on observed action consequences. Here, we studied a novel naturalistic behavior consisting of multidirectional tongue-reaching movements towards a target to obtain a water reward (target presented on a grid of possible locations in front of the mouse’s face). This behavior did not require training and allowed us to study the underlying neural activity before it is shaped by learning. Using mesoscale 2-photon calcium imaging, we recorded the activity of ~1,000,000 neurons across >10 cortical areas including motor, somatosensory, and high-order visual regions. Cells were tuned to target location, reward outcome, and action time across all areas tested, including conjunctive representation of all three variables. While target location selectivity was rather transient, the encoding of unexpected large reward persisted for >30 s across all cortical areas tested. Temporal dynamics within each cortical area showed rapid transitions between neuronal ensembles encoding reward information. Specifically, reward modulation persisted at the population level, but was implemented dynamically by rapidly rotating dimensions in neural-activity space that corresponded to specific neuronal ensembles with transient reward selectivity. The joint encoding of target location and reward modulation is consistent with a mathematical network model where these multivariate quantities are maximally mixed. We believe these long-lasting representations could indicate memory of action outcomes. We propose that the multiplexed nature of this memory could enhance its efficacy as a teaching signal for learning.
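
One simple way to quantify "rapidly rotating dimensions" is to fit a reward-coding axis in each time bin and track the cosine similarity between axes across bins. The sketch below is a generic illustration on synthetic data, not the authors' analysis.

    import numpy as np

    def coding_axis(pop, reward):
        """Unit-norm reward-coding axis: difference of mean population vectors
        on rewarded vs. unrewarded trials in one time bin."""
        axis = pop[reward].mean(axis=0) - pop[~reward].mean(axis=0)
        return axis / np.linalg.norm(axis)

    rng = np.random.default_rng(3)
    n_trials, n_neurons, n_bins = 200, 100, 10
    reward = rng.random(n_trials) > 0.5
    data = rng.normal(size=(n_bins, n_trials, n_neurons))
    for b in range(n_bins):                       # reward-coding direction rotates over time
        angle = b * np.pi / (2 * (n_bins - 1))
        drift = np.zeros(n_neurons)
        drift[0], drift[1] = np.cos(angle), np.sin(angle)
        data[b, reward] += 2.0 * drift

    axes = np.stack([coding_axis(data[b], reward) for b in range(n_bins)])
    print(np.round(axes @ axes.T, 2)[0])          # similarity to bin 0 falls off as the axis rotates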

Dynamical sensory neuron force measurements improve performance of Carausius morosus robot to perturbations
Isabella Kudyba, West Virginia University
Abstract: To enable robust and adaptive standing and walking, animals have mechanosensors that measure resisted forces and enhance motor control. Campaniform sensilla (CS), the mechanosensors of insects, have been shown to respond dynamically to external forces, firing at higher frequencies when force increases and falling silent when force decreases. This dynamic response lets CS act as phase lead compensators, a type of high-pass filter, for motor neuron activation. We hypothesize that the CS dynamic response allows insects to predict and quickly respond to changing environmental forces. How does the dynamic response of force sensors impact the ability of animals to control locomotion? In this study, we perturbed a robotic stick insect and measured the resulting strains at the locations of CS fields. We incorporated force feedback using different phase lead compensators modeled after stick insect CS recordings, a controller based on the viscoelasticity of the leg, and a control condition in which no force feedback was incorporated. The controllers’ success was measured by comparing the RMSE of the joint positions. These data help clarify how the intricate dynamics of sensory neurons benefit motor control in walking and standing. Additionally, by applying biologically inspired controllers to robots, this work may improve the design of walking robots, enabling them to adapt robustly to environmental changes.
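
For readers unfamiliar with phase lead compensation, the filter class referred to above can be written as C(s) = K(s + z)/(s + p) with z < p; discretized, it advances the phase of a periodic strain signal. The sketch below uses placeholder gains, not the parameters fitted to stick insect CS recordings.

    import numpy as np
    from scipy.signal import bilinear, lfilter

    # Phase-lead compensator C(s) = K*(s + z)/(s + p), discretized with a
    # bilinear transform. Gains are placeholders, not CS-fitted values.
    fs = 1000.0                                   # sample rate (Hz)
    K, z, p = 10.0, 2 * np.pi * 2.0, 2 * np.pi * 20.0
    b, a = bilinear([K, K * z], [1.0, p], fs=fs)

    t = np.arange(0.0, 2.0, 1.0 / fs)
    strain = np.sin(2 * np.pi * 1.0 * t)          # strain at a CS field location
    feedback = lfilter(b, a, strain)              # phase-advanced feedback signal

    # The lead output peaks earlier in each loading cycle than the raw strain.
    cycle = slice(int(1.0 * fs), int(2.0 * fs))
    print("raw strain peak (s): ", t[cycle][np.argmax(strain[cycle])])
    print("lead output peak (s):", t[cycle][np.argmax(feedback[cycle])])

The earlier peak is the anticipatory property that makes this filter class useful as feedback for motor control.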



 

COGNITION AND SLEEP


Modeling the role of infraslow oscillatory rhythms in NREM-REM sleep dynamics
Cecilia Diniz Behn
, Colorado School of Mines
Abstract: During sleep, the mammalian brain alternates between two major brain states, rapid eye movement (REM) and non-REM (NREM) sleep, which are characterized by differential release of neuromodulators such as noradrenaline (NE). Recent experimental advances with improved temporal resolution have refined our understanding of how noradrenergic locus coeruleus (LC) activity changes with behavioral state. Specifically, in NREM sleep, the firing activity of the LC and its release of NE have been shown to be phasic and to strongly reflect an infraslow (~50 s) rhythm. This rhythm is present in the electroencephalogram (EEG) as a modulation in the σ (10-15 Hz) power range known as the infraslow σ power (ISP) rhythm, and it is also reflected in the firing activity of brainstem REM sleep-regulatory areas such as the dorsomedial medulla (dmM) and the periaqueductal gray (PAG). Importantly, transitions from NREM to REM sleep and spontaneous awakenings are synchronized with the ISP rhythm, suggesting that the rhythm plays a crucial role in shaping NREM-REM cycles and overall sleep architecture. We are developing computational models for REM-regulatory brainstem circuits, including the LC, to analyze how infraslow rhythmic dynamics and slow ultradian processes that promote REM sleep interact to govern NREM-REM cycling and the temporal architecture of sleep.
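
A deliberately oversimplified toy (not the authors' circuit model) conveys the core idea that an infraslow rhythm can gate state transitions: if the instantaneous probability of leaving NREM is modulated by a ~50 s rhythm, transition times cluster at a preferred rhythm phase. All parameters below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(5)
    dt, period = 0.1, 50.0                                   # seconds
    t = np.arange(0.0, 3000.0, dt)
    isp = 0.5 * (1.0 + np.sin(2 * np.pi * t / period))       # infraslow sigma-power proxy
    p_leave = 0.05 * (1.0 - isp) * dt                        # exits favored at ISP troughs

    transitions = t[rng.random(t.size) < p_leave]
    phase = (2 * np.pi * transitions / period) % (2 * np.pi)
    # Transitions cluster near the trough of the rhythm (mean phase about -pi/2).
    print("transitions:", transitions.size,
          "mean phase (rad):", round(float(np.angle(np.mean(np.exp(1j * phase)))), 2))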

Low-Dimensional Population Dynamics in the Brainstem Gate REM Sleep
Franz Weber
, University of Pennsylvania
Abstract: Rapid eye movement (REM) sleep is generated in the brainstem. Yet, the dynamics in neural brainstem populations driving transitions to REM sleep remain largely unknown. Combining Neuropixels recordings with dimensionality reduction, we found that the population activity in midbrain and pons is dominated by two components, one of which captures strong infraslow fluctuations in neural activity. During transitions from non-REM (NREM) to REM sleep, the population activity followed a stereotypic trajectory, preceded by an increase in the infraslow component. Our analysis revealed subpopulations of REM sleep-activated and -inhibited neurons across all areas with opposing infraslow dynamics and diverging ramping activity between REM episodes. Connectivity analysis identified antagonistic interactions between subpopulations with opposing infraslow tuning. Activation of REM-sleep promoting medullary neurons rapidly enhanced the infraslow component, whose strength gated the ability of upstream circuits to induce REM sleep. Collectively, our results identify a population-level mechanism gating REM sleep, suggesting that NREM-to-REM transitions are coordinated by low-dimensional, antagonistic brainstem dynamics.
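
A minimal version of the first analysis step described here is to reduce binned population activity to its leading components and isolate an infraslow fluctuation in the dominant one. The sketch below uses synthetic data and placeholder parameters, not the Neuropixels recordings.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(6)
    fs = 2.0                                     # bins per second
    t = np.arange(0.0, 1800.0, 1.0 / fs)         # 30-minute recording
    infraslow = np.sin(2 * np.pi * t / 50.0)     # shared ~50 s fluctuation
    counts = rng.poisson(5.0 + 2.0 * np.outer(infraslow, rng.random(120)))  # time x neurons

    scores = PCA(n_components=2).fit_transform(counts - counts.mean(axis=0))
    b, a = butter(2, 0.05 / (fs / 2.0), btype="low")         # keep < 0.05 Hz
    infraslow_component = filtfilt(b, a, scores[:, 0])
    # Sign of a principal component is arbitrary, so compare magnitudes only.
    print(abs(np.corrcoef(infraslow_component, infraslow)[0, 1]).round(2))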

Building and Updating Social Knowledge in Real-Time Learning
Gabriela Rosenblau
, George Washington University
Abstract: Humans acquire structured social knowledge through interactions. This structured knowledge enables generalizations across traits and preferences and across individuals. Despite a long tradition in social psychology of studying social knowledge structures or schemata (Fiske & Taylor, 2016), it remains unclear how humans represent and apply this knowledge to learn about others. Prior work has highlighted reference points (expectations from prior experience) and granularity (detail of representation) as key dimensions, but small samples and exclusively item-level analyses have limited our ability to characterize person types or social profiles during learning and to determine whether these vary across cultures or instruments. Here, we collected large-scale trait and preference ratings from two independent samples: a discovery cohort of 1,530 English-speaking adults and a replication cohort of 938 participants from the US and European German-speaking countries. Data were analyzed using exploratory graph analyses (EGA) and network comparison tests to identify replicable latent dimensions. Preference data reduced to semantic categories, while trait dimensions converged with the Big Five and were externally validated with clinical questionnaires. Dimensional structures were robust across English- and German-speaking samples, as confirmed by invariance analyses. At the person level, latent profile analysis (LPA) was applied to the discovery sample, yielding four preference and trait profiles. XGBoost-based feature importance analyses indicated that one profile was most reliably recoverable in the replication sample. These findings provided the basis for prototype-based social learning tasks. In a behavioral experiment (N=40), participants learned about profiles from the discovery sample. Mixed-effects models revealed significant profile effects on learning trajectories. A second behavioral study (N=30) is underway to evaluate learning about the replicable profile across US and German participants. Computational modeling will formalize learning strategies, quantifying trade-offs between prior knowledge and learning. Finally, prototype learning tasks will be adapted for fMRI. Both univariate and multivariate approaches will assess how brain systems encode profiles and integrate prior expectations with learning. This integrative framework advances understanding of social knowledge structures at both dimensional and person levels, while providing shareable stimulus sets, analytic pipelines, and normative computational models for future research, including applications to clinical populations with social learning deficits.



 

MODELING / THEORY


Unifying equivalences across unsupervised learning, network science, and imaging/network neuroscience
Mikail Rubinov,
Departments of Biomedical Engineering, Computer Science, and Psychology, Vanderbilt University
Note: Not able to attend
Abstract: Modern scientific fields face the challenge of integrating a wealth of data, analyses, and results. We recently showed that a neglect of this integration can lead to circular analyses and redundant explanations. Here, we help advance scientific integration by describing equivalences that unify diverse analyses of datasets and networks. We describe equivalences across analyses of clustering and dimensionality reduction, network centrality and dynamics, and popular models in imaging and network neuroscience. First, we equate foundational objectives across unsupervised learning and network science (from k-means to modularity to UMAP), fuse classic algorithms for optimizing these objectives, and extend these objectives to simplify interpretations of popular dimensionality reduction methods. Second, we equate basic measures of connectional magnitude and dispersion with six measures of communication, control, and diversity in network science and network neuroscience. Third, we describe three semi-analytical vignettes that clarify and simplify the interpretation of structural and dynamical analyses in imaging and network neuroscience. We illustrate our results on example brain-imaging data and provide abct, an open multi-language toolbox that implements our analyses. Together, our study unifies diverse analyses across unsupervised learning, network science, imaging neuroscience, and network neuroscience.
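
As a small, concrete companion to the clustering and modularity objectives mentioned above (an illustration only, not the paper's formal equivalences or the abct toolbox), the two classical objectives can be evaluated side by side on a toy community-structured graph:

    import networkx as nx
    import numpy as np
    from sklearn.cluster import KMeans

    # k-means inertia on a spectral embedding vs. Newman modularity of the
    # induced partition, on a planted-partition toy graph.
    G = nx.planted_partition_graph(l=3, k=20, p_in=0.4, p_out=0.02, seed=0)
    A = nx.to_numpy_array(G)

    vals, vecs = np.linalg.eigh(A)                # embed nodes with leading eigenvectors
    embedding = vecs[:, -3:]

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embedding)
    communities = [set(map(int, np.where(km.labels_ == c)[0])) for c in range(3)]

    print("k-means inertia:", round(km.inertia_, 3))
    print("modularity:", round(nx.algorithms.community.modularity(G, communities), 3))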

Using Ensemble Modeling to Predict Synaptic Connections and Neural Activity from Pretectal Responses to Optokinetic Stimuli
James Fitzgerald
, Northwestern University
Abstract: Quantitatively linking synaptic connectivity and neuronal activity is fundamental to understanding the brain as a neural network. With recent advances in large-scale neural activity recordings and synapse-resolution circuit reconstructions, we can now generate many candidate models that link observed neural dynamics to synaptic connectivity. However, understanding how well various neural network model classes relate to and predict observed neural activity with structural connections remains a significant challenge in systems and computational neuroscience. Here we address this challenge in the context of optic flow processing in the pretectum of larval zebrafish, which is essential for animals to estimate self-motion and navigate their environment. We specifically combined an ensemble modeling framework with a function-linked (FuL) connectomics dataset, which pairs calcium imaging of pretectal neurons during visual motion stimulation with electron microscopy-based reconstruction of synaptic connectivity, to predict circuit structure from function. Pretectal neurons were grouped into functional response types using several classification schemes. The retina-pretectum circuit was modeled as a recurrent neural network with feedforward retinal input that accounts for the fluorescence responses of the response types. We explored several methods for predicting synaptic connections and the resulting neuronal activity. Most simply, we optimized synaptic weights using L2-norm minimization (W-min). To evaluate the robustness of individual synaptic predictions, we also computed the critical weight norm (W-crit), which quantifies the consistency of each synapse’s weight sign across the ensemble of solutions by finding the smallest weight norm permitting synapse sign ambiguity. We then used similar methods to make activity predictions based on W-min or ensemble-modeling-based W-crit calculations. Activity predictions ranked by W-crit consistently identified accurate synaptic contributions among the top three to five candidates across all response types, outperforming W-min-based predictions. Overall, our approach yielded accurate functional predictions and generated proof-of-concept connectivity hypotheses that we are working to refine but that can already be tested with the FuL connectomics dataset. These findings demonstrate the utility of ensemble modeling for linking structure to function in neural circuits and establish a framework for future structural validations. This new project first received CRCNS funding in 2025.
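
A stripped-down analogue of the minimum-norm (W-min) step can be written in a few lines: for a linear rate network at steady state, r = W r + u, each stimulus condition contributes one linear constraint per neuron, and np.linalg.lstsq returns the minimum-L2-norm weights when the system is underdetermined. The data below are synthetic and the linearization is a simplification of the model described above.

    import numpy as np

    rng = np.random.default_rng(7)
    n_types, n_conditions = 12, 5
    R = rng.normal(size=(n_types, n_conditions))       # response-type activity per condition
    U = rng.normal(size=(n_types, n_conditions))       # feedforward (retinal) drive

    # Solve W @ R = R - U for W; with fewer conditions than unknowns per row,
    # lstsq returns the minimum-norm solution.
    W_min, *_ = np.linalg.lstsq(R.T, (R - U).T, rcond=None)
    W_min = W_min.T

    print("residual:", round(float(np.linalg.norm(W_min @ R - (R - U))), 6))
    print("weight norm:", round(float(np.linalg.norm(W_min)), 3))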

Advances in Dynamic Mode Decompositions for Modeling Unknown Dynamical Systems
Joel Rosenfeld
, University of South Florida
Abstract: We will discuss how to achieve a pointwise convergent DMD algorithm using Koopman generators (or Liouville operators). This will leverage a tool that embeds trajectory information into a function within a reproducing kernel Hilbert space called an occupation kernel. This talk will compactify the Koopman generator in two different ways, and will also introduce a new dynamic operator, the Liouville weighted composition operator. These pointwise convergence results exemplify the advantages gained by taking this perspective over reproducing kernel Hilbert spaces.

Then we will give an abstract perspective on the Koopman-based learning problem, where we will see that it shares many features with much older operator learning methods that go back at least as far as Weierstrass. Utilizing this new framework, we will investigate alternative approaches to learning dynamical systems with operators, as well as examine operator-based algorithms for resolving other inverse problems in AI and elsewhere.
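
For readers less familiar with DMD, a plain exact-DMD baseline (the standard algorithm, not the Koopman-generator/occupation-kernel approach of the talk) fixes the notation: snapshot pairs, a truncated SVD, a reduced operator, and DMD eigenvalues and modes.

    import numpy as np

    def exact_dmd(X, Y, rank):
        """Standard exact DMD from snapshot pairs x_k -> y_k (columns of X and Y)."""
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
        A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # reduced operator
        eigvals, W = np.linalg.eig(A_tilde)
        modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
        return eigvals, modes

    # Snapshots of a damped rotation; DMD recovers its eigenvalues.
    theta, decay = 0.1, 0.99
    A_true = decay * np.array([[np.cos(theta), -np.sin(theta)],
                               [np.sin(theta),  np.cos(theta)]])
    traj = [np.array([1.0, 0.0])]
    for _ in range(50):
        traj.append(A_true @ traj[-1])
    data = np.array(traj).T
    X, Y = data[:, :-1], data[:, 1:]

    eigvals, _ = exact_dmd(X, Y, rank=2)
    print("DMD eigenvalues: ", np.round(eigvals, 4))
    print("true eigenvalues:", np.round(np.linalg.eigvals(A_true), 4))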

Active Sensing in a Distorted World
Tatyana Sharpee
, Salk Institute for Biological Studies
Abstract: Perception does not provide a veridical copy of the external world; instead, it systematically distorts sensory information in ways that appear optimized for behavior. In this talk, I will examine several forms of perceptual distortion—focusing primarily on vision and proprioception—and show how they are coordinated to facilitate efficient movement under time constraints. Rather than being detrimental, these distortions enhance the brain’s ability to plan and execute actions when sensory processing and motor control must operate rapidly and with limited resources.
From a theoretical perspective, principles of optimal control predict that movement trajectories follow geodesics in hyperbolic spaces, reflecting the curved structure of the underlying control manifold. When this framework is applied to spatial perception, empirical analyses reveal that the brain’s internal manifold of perceived space expands with experience and learning. This expansion enhances sensitivity along task-relevant dimensions, suggesting that the geometry of perception is both adaptive and fundamentally non-Euclidean.
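
For reference, hyperbolic geometry of this kind is commonly operationalized with the Poincaré-ball distance; the snippet below states only that generic formula and is not the specific model presented in the talk.

    import numpy as np

    def poincare_distance(x, y):
        """Hyperbolic distance between two points inside the unit (Poincare) ball."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        num = 2.0 * np.sum((x - y) ** 2)
        den = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
        return np.arccosh(1.0 + num / den)

    # Near the boundary of the ball, equal Euclidean steps span rapidly growing
    # hyperbolic distances -- an "expansion" of the represented space.
    print(poincare_distance([0.0, 0.0], [0.1, 0.0]))   # ~0.2
    print(poincare_distance([0.8, 0.0], [0.9, 0.0]))   # considerably larger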

Planar, Spiral, and Concentric Traveling Waves Distinguish Cognitive States in Human Memory
Anup Das
, Columbia University
Abstract: A fundamental challenge in neuroscience is explaining how widespread brain regions flexibly interact to support behaviors. We hypothesize that traveling waves of oscillations are a key mechanism of neural coordination, such that they propagate across the cortex in distinctive patterns that control how different regions interact. To test this hypothesis, we used direct brain recordings from humans performing multiple memory experiments and an analytical framework that flexibly measures the propagation patterns of traveling waves. We found that traveling waves propagated along the cortex in not only plane waves, but also spirals, sources and sinks, and more complex patterns. The propagation patterns of traveling waves correlated with novel aspects of behavior, with specific wave shapes reflecting particular cognitive processes and even individual remembered items. Our findings suggest that large-scale cortical patterns of traveling waves reveal the spatial organization of cognitive processes in the brain and may be relevant for neural decoding.
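
A minimal version of the plane-wave portion of such an analysis (the full framework additionally classifies spirals, sources, and sinks) fits instantaneous phase against electrode position; the grid, frequency, and wave vector below are placeholders, not the patient recordings.

    import numpy as np
    from scipy.signal import hilbert

    fs, f = 500.0, 8.0                                   # sample rate, oscillation frequency (Hz)
    t = np.arange(0.0, 2.0, 1.0 / fs)
    xs, ys = np.meshgrid(np.arange(8), np.arange(8))
    coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

    true_k = np.array([0.25, 0.10])                      # wave vector (rad per electrode)
    signals = np.cos(2 * np.pi * f * t[:, None] - coords @ true_k)   # time x channels

    phase = np.angle(hilbert(signals, axis=0))           # instantaneous phase per channel
    ref = len(coords) // 2                               # reference electrode
    rel = np.angle(np.exp(1j * (phase - phase[:, [ref]])).mean(axis=0))

    # Least-squares fit of relative phase against electrode position recovers k.
    design = coords - coords[ref]
    k_hat, *_ = np.linalg.lstsq(design, -rel, rcond=None)
    print("estimated wave vector:", np.round(k_hat, 2), "true:", true_k)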



 

NEUROMORPHIC / TECHNOLOGY


Neuromorphic Simulation of Drosophila Melanogaster Brain Connectome on Loihi 2
Felix Wang
, Sandia National Laboratories
Abstract: We demonstrate the first-ever nontrivial, biologically realistic connectome simulated on neuromorphic computing hardware. Specifically, we implement the whole-brain connectome of the adult Drosophila melanogaster (fruit fly) from the FlyWire Consortium containing 140K neurons and 50M synapses on the Intel Loihi 2 neuromorphic platform. This task is particularly challenging due to the characteristic connectivity structure of biological networks. Unlike artificial neural networks and most abstracted neural models, real biological circuits exhibit sparse, recurrent, and irregular connectivity that is poorly suited to conventional computing methods intended for dense linear algebra. Though neuromorphic hardware is architecturally better suited to discrete event-based biological communication, mapping the connectivity structure to frontier systems still faces challenges from low-level hardware constraints, such as fan-in and fan-out memory limitations. We describe solutions to these challenges that allow for the full FlyWire connectome to fit onto 12 Loihi 2 chips. We statistically validate our implementation by comparing network behavior across multiple reference simulations. Significantly, we achieve a neuromorphic implementation that is orders of magnitude faster than numerical simulations on conventional hardware, and we also find that performance advantages increase with sparser activity. These results affirm that today's scalable neuromorphic platforms are capable of implementing and accelerating biologically realistic models --- a key enabling technology for advancing neuro-inspired AI and computational neuroscience.
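
For orientation, the kind of conventional reference simulation that such neuromorphic results are validated against can be sketched as a sparse-matrix leaky integrate-and-fire loop. The random network below is a small stand-in, not the FlyWire connectome, and this is not the Loihi 2 implementation.

    import numpy as np
    from scipy.sparse import random as sparse_random

    rng = np.random.default_rng(8)
    n = 5000
    # Small random stand-in for a sparse, recurrent, irregular connectome.
    W = sparse_random(n, n, density=0.002, random_state=8,
                      data_rvs=lambda size: rng.normal(0.0, 0.3, size)).tocsr()

    dt, tau, v_th = 1.0, 20.0, 1.0                     # ms, ms, threshold
    v = np.zeros(n)
    spikes = np.zeros(n, dtype=bool)
    rates = []
    for step in range(200):
        drive = 0.08 + W @ spikes.astype(float) + 0.05 * rng.normal(size=n)
        v += dt * (-v / tau) + drive                   # leaky integration
        spikes = v >= v_th
        v[spikes] = 0.0                                # reset after a spike
        rates.append(spikes.mean())
    print("mean fraction of neurons spiking per step:", round(float(np.mean(rates)), 4))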

PARADIGM: Programmable, Analog, and Reconfigurable Active Dendrites Implementing Gain Modulation
Suma Cardwell
, Sandia National Laboratories
Abstract: PARADIGM aims to investigate the critical role of dendrites in neuronal processing by translating dendritic functionality into neuromorphic analog dendrites. Active dendritic processing is a fundamental characteristic of biological neurons that we hypothesize must be effectively emulated in silicon to harness the brain-like computational power and efficiency on a chip. Gain modulation is a fundamental computational principle in the central nervous system. Our objective is to implement biologically inspired programmable, analog, and reconfigurable active dendrites in neuromorphic hardware to effectively perform gain modulation.

To implement neuromorphic dendrites, we explore circuits using subthreshold analog field effect transistors (FETs) and ferroelectric field effect transistors (FeFETs), emerging devices known for their fast operation speeds, high density, low power consumption, and non-destructive readout capabilities. While both analog transistors and FeFETs have previously been utilized to model biological neurons and synapses, this project will investigate their application in modeling biological dendrites for gain modulation mechanisms. By investigating the diverse mechanisms of active dendritic processing, the project aims to uncover insights into how biological neurons achieve complex signal integration and processing. By advocating for a dendrite-centric neuromorphic paradigm, the project challenges traditional models of neuromorphic computation and proposes a more biologically relevant framework that could enhance the computational expressivity and efficiency of neuromorphic systems, beyond what is possible in equivalent conventional artificial neural networks (ANNs) or spiking neural networks (SNNs).

The anticipated outcomes may yield significant advances in energy-efficient computing. This interdisciplinary effort will require co-design and collaboration, ultimately contributing to advances in understanding and replicating brain-like processing and to improved computational models of neural function, with applications spanning edge computing, artificial intelligence/machine learning (AI/ML), and bio-inspired algorithms.
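
For readers unfamiliar with the term, multiplicative gain modulation can be stated in a few lines; the snippet below is the textbook abstraction of the principle, not the FeFET dendrite circuit proposed here.

    import numpy as np

    def rate(drive, gain=1.0, threshold=0.2):
        """Rectified-linear neuron whose slope (gain) is set by a modulatory
        dendritic input -- the textbook abstraction of gain modulation."""
        return gain * np.maximum(drive - threshold, 0.0)

    drive = np.linspace(0.0, 1.0, 6)
    for g in (0.5, 1.0, 2.0):
        print(f"gain {g}:", np.round(rate(drive, gain=g), 2))
    # The modulatory input rescales the slope of the response curve without
    # shifting its threshold -- a multiplicative, not additive, interaction.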
