
Qualcomm/Brain Corporation/INC Computational Neuroscience Seminar Series

The Institute maintains a lively seminar series that has brought to the campus distinguished researchers working at the forefront of neural computation. The weekly program attracts an audience from the campus, industry, and the general public. Because of the program's popularity, INC now videotapes the lectures and maintains a videotape library for its members' use; over 70 lectures are on file in the INC library.

If you would like to subscribe to the INC Seminar/Talks mailing list, click here.


Matthias Kümmerer: Jumping ahead from 1/3 to 1/2 of understanding image based saliency (05/11/2015)


Affiliation:

Werner Reichardt Centre for Integrative Neuroscience
Max Planck Institute for Biological Cybernetics, Tübingen
http://bethgelab.org/


Date: Monday, May 11, 2015

Time: 11:00am - Noon

Location: Fung Auditorium, Powell-Focht Bioengineering Building, University of California San Diego (Map)

 

Title: Jumping ahead from 1/3 to 1/2 of understanding image based saliency

Abstract: Among the wide range of complex factors driving where people look, the properties of an image that are predictive for fixations under free viewing conditions have been studied most extensively. Here we frame saliency models probabilistically as point processes, allowing the calculation of log-likelihoods and bringing saliency evaluation into the domain of information theory. We compare the information gain of all high-performing state-of-the-art models to a gold standard and find that only one third of the explainable spatial information is captured. Thus, contrary to previous assertions, purely spatial saliency remains a significant challenge. Our probabilistic approach also offers a principled way of understanding and reconciling much of the disagreement between existing saliency metrics. Finally, we present a novel way of reusing existing neural networks that have been pre-trained on the task of object recognition in models of fixation prediction. Using the well-known network "AlexNet" developed by Krizhevsky et al., 2012, we come up with a new saliency model, "Deep Gaze I", that accounts for high-level features like objects and popout. It significantly outperforms previous state-of-the-art models on the MIT Saliency Benchmark and explains more than half of the explainable information.
Joint work with Thomas Wallis, Lucas Theis, and Matthias Bethge.
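
The information-gain metric in the abstract can be illustrated in a few lines of code. The sketch below is an illustration only, not the authors' code; `information_gain`, the toy center-bias density, and the simulated fixations are all invented here. It scores a saliency model by its average log-likelihood advantage, in bits per fixation, over a baseline density.

```python
# Toy illustration of information gain: average log2-likelihood advantage of a
# saliency model over a baseline density, in bits per fixation. All names and data
# here are invented for the example.
import numpy as np

def information_gain(model_density, baseline_density, fixations):
    """Mean log2-likelihood difference at the fixated pixels.
    Densities are 2-D arrays that each sum to 1; fixations are (row, col) pairs."""
    fixations = np.asarray(fixations)
    r, c = fixations[:, 0], fixations[:, 1]
    return float(np.mean(np.log2(model_density[r, c]) - np.log2(baseline_density[r, c])))

rng = np.random.default_rng(0)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]

# A center-biased model density versus a uniform baseline.
center_bias = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * 15.0 ** 2))
center_bias /= center_bias.sum()
uniform = np.full((h, w), 1.0 / (h * w))

# Simulated fixations clustered near the image center, so the model should win.
fix = [(int(np.clip(rng.normal(h / 2, 10), 0, h - 1)),
        int(np.clip(rng.normal(w / 2, 10), 0, w - 1))) for _ in range(200)]
print("information gain (bits/fixation):", round(information_gain(center_bias, uniform, fix), 3))
```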

 

Host: Terry Sejnowski, terry@salk.edu

 

 

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Surya Ganguli: The Functional Contribution of Synaptic Complexity to Learning and Memory (04/06/2015)


Affiliation:
Neural Dynamics and Computation Lab
Stanford University
http://ganguli-gang.stanford.edu/

Date: Monday, April 6, 2015

Time: 4:00pm-5:00pm

Location: Fung Auditorium, Powell-Focht Bioengineering Building, UC San Diego (Map)

 

Title: The Functional Contribution of Synaptic Complexity to Learning and Memory

Abstract: An incredible gulf separates theoretical models of synapses, often described solely by a single scalar value denoting the size of a postsynaptic potential, from the immense complexity of molecular signaling pathways underlying real synapses. To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states. Moreover, theoretical considerations alone demand such an expansion; network models with scalar synapses assuming finite numbers of distinguishable synaptic strengths have strikingly limited memory capacity. This raises the fundamental question, how does synaptic complexity give rise to memory? To address this, we develop new mathematical theorems elucidating the relationship between the structural organization and memory properties of complex synapses that are themselves molecular networks. Moreover, in proving such theorems, we uncover a framework, based on first passage time theory, to impose an order on the internal states of complex synaptic models, thereby simplifying the relationship between synaptic structure and function.

We also apply our theories to model the time course of learning gain changes in the rodent vestibular oculomotor reflex, both in wildtype mice, and knockout mice in which cerebellar long term depression is enhanced; our results indicate that synaptic complexity is necessary to explain diverse behavioral learning curves arising from interactions of prior experience and enhanced LTD.
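
As a rough illustration of why internal synaptic states matter for memory, the toy model below (a generic multistate Markov-chain synapse of my own construction, not the cascade models or theorems discussed in the talk) compares how quickly a memory trace decays when each synapse has two internal states versus eight, under ongoing random potentiation and depression.

```python
# A generic multistate synapse toy (not the speaker's model): each synapse is a
# Markov chain over internal states; the lower half of the states expresses a weak
# efficacy (0), the upper half a strong efficacy (1). Potentiation moves the state
# up by one, depression down by one.
import numpy as np

def memory_signal(n_states, n_syn=20000, n_steps=200, seed=1):
    """Potentiate every synapse at t=0, then apply random potentiation/depression
    each step and track the mean efficacy (the decaying memory trace)."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, n_states, size=n_syn)        # random initial internal states
    state = np.minimum(state + 1, n_states - 1)          # the "memory" event at t = 0
    signal = np.empty(n_steps)
    for t in range(n_steps):
        signal[t] = (state >= n_states // 2).mean()      # fraction of strong synapses
        step = rng.choice([-1, 1], size=n_syn)           # ongoing random plasticity
        state = np.clip(state + step, 0, n_states - 1)
    return signal

for m in (2, 8):
    s = memory_signal(m)
    print(f"{m} internal states: signal after 1 step = {s[1]:.3f}, after 100 steps = {s[100]:.3f}")
```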

 

 

Host: Angela Bruno, ambruno@ucsd.edu

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Geoffrey Hinton: "Dark Knowledge" (10/08/2014)


Affiliation:
Computer Science Department, University of Toronto, Canada
Distinguished Researcher, Google Inc.

Date: Wednesday, October 08, 2014

Time: 4:00pm-5:00pm

Location:
University of California, San Diego
San Diego Supercomputer Center (Auditorium- Room B211)
10100 John Jay Hopkins Drive
San Diego, CA 92093-0523

 

Title: "Dark Knowledge"

Abstract: A simple way to improve classification performance is to average the predictions of a large ensemble of different classifiers. This is great for winning competitions but requires too much computation at test time for practical applications such as speech recognition. In a widely ignored paper in 2006, Caruana and his collaborators showed that the knowledge in the ensemble could be transferred to a single, efficient model by training the single model to mimic the log probabilities of the ensemble average. This technique works because most of the knowledge in the learned ensemble is in the relative probabilities of extremely improbable wrong answers. For example, the ensemble may give an image of a BMW a probability of one in a billion of being a garbage truck but this is still far greater (in the log domain) than its probability of being a carrot. This "dark knowledge", which is practically invisible in the class probabilities, defines a similarity metric over the classes that makes it much easier to learn a good classifier.

I will describe a new variation of this technique called "distillation" and will show some surprising examples in which good classifiers over all of the classes can be learned from data in which some of the classes are entirely absent, provided the target probabilities come from an ensemble that has been trained on all of the classes. I will also show how this technique can be used to improve a state-of-the-art acoustic model and will discuss its application to learning large sets of specialist models without overfitting. This is joint work with Oriol Vinyals and Jeff Dean.
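
The soft-target idea can be sketched in a few lines. The example below is an illustration only, not the speaker's implementation; the logits and class names are invented, and the full distillation recipe also rescales gradients by T^2 and mixes in a hard-label term. It shows how raising the softmax temperature exposes the relative probabilities of improbable classes, and how a student could be scored against the teacher's softened distribution.

```python
# Soft targets and a distillation-style loss (illustration only; logits are invented,
# and the full recipe also scales by T**2 and adds a hard-label term).
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

classes = ["BMW", "garbage truck", "carrot"]
teacher_logits = np.array([10.0, -9.0, -25.0])    # hypothetical ensemble outputs

# Higher temperatures make the "dark knowledge" in the small probabilities visible.
for T in (1.0, 5.0, 20.0):
    p = softmax(teacher_logits, T)
    print(f"T={T:4.1f}:", {c: f"{pi:.2e}" for c, pi in zip(classes, p)})

def distillation_loss(student_logits, teacher_logits, T):
    """Cross-entropy between teacher and student temperature-softened distributions."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum()

print("loss for an untrained (uniform) student:",
      round(distillation_loss(np.zeros(3), teacher_logits, T=5.0), 3))
```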

 

Host: Terry Sejnowski

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Jonathon Shlens: "Engineering a Large Scale Vision System by Leveraging Semantic Knowledge" (10/28/2013)


Affiliation:
Google Research
http://research.google.com/search.html#q=shlens

Date: Monday, October 28, 2013

Time: 4:00pm-5:00pm

Location: Fung Auditorium, Powell-Focht Bioengineering Building, UC San Diego (Map)

 

Title: Engineering a Large Scale Vision System by Leveraging Semantic Knowledge

Abstract: Computer-based vision systems are increasingly indispensable in our modern world. Modern visual recognition systems have been limited though in their ability to identify large numbers of object categories. This limitation is due in part to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows unbounded. One remedy is to leverage data from other sources – such as text data – both to train visual models and constrain their predictions. In this talk I will present our recent efforts at Google to build a novel architecture that employs a deep neural network to identify visual objects employing both labeled image data as well as semantic information gleaned from unannotated text. I will demonstrate that this model matches state-of-the-art performance on academic benchmarks while making semantically more reasonable errors. Most importantly, I will discuss how semantic information can be exploited to make predictions about image labels not observed during training. Semantic knowledge substantially improves "zero-shot" predictions achieving state-of-the-art performance on predicting tens of thousands of object categories never previously seen by the visual model.
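
The zero-shot mechanism described above can be illustrated with a toy linear version. This is only a sketch under invented assumptions, not Google's system: the 3-D "word embeddings", the random rendering matrix, and the class names are all hypothetical. An image-to-embedding map is fit on three classes and then labels an image from a fourth class that has no training images, because its embedding lies near the predicted vector.

```python
# A toy linear zero-shot model (not Google's system): images are labeled by the
# nearest class embedding after a learned map into the semantic space.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-D "word embeddings"; a real system would learn these from text.
word_vec = {
    "dog":   np.array([1.0, 0.0, 0.0]),
    "cat":   np.array([0.0, 1.0, 0.0]),
    "horse": np.array([0.0, 0.0, 1.0]),
    "zebra": np.array([0.0, 0.6, 0.8]),   # semantically close to "horse"
}

# Toy image features: an unknown fixed linear rendering of the semantics plus noise.
A = rng.normal(size=(64, 3))
def image_of(label):
    return A @ word_vec[label] + 0.05 * rng.normal(size=64)

# Visual training data contain only dog, cat and horse images.
train_labels = ["dog", "cat", "horse"] * 50
X = np.array([image_of(w) for w in train_labels])
Y = np.array([word_vec[w] for w in train_labels])
M, *_ = np.linalg.lstsq(X, Y, rcond=None)         # least-squares image-to-embedding map

def predict(feature):
    v = feature @ M
    return max(word_vec, key=lambda w: np.dot(v, word_vec[w]) / np.linalg.norm(word_vec[w]))

# Zero-shot test: a zebra image, a class never seen during visual training.
print(predict(image_of("zebra")))                 # expected to print "zebra"
```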

Host: Gabriel Silva, gsilva@ucsd.edu

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Michael Schmuker: "Multivariate Data Classification on Neuromorphic Hardware" (10/14/2013)


Affiliation:
Bernstein Center for Computational Neuroscience Berlin
http://biomachinelearning.net/

Date: Monday, October 14, 2013

Time: 4:00pm-5:00pm

Location: Fung Auditorium, Powell-Focht Bioengineering Building, UC San Diego (Map)

 

Title: Multivariate Data Classification on Neuromorphic Hardware

Abstract: Computational neuroscience has uncovered a number of computational principles employed by nervous systems. At the same time, recent neuromorphic hardware provides a fast and efficient substrate for implementations of complex neuronal networks. The current challenge for practical neuromorphic computing applications lies in the identification and implementation of functional algorithms solving real-world computing problems. Taking inspiration from the olfactory system of insects we constructed a generic spiking neural network for the classification of multivariate data, a common problem in signal and data analysis. Our network combines the parallel processing of multiple input dimensions, their decorrelation through lateral inhibition, and supervised learning of data classification. The network runs on an accelerated mixed-signal neuromorphic hardware system. When challenged with real world data sets the network achieves classification performance on the same level as a Naive Bayes classifier. Analysis of the network dynamics shows that stable decisions in output neuron populations are reached within less than 100ms of biological time, which compares well to the time-to-decision reported for the insect nervous system. The network tolerates the variability of neuronal transfer functions and trial-to-trial variation that is inevitably present on the hardware system. Our work provides a proof of principle for the successful implementation of a functional spiking neural network on a configurable neuromorphic hardware system that can readily be applied to real-world computing problems.
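
A rate-based software analogue of the processing stages described above is sketched below (an illustration on synthetic data, not the spiking network or the neuromorphic hardware implementation): correlated input channels are decorrelated by lateral inhibition and then classified by a simple supervised perceptron readout.

```python
# Rate-based toy analogue of the described pipeline: lateral inhibition decorrelates
# the channels, then a supervised perceptron readout classifies. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Two classes with channel-specific patterns plus strong shared (common-mode) noise.
pattern = {0: np.array([1.0] * 5 + [0.0] * 5), 1: np.array([0.0] * 5 + [1.0] * 5)}
def make_data(n, label):
    common = rng.normal(size=(n, 1))                     # shared noise -> correlated channels
    return pattern[label] + common + 0.3 * rng.normal(size=(n, 10))

X = np.vstack([make_data(200, 0), make_data(200, 1)])
y = np.array([0] * 200 + [1] * 200)

def lateral_inhibition(X, alpha=0.9):
    """Suppress each channel by the mean activity of the other channels."""
    other = (X.sum(axis=1, keepdims=True) - X) / (X.shape[1] - 1)
    return np.maximum(0.0, X - alpha * other)

def train_perceptron(Z, y, epochs=20, lr=0.1):
    Zb = np.hstack([Z, np.ones((len(Z), 1))])            # bias column
    w = np.zeros(Zb.shape[1])
    for _ in range(epochs):
        for zi, yi in zip(Zb, y):
            w += lr * (yi - (1 if zi @ w > 0 else 0)) * zi
    return w

Z = lateral_inhibition(X)
w = train_perceptron(Z, y)
acc = np.mean(((np.hstack([Z, np.ones((len(Z), 1))]) @ w) > 0).astype(int) == y)
print(f"training accuracy after inhibition + perceptron readout: {acc:.2f}")
```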

 

Bio: Michael Schmuker studied biology and computer science in Freiburg, Germany, and Montpellier, France. In 2003 he began a PhD in cheminformatics (specializing in the chemical space of odorants) with Gisbert Schneider in Frankfurt, Germany. In 2007 he moved to Berlin for a postdoc in neuroscience with Randolf Menzel, and in 2010 he started a second postdoc with Martin Nawrot in Berlin in theoretical neuroscience and neuroinformatics. Michael is currently a PI at the Bernstein Center for Computational Neuroscience Berlin. His work focuses on sensory computation in the olfactory system and on brain-derived networks for functional neuromorphic applications.

 

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Jürgen Schmidhuber: "Universal Artificial Intelligence and Formal Theory of Fun" (07/26/2013)


Affiliation:
http://www.idsia.ch/~juergen/

Date: Friday, July 26, 2013

Coffee reception: 9:30am-10:00am

Lecture: 10:00am-11:00am

 

Location:
Irwin M. Jacobs Qualcomm Hall
5775 Morehouse Drive
San Diego, CA 92121

 

This is a free event with free parking

 

Title: "Universal Artificial Intelligence and Formal Theory of Fun"

Abstract: Universal self-improving AIs can rewrite their own software in a provably optimal way. They may not only solve externally posed tasks, but also their own self-invented tasks, to better understand the world, in line with Schmidhuber's simple Formal Theory of Fun and Creativity, which explains science, art, music & humor. The tools for implementing such AIs will be described, including the largest evolved vision-based neural network (NN) controllers to date, as well as fast, gradient-based deep/recurrent NNs that have won many recent international pattern recognition competitions.
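
The intrinsic-reward idea behind the Formal Theory of Fun can be caricatured in a few lines. In the sketch below (my own toy construction, not the speaker's formulation), the "curiosity reward" is the improvement of a trivial predictor's error on upcoming data, so a learnable but not-yet-learned stream is rewarding, while unlearnable noise and an already-mastered stream are not.

```python
# Toy "curiosity reward": the improvement in a trivial online predictor's error on
# the next observation. My own caricature, not the speaker's formulation.
import numpy as np

rng = np.random.default_rng(0)

def curiosity_reward(stream, lr=0.1):
    prediction, total = 0.0, 0.0
    for x, x_next in zip(stream[:-1], stream[1:]):
        err_before = (x_next - prediction) ** 2   # how well the next datum is predicted now
        prediction += lr * (x - prediction)       # learn from the current datum
        err_after = (x_next - prediction) ** 2    # how well it is predicted after learning
        total += err_before - err_after           # learning progress = intrinsic reward
    return total

learnable = np.full(200, 3.0) + 0.01 * rng.normal(size=200)  # regular but initially unknown
noise = rng.normal(size=200)                                  # unlearnable white noise
boring = np.zeros(200)                                        # already predicted perfectly

for name, s in [("learnable", learnable), ("noise", noise), ("boring", boring)]:
    print(f"{name:9s} total curiosity reward: {curiosity_reward(s):7.2f}")
```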

 

Bio: Professor Jürgen Schmidhuber is with the Swiss AI Lab IDSIA & USI & SUPSI (ex-TUM CogBotLab & CU). Since age 15 his main scientific ambition has been to build an optimal scientist, then retire. This is driving his research on self-improving Artificial Intelligence. His team won many international competitions and awards, and pioneered the field of mathematically rigorous universal AI and optimal universal problem solvers. He also generalized the many-worlds theory of physics to a theory of all constructively computable universes - an algorithmic theory of everything. His formal theory of creativity & curiosity & fun (1990-2010) explains art, science, music, and humor.

 

For more information e-mail:

comp.neuro.info@qti.qualcomm.com

 

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Sridevi V. Sarma: "Performance Limitations of Thalamic Relay: Insights into Thalamo-Cortical Processing, Parkinson's Disease and Deep Brain Stimulation" (10/01/2012)


Affiliation:
Assistant Professor, Institute for Computational Medicine
Department of Biomedical Engineering, Johns Hopkins University
http://sarmalab.icm.jhu.edu/

Date: Monday, October 1st, 2012

Time: 4:00 PM - 5:00 PM

Location: Fung Auditorium, Powell-Focht Bioengineering Building, UC San Diego (Map)

 

Title: "Performance Limitations of Thalamic Relay: Insights into Thalamo‐Cortical Processing, Parkinson's Disease and Deep Brain Stimulation"

Abstract: Thalamic networks in the brain are responsible for strategically filtering sensory information subject to attentional demands. For example, one can gaze at a butterfly and be completely unaware of the flowers and bushes that surround it, even though these surroundings are entirely within the subject's visual field. This occurs because visual thalamic neurons relay back to visual cortex, for perception, only the information in the visual field that the subject is attending to. How and when this relay occurs has never been precisely quantified.

In this talk, we use a biophysically based model to quantify relay of a thalamic cell as a function of its input parameters and electrophysiological properties. Specifically, we compute bounds on relay reliability and show how these bounds can explain experimentally observed patterns of neural activity in the basal ganglia (i) in health, where reliability is high; (ii) in Parkinson's disease (PD), where reliability is low; and (iii) in PD during therapeutic deep brain stimulation, where reliability is restored. Our bounds also predict different rhythms that emerge in the lateral geniculate nucleus of the thalamus during different attentional states of a cat.
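
The notion of relay reliability can be illustrated with a deliberately crude simulation (a generic leaky integrate-and-fire toy of my own, not the biophysically based model or the analytical bounds from the talk): reliability is the fraction of excitatory input pulses that evoke an output spike, and strong rhythmic inhibitory bursts, loosely standing in for parkinsonian input, reduce it.

```python
# A crude leaky integrate-and-fire relay toy (not the talk's biophysical model or
# bounds). Reliability = fraction of excitatory pulses followed by an output spike.
import numpy as np

rng = np.random.default_rng(0)
DT, T_MAX = 0.1, 2000.0                                   # ms
N = int(T_MAX / DT)
T = np.arange(N) * DT

def relay_reliability(inhibition, rate_hz=20.0, window_ms=5.0):
    inputs = rng.random(N) < rate_hz * DT / 1000.0        # Poisson excitatory pulses
    v, tau, v_th, w_exc = 0.0, 20.0, 1.0, 1.5
    spikes = np.zeros(N, dtype=bool)
    for i in range(N):
        v += DT / tau * (-v - inhibition[i]) + w_exc * inputs[i]
        if v >= v_th:
            spikes[i], v = True, 0.0                      # spike and reset
    win = int(window_ms / DT)
    hits = sum(spikes[i:i + win].any() for i in np.flatnonzero(inputs))
    return hits / max(1, int(inputs.sum()))

healthy = 0.2 * np.ones(N)                                        # weak tonic inhibition
parkinsonian = 0.2 + 1.5 * (np.sin(2 * np.pi * 0.02 * T) > 0.5)   # strong 20 Hz bursts
print("reliability, healthy-like inhibition:     ", round(relay_reliability(healthy), 2))
print("reliability, parkinsonian-like inhibition:", round(relay_reliability(parkinsonian), 2))
```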

 

If you would like to meet with the speaker, contact Todd Coleman, tpcoleman@ucsd.edu.

 

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Peter Redgrave: "Dopamine made me do it, but what did I learn?" (06/13/2012)


Affiliation:
Professor of Neuroscience, Dept. of Psychology, University of Sheffield, Sheffield, U.K.

Date: Wednesday, June 13th, 2012

Coffee reception: 9:30-10:00 a.m.

Time: 10:00-11:30 a.m.

Location:
Irwin M. Jacobs Qualcomm Hall
5775 Morehouse Drive, San Diego, CA 92121

 

Title: "Dopamine made me do it, but what did I learn?"

Abstract: There is general agreement that the basal ganglia play an important role in behavioural selection and reinforcement learning. It is also agreed that within the basal ganglia, the phasic response of midbrain dopaminergic neurones to biologically salient stimuli acts as a reinforcement signal. However, from this point there is less agreement. The majority view is that the dopamine neurones signal reward prediction errors that are used to reinforce the maximisation of future reward acquisition. Au contraire, I will propose that reinforcement learning can be split into independent processes that have been recognised by evolution in the basal ganglia's functional architecture: (i) an intrinsic dopamine-reinforced mechanism responsible for the discovery of agency and the development of novel actions; and (ii) a separate mechanism that modulates competing inputs to the basal ganglia so future selections are biased in favour of high value outcomes.
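
For readers unfamiliar with the "majority view" referred to above, the sketch below shows standard temporal-difference reward-prediction-error learning (a textbook illustration, not the speaker's alternative proposal): the phasic error signal is large when reward is unexpected and shifts toward the predictive cue as learning proceeds.

```python
# Textbook temporal-difference learning on a cue -> delay -> reward sequence: the
# prediction error ("dopamine-like" signal) moves from the reward to the cue.
import numpy as np

n_states, gamma, alpha = 5, 1.0, 0.1            # state 0 = cue, reward after state 4
V = np.zeros(n_states)

for trial in range(200):
    onset_error = gamma * V[0]                   # surprise at the unpredicted cue onset
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0
        v_next = V[s + 1] if s + 1 < n_states else 0.0
        delta = r + gamma * v_next - V[s]        # TD error
        V[s] += alpha * delta
        if s == n_states - 1:
            reward_error = delta
    if trial in (0, 199):
        print(f"trial {trial:3d}: error at cue onset = {onset_error:.2f}, "
              f"error at reward = {reward_error:.2f}")
```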

For more information e-mail comp.neuro.info@qualcomm.com

 

Host: Terry Sejnowski

 

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Sebastian Seung: Mapping the retinal connectome with EyeWire, an online community for 'citizen neuroscience' (04/11/2012)


Affiliation:
Howard Hughes Medical Institute, MIT

Date: Wednesday, April 11, 2012

Time: 2:00 PM

Location:
University of California, San Diego
San Diego Supercomputer Center (Auditorium- Room B211)
10100 John Jay Hopkins Drive
San Diego, CA 92093-0523

 

Title:"Mapping the retinal connectome with EyeWire, an online community for 'citizen neuroscience"

Abstract: According to a doctrine known as connectionism, brain function and dysfunction depend primarily on patterns of connectivity between neurons. Connectionism has been explored theoretically with mathematical models of neural networks since the 1940s. It has proved difficult to test these models through activity measurements alone. For conclusive empirical tests, information about neural connectivity is also necessary, and could be provided by new imaging methods based on serial electron microscopy. The bottleneck in using these new methods is now shifting to the data analysis problem of extracting neural connectivity from the images. Our capacity to acquire "big data" from the brain has far outpaced our ability to analyze it. My lab has been developing computational technologies to deal with this data deluge. Based on these innovations, we have recently launched EyeWire, an online community that mobilizes the public to map the retinal connectome by interacting with one another and with artificial intelligence based on machine learning. I will describe preliminary efforts to map the retinal circuits presynaptic to JAM-B and orientation-selective types of retinal ganglion cells.

If you would like to meet with the speaker, contact Todd Coleman, tpcoleman@ucsd.edu

 

Host: Terry Sejnowski

 

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Prashant Mehta: Bayesian Inference with Oscillator Models: A Possible Role of Neural Rhythms (03/05/2012)


Affiliation:
Associate Professor
Dept. of Mechanical Science & Engineering
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
http://mechse.illinois.edu/research/mehta/

Date: Monday, March 05, 2012

Time: 4:00 PM - 5:00 PM

Location: Fung Auditorium, Powell-Focht Bioengineering Building, UC San Diego (Map)

 

Title: Bayesian Inference with Oscillator Models: A Possible Role of Neural Rhythms

Abstract: Prediction is believed to be a fundamentally important computational function for any intelligent system. Bayesian inference in probability theory is a well-known mechanism to implement prediction. This has led to historical and recent interest in Bayesian inference for biological sensory systems: The Bayesian model of sensory (e.g., visual) signal processing suggests that the cortical networks in the brain encode a probabilistic 'belief' about reality. The belief state is updated based on comparison between the novel stimuli (from senses) and the internal prediction. A natural question to ask then is whether there is a rigorous methodology to implement complex forms of prediction via Bayes rule at the level of neurophysiologically plausible spiking elements? In this talk, I will provide a qualified answer to this question via coupled oscillator models. A single oscillator is a simplified model of a single spiking neuron. The coupled oscillator model solves an inference problem: The population encodes a belief state that is continuously updated (in a Bayes optimal fashion) based on noisy measurements. The methodology is described with the aid of a model problem involving estimation of a `walking gait cycle' using noisy measurements. This is joint work with several students and collaborators at the University of Illinois.
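
The estimation problem in the abstract can be made concrete with a toy filter. The sketch below uses a plain bootstrap particle filter in which every particle is an oscillator phase (a generic illustration of the gait-cycle problem, not the coupled-oscillator or feedback-particle-filter construction presented in the talk).

```python
# Bootstrap particle filter over oscillator phases (generic illustration, not the
# feedback particle filter / coupled-oscillator construction from the talk): track a
# 1 Hz "gait cycle" phase from noisy sinusoidal measurements.
import numpy as np

rng = np.random.default_rng(0)
dt, omega, n_steps, n_particles = 0.01, 2 * np.pi, 500, 300
sigma_proc, sigma_obs = 0.05, 0.5

true_phase = 0.0
particles = rng.uniform(0, 2 * np.pi, n_particles)
errors = []
for _ in range(n_steps):
    true_phase = (true_phase + omega * dt) % (2 * np.pi)
    y = np.sin(true_phase) + sigma_obs * rng.normal()                 # noisy measurement
    # Propagate each oscillator particle, weight by likelihood, resample.
    particles = (particles + omega * dt + sigma_proc * rng.normal(size=n_particles)) % (2 * np.pi)
    w = np.exp(-0.5 * ((y - np.sin(particles)) / sigma_obs) ** 2)
    particles = particles[rng.choice(n_particles, n_particles, p=w / w.sum())]
    # Circular posterior-mean phase and its wrapped error against the truth.
    est = np.angle(np.mean(np.exp(1j * particles)))
    errors.append(abs(np.angle(np.exp(1j * (est - true_phase)))))
print("mean absolute phase error over the last 100 steps:",
      round(float(np.mean(errors[-100:])), 3), "rad")
```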

 

Bio: Prashant Mehta is an Associate Professor in the Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign. He received his Ph.D. in Applied Mathematics from Cornell University in 2004. Prior to joining Illinois, he was a Research Engineer at the United Technologies Research Center (UTRC). His research interests are at the intersection of dynamical systems and control theory, including mean-field games, model reduction, and nonlinear control. He has received several awards including an Outstanding Achievement Award for his research contributions at UTRC, several Best Paper awards together with his students at Illinois, and numerous teaching and advising honors at Illinois.

 

Host: Todd Coleman, tpcoleman@ucsd.edu

 

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Ryan T. Canolty: Cross-level coupling between single neurons and large-scale LFP patterns in multi-scale brain networks (11/28/2011)


Affiliation:
Helen Wills Neuroscience Institute &
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
http://knightlab.berkeley.edu/profile/rcanolty/

Date: Monday, November 28, 2011

Time: 4:00 PM - 5:00 PM

Location: Fung Auditorium, Powell-Focht Bioengineering Building, UC San Diego (Map)

 

Title: Cross-level coupling between single neurons and large-scale LFP patterns in multi-scale brain networks

 

Abstract: Brains exhibit structure across a variety of different scales – from single neurons (micro-scale) to functional areas (meso-scale) to large-scale cortical networks (macro-scale). Furthermore, the different levels of multi-scale brain networks often interact with each other – that is, activity and information at one level can influence other levels, a phenomenon termed cross-level coupling (CLC). Neuronal oscillations have been suggested as a possible mechanism for dynamic cross-level coordination, but the functional role of oscillations in multi-scale networks remains unclear. We investigated CLC by recording local field potentials (LFPs) and single unit activity using multiple microelectrode arrays in several brain areas of the macaque, and then modeled the dependence of spike timing on the full pattern of proximal and distal LFP activity. We show that spiking activity in single neurons and neuronal ensembles depends on dynamic patterns of oscillatory phase coupling between multiple brain areas, in addition to the effects of proximal LFP phase and amplitude. Neurons that prefer similar patterns of LFP phase coupling exhibit similar changes in spike rates, potentially providing a basic mechanism to bind different neurons together into coordinated cell assemblies. Surprisingly, CLC-based spike rate correlations are independent of inter-neuron distance – that is, two neurons in opposite hemispheres may prefer the same global LFP pattern and exhibit correlated rate changes, while two neurons recorded on the same electrode may prefer different global LFP patterns and exhibit uncorrelated spiking activity. CLC patterns correlate with behavior and neural function, remain stable over multiple days, and show reversible, task-dependent shifts when engaging in multiple tasks. These findings suggest that neuronal oscillations enable selective and dynamic control of distributed functional cell assemblies, supporting the hypothesis that CLC may play a key role in the functional reorganization of dynamic brain networks.
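
A single-electrode toy version of spike-to-field coupling is sketched below (an illustration only, far simpler than the multi-area, multi-electrode analysis described in the abstract): the LFP phase at each spike time is extracted with the Hilbert transform, and the spikes' phase-locking value and preferred phase are computed. Real analyses would band-pass filter the LFP first; the toy signal here is already narrowband.

```python
# Toy spike-LFP phase-locking analysis on simulated data (single channel only).
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs, f_lfp, duration = 1000.0, 8.0, 60.0                    # Hz, Hz, seconds
t = np.arange(int(fs * duration)) / fs
lfp = np.sin(2 * np.pi * f_lfp * t) + 0.3 * rng.normal(size=t.size)

# A neuron whose firing probability is modulated by the 8 Hz phase.
phase_true = (2 * np.pi * f_lfp * t) % (2 * np.pi)
rate = 5.0 * (1.0 + 0.8 * np.cos(phase_true - np.pi / 2))  # spikes/s
spike_idx = np.flatnonzero(rng.random(t.size) < rate / fs)

# Instantaneous LFP phase from the analytic signal, then the phase-locking value.
lfp_phase = np.angle(hilbert(lfp))
spike_phases = lfp_phase[spike_idx]
plv = np.abs(np.mean(np.exp(1j * spike_phases)))
preferred = np.angle(np.mean(np.exp(1j * spike_phases)))
print(f"{spike_idx.size} spikes, phase-locking value = {plv:.2f}, "
      f"preferred LFP phase = {preferred:.2f} rad")
```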

 

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com


Simon Thorpe: Neocortical Dark Matter, Grandmother Cells and the puzzle of extremely long-term memories (11/05/2010)


Affiliation:
CNRS Research Director,
Brain and Cognition Research Center (CERCO), Toulouse, France

Date: Friday, November 05, 2010

Time: 3:30 PM - 7:00 PM (PT)

Location: (Directions)
San Diego Supercomputer Center
10100 John Jay Hopkins Drive
San Diego, CA 92037

Schedule:
3:30pm - Refreshments & reception / 5:00 - 7:00pm - Lecture

 

Title: "Neocortical Dark Matter, Grandmother Cells and the puzzle of extremely long-term memories."

Abstract: Humans can recognize images and sounds that they have not seen or heard for decades. How is this possible, given that the molecules from which the brain is made have presumably all been replaced many times over? Presumably, very long-term memories are stored in patterns of synaptic connectivity, but most models of associative memory based on distributed representations would have difficulty maintaining memories intact for so long, because the patterns would tend to be overwritten by incoming stimuli. Here I propose that such long-term memories could depend on highly selective cortical neurons that essentially never fire, allowing them to remain selective over very long periods of time. I will discuss a range of theoretical, simulation, and experimental data supporting the proposal that a substantial proportion of neocortical neurons could in reality constitute dark matter - effectively invisible to conventional neurophysiological techniques.
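
The overwriting problem mentioned in the abstract can be demonstrated with a standard toy model (a Hopfield-style associative memory of my own construction, not the speaker's proposal): recall of the first stored pattern degrades as later patterns keep being written into the same distributed weights.

```python
# Hopfield-style toy of memory overwriting: recall of an old pattern degrades as
# more patterns are stored in the same distributed weights.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                             # neurons
first = rng.choice([-1, 1], size=n)                 # the old memory we care about

def recall_overlap(n_later):
    """Store `first` plus n_later random patterns by Hebbian outer products, then
    recall `first` from a 10%-corrupted cue and report the fraction of correct bits."""
    patterns = [first] + [rng.choice([-1, 1], size=n) for _ in range(n_later)]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    cue = first.astype(float)
    flip = rng.choice(n, size=n // 10, replace=False)
    cue[flip] *= -1
    for _ in range(10):                             # synchronous recall iterations
        cue = np.sign(W @ cue)
        cue[cue == 0] = 1.0
    return float(np.mean(cue == first))

for k in (0, 10, 30, 60, 120):
    print(f"{k:3d} later patterns stored -> recall accuracy {recall_overlap(k):.2f}")
```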

 

Organized by:
Institute for Neural Computation: http://inc.ucsd.edu
Institute of Engineering in Medicine: http://iem.ucsd.edu


Sponsored by:
Qualcomm: http://www.qualcomm.com
Brain Corporation: http://www.braincorporation.com

