
Faculty Spotlight

(12/02/2014) Tzyy-Ping Jung elevated to IEEE Fellow for contributions to blind source separation for biomedical applications

Recognizing the achievements of its members is an important part of the mission of the IEEE. Each year, following a rigorous evaluation procedure, the IEEE Fellow Committee recommends a select group of recipients for elevation to IEEE Fellow. Less than 0.1% of voting members are selected annually for this member grade elevation.

It is my great pleasure to inform you that the IEEE Board of Directors, at its November 2014 meeting, elevated you to IEEE Fellow, effective 1 January 2015, with the following citation:

(for contributions to blind source separation for biomedical applications)

 

Sincerely,

 

J. Roberto B. de Marca, FIEEE
IEEE President and CEO

(01/27/2014) San Diego Union Tribune interviews Marian Bartlett, co-director of INC's MPLab.

Innovator Talks About Getting University Work Into Business

By U-T San Diego Jan. 27, 2014


TTO: Those letters often strike fear into the hearts of scientists and venture capitalists. They stand for Tech Transfer Office, the place you have to negotiate if you want to commercialize technology developed at a university or research institute.

San Diego's future success in the innovation economy depends in part on mining these new technologies. So let's meet Marian Bartlett, co-founder and lead researcher for Emotient, winner of Connect's 2013 Most Innovative New Products Award in the software category for FACET, which translates facial expressions into actionable information, enabling companies to create new levels of customer engagement.

Q: Did you start out to be a scientist?

A: I had some preconceived notions that women and girls didn't like math. Then in college I realized that I was good at it and became a math major. After college, I wanted to use my math skills with something human oriented so I contacted everyone in Boston in the visual perception field, and I was hired as a research assistant at MIT. Then I came to UCSD, where I earned a Ph.D. in cognitive science and psychology in 1998. (Her thesis became the basis of Emotient.)

 

Q: When did you first learn about business?

A: At UCSD, I was fortunate that one of my professors was Robert Hecht-Nielsen, the co-founder of HNC Software. (HNC developed software used by the credit card and insurance industries to detect fraud and was purchased by Fair, Isaac and Co. in 2002.) Robert bridged academia and the business world, and he taught that in his classes. He had us write SBIR (Small Business Innovation Research) proposals as part of our class on neural networks. Also, my graduate adviser, Terry Sejnowski, had previously started a business, Softmax, with some of his former postdocs that was later purchased by Qualcomm. So I was able to see firsthand how novel research can transform into a successful business.

 

Q: Were other women in the program?

A: In experimental psychology just under half (of the) students were women. When I moved over to the neural network machine-learning lab at Salk, I was the only one. It was aggressive, exciting and motivating.

 

Q: What was it like being one of a few women?

A: On rare occasions I thought that some people perceived my male graduate peers as being smarter or more capable, but I also benefited from being one of the few women in machine learning, so people remembered me.

 

Q: When did you start a company?

A: In 2008 with four colleagues, we started Machine Perception Technologies, which released a toolbox for the academic community called CERT — computer expression recognition toolbox. Silicon Valley venture capitalist Seth Neiman, a senior partner at Crosspoint Venture Partners, tried the demo on the website. In 2012, he became our lead investor, and we changed the company name to Emotient.

 

Q: Did you leave your research professor position at UCSD?

A: I wanted to remain part time at UCSD because I enjoy research and I had commitments to students. I was the principal investigator on $3 million to $4 million in grants at the time and had received $10 million total since 2001. UCSD's policies regarding intellectual property are interpreted very broadly and contain language often called the umbilical clause. If you are a faculty member, even if you do research on your own time and in separate facilities, UCSD will claim ownership. The result is to force out people who develop ideas. I was required by the investors to take a full leave or there would have been no company.

(Neil's note: I think it is sad that Bartlett had to leave UCSD. The whole issue of how a university tries to monetize intellectual property will be a topic for another column.)

 

Q: What's different between an academic setting and a startup?

A: At UCSD, the primary objective was to generate research papers that had the highest impact on our field. A complex research paper doesn't generate revenue. Pitching to venture capitalists and actually selling software were new skills for our team. We needed to blend our science skills with the business skills of Seth Neiman and our CEO, Ken Denman.

 

Q: What advice would you give to other academics who want to commercialize their technology?

A: Team up with the right partners with deep experience in the business world unless you want to spend a lot of time writing SBIR proposals.

 

We love Bartlett's story. Clearly she is a determined individual passionate about her work, and she understands the importance of knowing what you don't know. We hope that universities and research institutes will revisit and revise their intellectual property policies so that inventive scientists like Bartlett can maintain a foothold in both academia and business.

Neil Senturia and Barbara Bry, serial entrepreneurs who invest in early-stage technology companies, take turns in writing this weekly column about entrepreneurship in San Diego. Please email ideas to Barbara at bbry@blackbirdv.com

© Copyright 2014 The San Diego Union-Tribune, LLC. An MLIM LLC Company. All rights reserved.

 

 

(09/17/2013) INC Co-Director, Gert Cauwenberghs, selected by NSF to take part in multi-institutional, $10 million research project

Bioengineers Researching Smart Cameras and Sensors that Mimic, Exceed Human Capability

Sep 17th, 2013

By Catherine Hockmuth

See actual article here...

 

University of California, San Diego bioengineering professor Gert Cauwenberghs has been selected by the National Science Foundation to take part in a five-year, multi-institutional, $10 million research project to develop a computer vision system that will approach or exceed the capabilities and efficiencies of human vision. The Visual Cortex on Silicon project, funded through NSF's Expeditions in Computing program, aims to create computers that not only record images but also understand visual content and situational context in the way humans do, at up to a thousand times the efficiency of current technologies, according to an NSF announcement.

Smart machine vision systems that understand and interact with their environments could have a profound impact on society, including aids for visually impaired persons, driver assistance capabilities for reducing automotive accidents, and augmented reality systems for enhanced shopping, travel, and safety.

For their part in the effort, Cauwenberghs, a professor in the Department of Bioengineering at the UC San Diego Jacobs School of Engineering, and his team are developing computer chips that emulate how the brain processes visual information. "The brain is the gold standard for computing," said Cauwenberghs, adding that computers work completely differently from the brain, acting as passive processors of information and solving problems using sequential logic. The human brain, by comparison, processes information by sorting through complex input from the world and extracting knowledge without direction.

While several computer vision systems today can each successfully perform one or a few human tasks, such as detecting human faces in point-and-shoot cameras, they are still limited in their ability to perform a wide range of visual tasks, to operate in complex, cluttered environments, and to provide reasoning for their decisions. In contrast, the visual cortex in mammals excels in a broad variety of goal-oriented cognitive tasks, and is at least three orders of magnitude more energy efficient than customized state-of-the-art machine vision systems.


Cauwenberghs said the Visual Cortex on Silicon project offers a unique collaborative opportunity with experts across the globe in neuroscience, computer science, nanoengineering and physics.

The project has other far-reaching implications for neuroscience research. By developing chips that can function more like the human brain, Cauwenberghs believes researchers can achieve a number of significant breakthroughs in our understanding of brain function from the work of single neurons all the way up to a more holistic view of the brain as a system. For example, building chips that model different aspects of brain function, such as how the brain processes visual information, gives researchers a more robust tool to understand where problems arise that contribute to disease or neurological disorders.

The Expeditions in Computing program, which started in 2008, represents NSF's largest single investment in computer science research. As of today, 16 awards have been made through this program, addressing subjects ranging from foundational research in computing hardware, software and verification to research in sustainable energy, health information technology, robotics, mobile computing, and Big Data.

(04/24/2013) Salk scientist Terrence Sejnowski elected to American Academy of Arts and Sciences

See original source here: http://www.salk.edu/news/pressrelease_details.php?press_id=611

 

LA JOLLA, CA—Salk researcher Terrence J. Sejnowski, professor and head of the Computational Neurobiology Laboratory, has been elected a Fellow of the American Academy of Arts and Sciences, a distinction awarded annually to global leaders in business, government, public affairs, the arts and popular culture as well as biomedical research.

Sejnowski is world renowned as a pioneer in the field of computational neuroscience, and his work on neural networks helped spark the neural networks revolution in computing in the 1980s. His research has made important contributions to artificial and real neural network algorithms and to the application of signal processing models to neuroscience.

One of the key architects of the White House's new BRAIN Initiative, Brain Research through Advancing Innovative Neurotechnologies, Sejnowski recently attended President Obama's announcement of the bold new initiative. A 10-year research effort that will enlist the country's top neuroscientists to map activity in the human brain, the initiative aims to invent and refine new technologies to understand the human brain in an effort to find better ways to treat such conditions as Alzheimer's, autism, stroke and traumatic brain injuries.

"Terry is a remarkable scientist whose groundbreaking work has bridged computer science and neuroscience," says Salk President William R. Brody. "Not only has his research initiated significant advances in neuroscience, it has inspired the research of generations of scientists. We congratulate Terry and commend the American Academy for honoring him with this award."

Sejnowski is the 12th scientist from Salk to be inducted into the Academy and will share the honor with 198 new members of the 2013 class, which includes Nobel Prize winner Bruce A. Beutler, philanthropist David M. Rubenstein, astronaut John Glenn, actor Robert De Niro and singer-songwriter Bruce Springsteen.

The Academy selected Sejnowski and the other new Fellows as a result of their preeminent contributions to their disciplines and society at large. The honorees will be formally inducted into the Academy on October 12, 2013 at its headquarters in Cambridge, Massachusetts.

"Election to the Academy honors individual accomplishment and calls upon members to serve the public good," said Academy President Leslie C. Berlowitz. "We look forward to drawing on the knowledge and expertise of these distinguished men and women to advance solutions to the pressing policy challenges of the day."

One of the nation's most prestigious honorary societies, the Academy is also a leading center for independent policy research. Members contribute to Academy publications and studies of science and technology policy, energy and global security, social policy and American institutions, and the humanities, arts, and education.

Since its founding in 1780, the Academy has elected leading "thinkers and doers" from each generation, including George Washington and Benjamin Franklin in the eighteenth century, Daniel Webster and Ralph Waldo Emerson in the nineteenth, and Albert Einstein and Winston Churchill in the twentieth. The current membership includes more than 250 Nobel laureates and more than 60 Pulitzer Prize winners.

 

About the Salk Institute for Biological Studies:
The Salk Institute for Biological Studies is one of the world's preeminent basic research institutions, where internationally renowned faculty probe fundamental life science questions in a unique, collaborative, and creative environment. Focused both on discovery and on mentoring future generations of researchers, Salk scientists make groundbreaking contributions to our understanding of cancer, aging, Alzheimer's, diabetes and infectious diseases by studying neuroscience, genetics, cell and plant biology, and related disciplines.

Faculty achievements have been recognized with numerous honors, including Nobel Prizes and memberships in the National Academy of Sciences. Founded in 1960 by polio vaccine pioneer Jonas Salk, M.D., the Institute is an independent nonprofit organization and architectural landmark.

 

(03/10/2013) Howard Poizner, Untangling Parkinson's disease through virtual reality

 

Newsletter editor Tomoki Tsuchida sat down with Dr. Howard Poizner, Professor Emeritus of Rutgers University and director of the Poizner laboratory at UCSD. We had a chance to talk about his virtual reality laboratory and his diverse research interests across many disciplines.

 

Can you tell us a bit about the path that brought you here to UCSD?

 

There was a lot of interest in PD (Parkinson's disease) within the Center. There was research going on at the molecular level, the systems level and the behavioral level, and thus the Center provided an excellent multidisciplinary environment to study PD. I had been studying human motor disorders caused by stroke or PD. Over time, my interest in PD gradually deepened, and it became a very strong focus in my laboratory.

After fifteen years at Rutgers, I decided that I wanted to return to San Diego. I had been at the Salk Institute for 12 years before moving to Rutgers, and still felt San Diego to be home. So I decided to take early retirement at age 55 (I'm now Professor Emeritus) and to move back to San Diego. I called Terry Sejnowski, whom I knew from my days at the Salk Institute, and asked if either Salk or UCSD would have interest in a motor neuroscience lab. I had an NIH grant on PD and would continue that research. He said he directed an Institute at UCSD, INC, and told me to come on over.

 

You direct a laboratory with one of the most sophisticated virtual reality environments in the world. Can you describe the facility and how that relates to your main research goals?

The role of that facility is two-fold: it is both my lab and a core facility that I direct for TDLC (NSF Temporal Dynamics of Learning Center, Gary Cottrell, PI). We have what is, as far as I know, a unique facility in the world, capable of simultaneously recording full-body motion and EEG while subjects freely move about in large-scale immersive virtual environments. The environments are highly immersive and allow us to address questions of how the brain acts when people actually move, as opposed to when someone is stationary with the head fixed in place, as is typically done.

Virtual environments are crucial to our work because they provide powerful experimental control. The timing of events and the feedback given to the subject are completely controlled; the repeatability is exact; the measurements are very precise; and all of the data streams are synchronized through custom scripts that we've written. So we can record people's brain activity concurrently with their head, body, and limb motions as they move through locations, grasp virtual objects that have different weights and textures, learn to adapt to perturbations in the environment, make decisions, and so forth. Thus, we can simultaneously study such things as the neural mapping of space in humans, learning and memory, and the cortical dynamics underlying motor control. I feel that these technological developments open up entirely new possibilities for investigating the cortical substrates of cognition and motor control. We've recently published a detailed description of our system that goes through the various system components, their spatial and temporal precision, and how all of the devices are integrated, and gives some sample applications (Snider, J. et al., in press).
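
Dr. Poizner mentions that all of the data streams (EEG, motion capture, and so on) are synchronized through custom scripts. As a minimal sketch of that general idea, and not the lab's actual software, the Python below resamples independently clocked streams onto a shared reference clock by linear interpolation. All names, sampling rates, and signals here are invented for illustration.

```python
import numpy as np

def align_streams(t_ref, streams):
    """Resample each (timestamps, values) stream onto a shared
    reference clock t_ref by linear interpolation."""
    return {name: np.interp(t_ref, t, v) for name, (t, v) in streams.items()}

# Hypothetical example: EEG at 512 Hz and motion capture at 120 Hz,
# both resampled onto a common 100 Hz reference clock.
t_ref = np.arange(0.0, 1.0, 0.01)          # 100 Hz reference clock
eeg_t = np.arange(0.0, 1.0, 1 / 512)
mocap_t = np.arange(0.0, 1.0, 1 / 120)
aligned = align_streams(t_ref, {
    "eeg": (eeg_t, np.sin(2 * np.pi * 5 * eeg_t)),  # fake 5 Hz signal
    "mocap": (mocap_t, mocap_t ** 2),               # fake trajectory
})
```

In a real system the reference clock would come from hardware triggers or a shared timestamp source rather than from an idealized `arange`, but the resampling step is conceptually the same.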

 

What are some of the projects you're working on currently?

 


 

In one set of projects, we want to understand how the brain acts in the high-dimensional world, that is, how it governs our actions in the environments the brain actually evolved to act in. This issue has been somewhat neglected in neuroscience, yet it is critical to understanding how humans deal with complex, novel problems. We're fortunate to have an Office of Naval Research MURI (Multidisciplinary University Research Initiative) center grant to study this issue.

The goal of the grant is to better understand the brain bases of a type of learning known as unsupervised learning. Unsupervised learning hasn't been studied nearly as intensively as reinforcement learning or supervised learning, such as classroom-type learning. In unsupervised learning, you learn as you go about interacting with the world, without being explicitly taught or reinforced. It's commonplace in complex, novel environments, and it allows one to generalize and act flexibly in novel situations.

We have a vertical platform of studies underway in the MURI grant. At the neurobiological level, Gary Lynch at UC Irvine is conducting cellular studies in rats. He has rats explore a new environment with objects located in various locations in the space. He then brings the rats back the next day and sees how the animals re-explore the environment after he has switched around the locations of some of the objects from the previous day. Rats spend more time exploring what has changed in the environment, showing that they had remembered the environment from the previous day. Gary then examines their brains and can map the synapses that have changed in the hippocampus from that one unsupervised learning experience. In essence, he is providing a picture of a memory engram. Very remarkable work.

In my lab, we conduct experiments parallel to Gary's, but in humans, using our virtual reality-brain recording system. We have subjects freely explore a virtual room that has a variety of objects scattered throughout the space. The virtual room is the same size as the lab they are in, so subjects don't run out of physical space. We don't instruct the subjects to learn or remember anything, but just have them explore the space. And, just like the rats, we bring them back the next day, but unbeknownst to them, we have altered the locations of a subset of those objects from what was seen the day before. Thus, for the first time, we are able to look at what happens to brain dynamics when subjects are freely exploring and learning a spatial environment in an unsupervised fashion. We found that there is a relationship between the theta rhythm (neural oscillations at about 3 to 7 hertz) recorded over midline posterior parietal cortices and the locations in space that the subject walked through, providing a neural map of space. Moreover, the degree of structure in these maps produced when subjects explored the space on the first day predicted their memory performance when they were brought back into the environment on the second day. We are very excited about these findings, as they are the first report of memory-related neural maps of space in humans during active spatial exploration. Joe Snider, a project scientist in my lab, is the first author of a paper currently under review on these findings. Eric Halgren from the Department of Neurosciences is an important collaborator on the project as well.
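
The theta rhythm discussed here is an oscillation at roughly 3 to 7 Hz. As a simplified, hypothetical illustration of isolating that band from a single channel (real EEG pipelines use proper band-pass filters and artifact rejection; the signal below is synthetic), one can zero out all other frequency components in the Fourier domain:

```python
import numpy as np

def theta_band(eeg, fs, lo=3.0, hi=7.0):
    """Isolate the 3-7 Hz theta band of one channel by zeroing
    all other frequency components in the Fourier domain."""
    spec = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(eeg))

# Synthetic check: a 5 Hz sine (inside theta) plus a 20 Hz sine
# (outside it); only the 5 Hz component should survive.
fs = 256
t = np.arange(0, 4, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 20 * t)
theta = theta_band(signal, fs)
```

The hard FFT cutoff is only for clarity; a smoother filter would avoid ringing on real, non-periodic data.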

 

At another level in the platform of studies of the MURI is a project directed by Tom Liu, the director of UCSD's fMRI center. Among other things, Tom's group has been working on understanding resting brain state activity. When you're at rest, your brain is not silent, but there are lots of brain networks that are active.

 

There are indications that the nature of the activations in these resting state networks can predict certain kinds of learning and memory performance, although unsupervised learning and memory have not been studied.

To address this issue, we brought back the subjects that had participated in our spatial exploration and learning experiment to undergo resting state fMRI brain scans in Dr. Liu's lab. In resting state studies, subjects are just quietly resting in the scanner with their eyes open. We wanted to see whether individual characteristics in brain activity at rest predisposed individuals to have differing memory performance in the unsupervised learning and memory experiment I just described. This still is an ongoing study, but we are finding a strong relation between patterns of activation in individuals and their memory performance that was measured many months earlier in the spatial learning experiment. Activations in dorsal striatal areas turned out to be particularly predictive of the memory performance.

Interestingly, striatal areas are known to be very important in motor learning and have recently been shown by Art Kramer at the University of Illinois to predict performance in certain video games. Tom's group is now looking at connectivity measures in the same dataset, that is, how strongly two or more regions are functionally connected. How dorsal striatal regions are functionally connected to other brain regions in the resting state may give us additional clues as to which brain areas may be mediating the individual differences in memory performance.

At still another level, Ralph Greenspan at the Kavli Institute for Brain and Mind is doing genetic studies in flies to get at basic neurobiological mechanisms of attention and brain oscillations. In fruit flies, one can perform rapid genetic manipulations that allow you to tease apart candidate genes that could be important for learning and attention. Once he has identified these, the genes can then serve as a springboard for looking at mammalian species, even humans. Those are some of the layers within the MURI center grant.

We also have an NIH grant on Parkinson's disease. We're still very much engaged in that endeavor, and the virtual reality-brain recording system is becoming key to our being able to do the type of experiments that will allow us to really understand how the disease acts. One major question involves the roles of the circuits between the basal ganglia and cortex that we know are dysfunctional in PD. We are investigating their roles in motor control and learning, and how different therapeutic modalities alter the functioning of these circuits. We are not only interested in learning about PD and its therapies, but also in understanding the neural control of movement. PD and its therapies provide a naturally occurring window into these affected circuits, and using standard PD therapies, we can reversibly alter the functioning of these circuits to probe the system.

 

 

 

We focus primarily on reaching and grasping motions — naturalistic behaviors that we all do. With virtual reality, we can have you reach and grasp for virtual objects and provide visual feedback at very specific points in time; we can alter that feedback from what you normally would get; we can perturb objects that you're trying to grasp at particular points during your reach for the object. We also use our haptic robots to give people a sense of actually feeling the virtual object. So, for example, if you are grasping a virtual wooden block, you would actually feel the block.

 

One sensory modality that we don't usually think about a lot is proprioception. That is, if we close our eyes, we still know where our hands and arms are in space and how they are moving. Sensors in the joints and muscles provide the relevant input to the brain. Proprioception is critical to motor control and seems to be impaired in PD patients. So, one thing we have been studying is the nature of proprioception in PD patients, and how therapies such as deep brain stimulation of the subthalamic nucleus within the basal ganglia alter proprioception. We can reversibly turn the stimulator on and off to alter the functioning of basal ganglia-cortical circuits and see how that affects a particular function, such as proprioception or reaching and grasping. We can do the same thing with dopamine medications.

Summarizing a variety of experiments, we're finding that, in certain situations, PD patients do show pronounced proprioceptive deficits. We're in the process of completing the data analysis, but it seems that deep brain stimulation does not provide a major reversal of these deficits, although it may in fact reduce the variability or uncertainty within that sensory modality.

With respect to reaching and grasping, we've hypothesized that PD patients show at least two different aspects of movement deficits. One relates to what I'll call intensive aspects of movement — peak speed or peak amplitude. Another is a more coordinative aspect of movement, including joint coordination. We've further hypothesized that dopamine replacement does not act unidimensionally across these deficits. It is quite good at improving the intensive or scaling aspects of movement, but is not very good at reversing coordinative deficits. In one experiment, we had PD and control subjects reach for, grasp, and lift a virtual object that could be positioned in different orientations with respect to simulated gravity. The objects also had different weights. We found that dopamine replacement therapy significantly increased the speed with which PD patients reached to grasp these objects, but it did not increase their ability to coordinate the hand and the arm as required to lift the object along its oriented axis. When lifting against gravity and having to maintain the object's orientation, PD patients on or off dopamine therapy were very much impaired, even though their speeds during the reach improved with therapy.

In a subsequent experiment, we are looking at how PD patients on and off dopamine medications respond to a visual perturbation of an object during their reach. For example, subjects may be reaching to grasp a virtual rectangular block oriented lengthwise, when it suddenly rotates 90 degrees part way through the reach. How do patients adapt to this alteration? Is the adaptation of their movements smooth, or do patients have to start over and reprogram the movement entirely? And how does providing vision affect their ability to adapt? In this experiment, we've also recorded EEG concurrently with hand, arm and eye movements. In collaboration with Claudia Lainscsek and Terry Sejnowski at Salk, and Manuel Hernandez and others in my lab, we're using new signal processing techniques developed by Claudia and Terry to analyze the nonlinear dynamics inherent in the EEG time series. We're excited about these new methods, as they promise a fresh approach to understanding alterations in brain function during complex behavioral tasks in PD.

So the virtual reality environment can really make it easy to tease apart different aspects of Parkinson's disease.

Yes, it really allows for flexible experimental control. In one experiment, we've recorded eye movements, reaching movements, and EEG while healthy individuals reach to or look at spatially directed targets. Since the temporal dynamics of EEG have not been systematically investigated with respect to movement, we have measured movements that are directed to different spatial locations, either by the eye or the hand or by both effectors moving together. Steve Hillyard in the Department of Neurosciences has been working with us closely on this project, and with Markus Plank, a postdoc from the lab, we've examined various spatiotemporal EEG characteristics during motor planning of the eye or the hand to a spatially directed target.

We've uncovered some fascinating motor-attention related components of the event-related potentials (ERPs) during motor planning. For example, the amplitude of an attention-related potential recorded over right parietal cortices was strongly modulated during the planning interval according to which upcoming movement was being planned. Moreover, the amplitude of this component, and indeed of even earlier visually-related components, significantly predicted the accuracy of the movement that was about to happen in the near future. What may be even more interesting, however, is that we're uncovering what seems to be novel motor planning related ERP components. One would never have seen these components using traditional EEG paradigms in which limb and eye movements were avoided.

Working closely with Todd Coleman in the Department of Bioengineering and Cheolsoo Park, a postdoc who worked jointly in our labs (see the Fall 2011 issue of Incubator), we've been attempting to classify the EEG during the motor planning interval of this experiment, to see how well spectral features of the EEG could predict which of the various movements was being planned. Cheolsoo brought to bear a relatively newly developed signal processing technique that allowed him to decompose the EEG into a set of independent frequency modes. Importantly, this technique is data-driven; that is, the different frequency modes uncovered are intrinsic to the data, rather than being the predefined frequency bands that we most commonly use. Cheolsoo found that one of these frequency modes, which mostly overlapped with the traditional gamma band, was able to significantly classify the EEG during the planning interval in terms of which of the various movements would be forthcoming. He also found that this technique produced higher classification rates than the standard signal processing methods. So, these analyses both inform us about the information content in the EEG relevant to planning spatially directed hand and eye movements, and demonstrate the feasibility of a new mode of signal processing for such analyses.
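
The pipeline described here is "extract spectral features from the planning-interval EEG, then classify which movement is coming." The actual method used data-driven frequency modes; the sketch below is a much-simplified stand-in that uses a fixed gamma-range band and a nearest-centroid classifier on entirely synthetic trials. Every name, band edge, and signal is hypothetical and only illustrates the feature-then-classify idea, not the published technique.

```python
import numpy as np

def band_power(trial, fs, band):
    """Mean spectral power of one EEG trial within a frequency band."""
    spec = np.abs(np.fft.rfft(trial)) ** 2
    freqs = np.fft.rfftfreq(len(trial), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].mean()

def nearest_centroid(train_X, train_y, test_X):
    """Label each test feature vector with the class of the closest centroid."""
    classes = sorted(set(train_y))
    cents = np.array([train_X[np.array(train_y) == c].mean(axis=0)
                      for c in classes])
    dists = np.linalg.norm(test_X[:, None, :] - cents[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]

# Fabricated trials: "hand" plans carry stronger 40 Hz (gamma-range)
# power than "eye" plans.
fs = 256
t = np.arange(0, 1, 1 / fs)
def make_trial(gamma_amp, phase):
    return (gamma_amp * np.sin(2 * np.pi * 40 * t + phase)
            + np.sin(2 * np.pi * 10 * t))

gamma = (35.0, 45.0)
train_X = np.array([[band_power(make_trial(a, p), fs, gamma)]
                    for a, p in [(2.0, 0.0), (2.0, 0.5), (0.5, 0.0), (0.5, 0.5)]])
train_y = ["hand", "hand", "eye", "eye"]
test_X = np.array([[band_power(make_trial(2.0, 1.0), fs, gamma)],
                   [band_power(make_trial(0.5, 1.0), fs, gamma)]])
predicted = nearest_centroid(train_X, train_y, test_X)
```

Replacing the fixed band with modes learned from the data (and the centroid rule with a stronger classifier) is what separates this toy from the real analysis.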

We have a number of other studies underway, but this gives you a sense of our research directions.

It is amazing how broad a research effort you're leading, with collaboration from both within and outside of the INC.

The collaboration is very important. The colleagues here at UCSD are tremendous, and the depth and breadth of disciplines represented are large and essential to our research.

 

 

(07/05/2012) Terry Sejnowski named recipient of the 2013 IEEE Frank Rosenblatt Award.

 

(11/11/2011) Todd P. Coleman, Associate Professor, Bioengineering, UCSD

"Todd P. Coleman is an Associate Professor in the Department of Bioengineering with affiliations in the Information Theory & Applications Center, the Institute of Engineering in Medicine, and the Institute for Neural Computation at UCSD. He directs the Neural Interaction Laboratory at UCSD where his group conducts research on flexible "tattoo electronics" for neurological monitoring, quantitative approaches to understand interacting neural signals within brains, and team decision theory approaches to design brain-computer interfaces. His research is highly interdisciplinary, at the intersection of neurophysiology, bio-electronics, and applied probability."

 

(02/10/2011) Terrence Sejnowski, elected member of the National Academy of Engineering.

La Jolla, CA - INC Co-Director and Salk Institute professor Terrence J. Sejnowski, Ph.D., has been elected to the National Academy of Engineering. This places him in a remarkably elite group of only ten living scientists to have been elected to the National Academy of Sciences, the Institute of Medicine, and the National Academy of Engineering. UCSD and INC congratulate Dr. Sejnowski on this prestigious appointment and exceptional achievement.

 

(04/27/2010) Terrence Sejnowski, elected member of the National Academy of Sciences.

La Jolla, CA - Salk Institute professor Terrence J. Sejnowski, Ph.D., whose work on neural networks helped spark the neural networks revolution in computing in the 1980s, has been elected a member of the National Academy of Sciences. The Academy made the announcement today during its 147th annual meeting in Washington, DC. Election to the Academy recognizes distinguished and continuing achievements in original research, and is considered one of the highest honors accorded a U.S. scientist.

 

See full article...