(07/11/16) Students learn physics through musical sound (CBS News)
Drs. Minces and Khalil create Listening to Waves, a program to learn science by making and analyzing musical instruments.
The study, published in the journal eLife, also sheds light on how the brain can manage massive amounts of data with very little energy and avoid conceptual traps that can stymie machine-learning algorithms, said Bartol, the study's first author. Noted brain researcher Terry Sejnowski was senior author. The study can be found at j.mp/synapti.
Apple's purchase of Emotient fuels artificial intelligence boom in Silicon Valley
The arms race in Silicon Valley is on for artificial intelligence.
Facebook is working on a virtual personal assistant that can read visitors' faces and decide whether to let them into your home. Google is investing in the technology to power self-driving cars, identify people on its photo service and build a better messaging app.
Now Apple is adding to its artificial intelligence arsenal. The iPhone maker purchased Emotient, a San Diego maker of facial expression recognition software that detects emotions, with applications for advertisers, retailers, doctors and many other professions.
(Drs. Scott Makeig and Tzyy-Ping Jung (SCCN) are co-authors of the study, as is TDLC's Tim Mullen and INC Co-Director Gert Cauwenberghs.)
Bioengineers and cognitive scientists have developed the first portable, 64-channel wearable brain activity monitoring system that's comparable to state-of-the-art equipment found in research laboratories.
The system is a better fit for real-world applications because it is equipped with dry EEG sensors that are easier to apply than wet sensors, while still providing high-density brain activity data. The system comprises a 64-channel dry-electrode wearable EEG headset and a sophisticated software suite for data interpretation and analysis. It has a wide range of applications, from research, to neuro-feedback, to clinical diagnostics.
The researchers' goal is to get EEG out of the laboratory setting, where it is currently confined by wet EEG methods. In the future, scientists envision a world where neuroimaging systems work with mobile sensors and smart phones to track brain states throughout the day and augment the brain's capabilities.
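The system's actual software suite is not described in detail here, but one standard step any high-density EEG pipeline needs is bandpass filtering of the raw multichannel signal. A minimal sketch, assuming a 64-channel recording at a hypothetical 512 Hz sampling rate (the function name, rates and cutoffs are illustrative, not the system's API):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(data, fs, low=1.0, high=40.0, order=4):
    """Bandpass-filter each EEG channel.

    data: (n_channels, n_samples) array of raw voltages
    fs:   sampling rate in Hz
    """
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, data, axis=1)  # zero-phase filtering

# Synthetic stand-in for a 64-channel recording at 512 Hz
fs = 512
t = np.arange(fs * 2) / fs                 # 2 seconds of samples
signal = np.sin(2 * np.pi * 10 * t)        # 10 Hz alpha-band component
noise = 0.5 * np.sin(2 * np.pi * 60 * t)   # 60 Hz line-noise component
raw = np.tile(signal + noise, (64, 1))

clean = bandpass_eeg(raw, fs)
print(clean.shape)  # (64, 1024)
```

After filtering, the 60 Hz line noise is strongly attenuated while the alpha-band signal passes through, which is the kind of conditioning that must happen before any downstream interpretation or neuro-feedback.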
An iPhone that can 'feel' your pain? Apple's latest acquisition could make it happen.
The company has reportedly purchased AI startup Emotient.
Apple has reportedly acquired artificial-intelligence startup Emotient, giving it access to technology that could one day imbue its devices with the ability to "read" people's emotions through their facial expressions.
Emotient's emotion-recognition technology derives from the Machine Perception Lab at the University of California at San Diego and has focused primarily on helping advertisers understand viewer reactions to their ads.
Marian Bartlett and Javier Movellan speak about commercializing facial expression recognition technology fostered by (TDLC / INC) at UC San Diego. Credit: UCSD Technology Transfer Office.
Scientists at the Salk Institute are gaining new understanding of how the human brain truly works. Researchers now say cells called astrocytes play a major role in certain types of memories. These cells make up about half of your brain, and it was previously thought they only acted as a support system for neurons. Now, scientists say these cells are responsible for helping you recognize people, places and things from the past.
Professor Terrence Sejnowski of the Salk Institute talked live on KUSI News Thursday night about what this discovery means for future treatments of neurological disorders. More ...
When something captures your interest, like this article, unique electrical rhythms called gamma oscillations sweep through your brain.
These gamma oscillations reflect a symphony of cells—both excitatory and inhibitory—playing together in an orchestrated way. Though their role has been debated, gamma waves have been associated with higher-level brain function, and disturbances in the patterns have been tied to schizophrenia, Alzheimer's disease, autism, epilepsy and other disorders.
Now, new research from the Salk Institute shows that little-known supportive cells in the brain called astrocytes may in fact be major players that control these waves. More ...
Researchers at the University of California, San Diego, are working on a breakthrough that could change how doctors treat patients and their pain. Many doctors' offices have started displaying charts with faces showing various levels of pain, but what if a person is faking it? More ...
Scientists at the Salk Institute in La Jolla have created a new model of memory that explains how neurons retain select memories a few hours after an event. This new framework provides a more complete picture of how memory works, which can inform research into disorders like Parkinson's, Alzheimer's, post-traumatic stress and learning disabilities.
The work is detailed in the latest issue of the scholarly journal Neuron. “Previous models of memory were based on fast activity patterns,” said Terrence Sejnowski, holder of Salk’s Francis Crick Chair and a Howard Hughes Medical Institute Investigator. “Our new model of memory makes it possible to integrate experiences over hours rather than moments.”
The start-up Emotient is a prime example of how industry, academia, and venture capital can combine to create a groundbreaking business.
The basic technology arose in UC San Diego's Machine Perception Laboratory, led by Javier R. Movellan, a Research Scientist in the Institute of Neural Computation. Movellan and his colleague Marian Bartlett pioneered the automation of facial coding using computer vision and machine learning.
Supported by entrepreneur Ken Denman and investor Seth Neiman of Crosspoint Venture Partners, the Emotient team led by Movellan has created the Emotient API, a sophisticated facial-recognition technology with applications in the health care, retail, and entertainment industries.
In retail, the Emotient API technology allows store owners to assess customer service and quickly enhance their customers' experience. In health care, the technology provides an opportunity for physicians to better engage patients through online video calls, and may help in diagnosing depression and other mental disorders. In video games, the Emotient API allows awareness of gamers' emotional and physical responses, so the content and pace of games can be changed to generate unique and personalized enhancements.
"Seth Neiman pointed me to the UC San Diego team," said Denman, now Emotient's chief executive officer. "He told me they were the most published, the most experienced, the most enthusiastic researchers doing this work."
Movellan credits the university for the smooth and efficient start-up process.
"There was a genuine will to help Emotient innovate," he said. "UC San Diego is amazing at working on interdisciplinary science. They were really innovative in helping computer scientists, psychologists, and entrepreneurs collaborate."
Denman also credits the university's Technology Transfer Office (TTO) and its associate director William Decker.
"I worked with William Decker, and I was very impressed with his knowledge, skill, and ability to get things done and to keep his commitments and be reasonable in his negotiations overall," Denman said. "I was very pleasantly surprised."
Emotient now joins the long list of start-ups – currently more than 180 – that the TTO has helped to establish, Decker said.
"Ken Denman is an innovator, and a pleasure to work with. We hope he considers other technologies now under way at UC San Diego."
Paul K. Mueller, 858-534-8564, email@example.com
Karen Cheng, 858-822-3276, firstname.lastname@example.org
Summary: The best part about the retail sector is that it combines four fun areas: business, technology, human behavior, and psychology. Here's a tour of what may be coming to a store near you.
Shopping is going to look a lot more analytical in the near future, with a good bit of video added to track emotions and engagement.
Welcome to the future of retail, which is quickly moving beyond somewhat silly questions about whether tablets will run on Android, iOS, or Windows, and becoming much more focused on actual applications and sales.
The best part about the retail sector is that it combines four fun areas: business, technology, human behavior, and psychology. Here's a brief tour of technologies that range in maturity from those implemented today to ones that will take a while to be adopted.
Many of the aforementioned technologies were pulled together by Intel, which has a large retail technology unit as well as an Internet of Things division. The two areas are increasingly merging. Intel realized long ago that analytics and the Internet of Things are going to drive a lot of server and processor sales.
Emotion tracking meets retail
At Intel's booth at the National Retail Federation annual retail powwow in New York City, one of the more popular demonstrations revolved around Emotient, a startup based in San Diego. In a nutshell, Emotient captures your facial expressions, gauges your emotions, and turns that data into actionable items for a retailer.
For instance, if a consumer walks up to a display and looks frustrated, retailers will know that something needs to be tweaked. Joy would prod the retailer to stock more of that product. Stores could also dispatch personnel to shoppers depending on their emotional response to an item.
Marian Bartlett, founder and lead scientist at Emotient, said her company's analytics software — based on artificial intelligence and pattern recognition — is being used for research as well as product testing, say, an emotional response to a fragrance. Emotient launched its products in June and has pilots in fast food, automotive, and health care. In a shopping context, Emotient "aims to measure emotion and tell the store managers that someone is confused in aisle 12", said Bartlett.
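Emotient's actual models are proprietary, but the pipeline described — score emotions per video frame, then turn scores into an actionable alert for store managers — can be sketched in a few lines. Here the per-frame emotion scores are stubbed with plain dictionaries rather than a real face classifier, and all names and thresholds are invented for illustration:

```python
# Toy sketch of an emotion-analytics alerting step, with classifier
# output stubbed as {"aisle": ..., "scores": {emotion: confidence}}.

FRUSTRATION_THRESHOLD = 0.6  # illustrative cutoff, not Emotient's

def flag_confused_shoppers(frames):
    """Yield an alert for each frame whose dominant emotion is
    frustration above the threshold."""
    for frame in frames:
        emotion, score = max(frame["scores"].items(), key=lambda kv: kv[1])
        if emotion == "frustration" and score >= FRUSTRATION_THRESHOLD:
            yield f"Shopper appears confused in aisle {frame['aisle']}"

feed = [
    {"aisle": 12, "scores": {"joy": 0.10, "frustration": 0.80}},
    {"aisle": 3,  "scores": {"joy": 0.70, "frustration": 0.05}},
]
alerts = list(flag_confused_shoppers(feed))
print(alerts)  # ['Shopper appears confused in aisle 12']
```

The hard part in practice is the classifier producing those scores; the retail "actionable item" layer on top of it is comparatively simple, which is why the data capture, not the alerting, is where the privacy questions concentrate.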
My take: Emotient could have a killer app for retail, but may freak people out. The data is anonymous, but facial recognition has a bit of a Minority Report feel to it. Personally, I doubt millennials will care. Others may fret about privacy; the key for retailers and consumer goods companies will be to be transparent and dangle carrots so shoppers won't sweat revealing how they really feel about something.
Intel partner MemoMi also demonstrated a memory mirror that allows you to try on multiple outfits, compare them, and then share with peers. The mirror is controlled by hand gestures. The biggest difference with the MemoMi mirror is that it actually seemed like it would fit into a normal shopping flow. Similar displays at previous NRF shows were a bit clunky.
My take: The magic mirror approach could work and is getting closer to being rolled out at retailers. Any retailer wanting to boost engagement via multiple channels would be interested. Magic mirrors are getting there.
Kinect in retail
Microsoft displays often revolve around how Kinect could be used in retail. The software giant also highlighted a large retail Surface tabletop display that integrated personalization and the shopping experience.
My take: Microsoft has a good chance of turning Kinect into a business tech staple. The biggest challenge is that these large touch displays and virtual mirrors haven't gone mainstream. Fortunately for Microsoft, most retailers are Windows shops, so it'll have plenty of opportunities to broaden its reach.
Upscale vending machines that provide an experience
Signifi, a privately held company from Canada, was outlining "automated retail spot shops". As far as technology goes, Signifi is more of an integrator. CEO Shamira Jaffer said the company's spot shops sit "in the middle of the website and store". Jaffer's aim is to combine big screens, social media, interactive features, and ambiance to create an experience that boosts sales and serves as a marketing vehicle, too. "These don't feel like vending machines," said Jaffer. "We work with the retailer to design them."
Call it automated retailing.
Indeed, Signifi has been talking to Rolex about the possibilities for spot shops. Rolex? Sure, the company wants to brand and wants more outlets with less overhead. Signifi will provide hardware, software, and support. The retailer handles inventory and stocking the machines. Another perk: The spot shops can take returns, too. "We provide a retail store in a box," she said.
My take: These spot shops could proliferate in the US, a country that so far only has three. There are more spot shops in Europe. BMW is one brand testing the concept, and Jaffer said that locations for spot shops go beyond airports and transportation hubs. Hospitals, which have a bevy of folks waiting around, and universities are also good locations for high-end brands and spot shops.
Video analytics meets store staffing
Scopix, a company demonstrating at the Intel booth, scans a store via video, rates whether a customer is engaged, and then provides real-time data so a retailer can close a sale. The Scopix technology is also used for queue management and predictive analytics to find patterns. One use case would be that the Scopix system could ping a mobile device on a store floor so employees could keep customers happy.
My take: Scopix has interesting technology, but it's unclear how it stands out. Video surveillance and analytics tools were everywhere at the NRF conference.
We'll give you discounts to watch our TV commercial
Actv8.me demonstrated technology that used audio cues to combine mobile applications from TV networks with commercials and direct response. Here's how it would work — and has worked in a few pilots. A TV watcher is sitting on the couch with a tablet and an app from Fox (or CBS, parent of ZDNet). The app and tablet listen to the TV ad and call up products such as an outfit worn by an actress. For watching the ad, consumers would get flipped a coupon, say a 20 percent discount for a store visit and 10 percent deal for an online sale. Intel provides the ad-serving technology.
Actv8 also announced a deal with NCR to bring its personalized proximity platform to kiosks and multiple industries.
My take: Actv8 could find an audience from media giants given that the technology also works for on-demand commercials. Actv8's technology could track engagement with TV ads, maintain rates, and even add revenue shares for sales.
A gaggle of Harry Potter fans descended for several days this summer on the Oregon Convention Center in Portland for the Leaky Con gathering, an annual haunt of a group of predominantly young women who immerse themselves in a fantasy world of magic, spells and images.
The jubilant and occasionally squealing attendees appeared to have no idea that next door a group of real-world wizards was demonstrating technology that only a few years ago might have seemed as magical.
The scientists and engineers at the Computer Vision and Pattern Recognition conference are creating a world in which cars drive themselves, machines recognize people and "understand" their emotions, and humanoid robots travel unattended, performing everything from mundane factory tasks to emergency rescues.
C.V.P.R., as it is known, is an annual gathering of computer vision scientists, students, roboticists, software hackers — and increasingly in recent years, business and entrepreneurial types looking for another great technological leap forward.
The growing power of computer vision is a crucial first step for the next generation of computing, robotic and artificial intelligence systems. Once machines can identify objects and understand their environments, they can be freed to move around in the world. And once robots become mobile they will be increasingly capable of extending the reach of humans or replacing them.
Self-driving cars, factory robots and a new class of farm hands known as ag-robots are already demonstrating what increasingly mobile machines can do. Indeed, the rapid advance of computer vision is just one of a set of artificial intelligence-oriented technologies — others include speech recognition, dexterous manipulation and navigation — that underscore a sea change beyond personal computing and the Internet, the technologies that have defined the last three decades of the computing world.
"During the next decade we're going to see smarts put into everything," said Ed Lazowska, a computer scientist at the University of Washington who is a specialist in Big Data. "Smart homes, smart cars, smart health, smart robots, smart science, smart crowds and smart computer-human interactions."
The enormous amount of data being generated by inexpensive sensors has been a significant factor in altering the center of gravity of the computing world, he said, making it possible to use centralized computers in data centers — referred to as the cloud — to take artificial intelligence technologies like machine-learning and spread computer intelligence far beyond desktop computers.
Apple was the most successful early innovator in popularizing what is today described as ubiquitous computing. The idea, first proposed by Mark Weiser, a computer scientist with Xerox, involves embedding powerful microprocessor chips in everyday objects.
Steve Jobs, during his second tenure at Apple, was quick to understand the implications of the falling cost of computer intelligence. Taking advantage of it, he first created a digital music player, the iPod, and then transformed mobile communication with the iPhone. Now such innovation is rapidly accelerating into all consumer products.
"The most important new computer maker in Silicon Valley isn't a computer maker at all, it's Tesla," the electric car manufacturer, said Paul Saffo, a managing director at Discern Analytics, a research firm based in San Francisco. "The car has become a node in the network and a computer in its own right. It's a primitive robot that wraps around you."
Here are several areas in which next-generation computing systems and more powerful software algorithms could transform the world in the next half-decade.
With increasing frequency, the voice on the other end of the line is a computer.
It has been two years since Watson, the artificial intelligence program created by I.B.M., beat two of the world's best "Jeopardy" players. Watson, which has access to roughly 200 million pages of information, is able to understand natural language queries and answer questions.
The computer maker had initially planned to test the system as an expert adviser to doctors; the idea was that Watson's encyclopedic knowledge of medical conditions could aid a human expert in diagnosing illnesses, as well as contributing computer expertise elsewhere in medicine.
In May, however, I.B.M. went a significant step further by announcing a general-purpose version of its software, the "I.B.M. Watson Engagement Advisor." The idea is to make the company's question-answering system available in a wide range of call center, technical support and telephone sales applications. The company says that as many as 61 percent of all telephone support calls currently fail because human support-center employees are unable to give people correct or complete information.
Watson, I.B.M. says, will be used to help human operators, but the system can also be used in a "self-service" mode, in which customers can interact directly with the program by typing questions in a Web browser or by speaking to a speech recognition program.
That suggests a "Freakonomics" outcome: There is already evidence that call-center operations that were once outsourced to India and the Philippines have come back to the United States, not as jobs, but in the form of software running in data centers.
A race is under way to build robots that can walk, open doors, climb ladders and generally replace humans in hazardous situations.
In December, the Defense Advanced Research Projects Agency, or Darpa, the Pentagon's advanced research arm, will hold the first of two events in a $2 million contest to build a robot that could take the place of rescue workers in hazardous environments, like the site of the damaged Fukushima Daiichi nuclear plant.
Scheduled to be held in Miami, the contest will involve robots that compete at tasks as diverse as driving vehicles, traversing rubble fields, using power tools, throwing switches and closing valves.
In addition to the Darpa robots, a wave of intelligent machines for the workplace is coming from Rethink Robotics, based in Boston, and Universal Robots, based in Copenhagen, which have begun selling lower-cost two-armed robots to act as factory helpers. Neither company's robots have legs, or even wheels, yet. But they are the first commercially available robots that do not require cages, because they are able to watch and even feel their human co-workers, so as not to harm them.
For the home, companies are designing robots that are more sophisticated than today's vacuum-cleaner robots. Hoaloha Robotics, founded by the former Microsoft executive Tandy Trower, recently said it planned to build robots for elder care, an idea that, if successful, might make it possible for more of the aging population to live independently.
Seven entrants in the Darpa contest will be based on the imposing humanoid-shaped Atlas robot manufactured by Boston Dynamics, a research company based in Waltham, Massachusetts. Among the wide range of other entrants are some that look anything but humanoid — with a few that function like "transformers" from the world of cinema. The contest, to be held in the infield of the Homestead-Miami Speedway, may well have the flavor of the bar scene in "Star Wars."
Amnon Shashua, an Israeli computer scientist, has modified his Audi A7 by adding a camera and artificial-intelligence software, enabling the car to drive the 65 kilometers, or 40 miles, between Jerusalem and Tel Aviv without his having to touch the steering wheel.
In 2004, Darpa held the first of a series of "Grand Challenges" intended to spark interest in developing self-driving cars. The contests led to significant technology advances, including "Traffic Jam Assist" for slow-speed highway driving; "Super Cruise" for automated freeway driving, already demonstrated by General Motors and others; and self-parking, a feature already available from a number of car manufacturers.
Recently General Motors and Nissan have said they will introduce completely autonomous cars by the end of the decade. In a blend of artificial-intelligence software and robotics, Mobileye, a small Israeli manufacturer of camera technology for automotive safety that was founded by Mr. Shashua, has made considerable progress. While Google and automotive manufacturers have used a variety of sensors including radars, cameras and lasers, fusing the data to provide a detailed map of the rapidly changing world surrounding a moving car, Mobileye researchers are attempting to match that accuracy with just video cameras and specialized software.
At a preschool near the University of California, San Diego, a child-size robot named Rubi plays with children. It listens to them, speaks to them and understands their facial expressions.
Rubi is an experimental project of Prof. Javier Movellan, a specialist in machine learning and robotics. Professor Movellan is one of a number of researchers now working on a class of computers that can interact with humans, including holding conversations.
Computers that understand our deepest emotions hold the promise of a world full of brilliant machines. They also raise the specter of an invasion of privacy on a scale not previously possible, as they move a step beyond recognizing human faces to the ability to watch the array of muscles in the face and decode the thousands of possible movements into an understanding of what people are thinking and feeling.
These developments are based on the work of the American psychologist Paul Ekman, who explored the relationship between human emotion and facial expression. His research found the existence of "micro expressions" that expose difficult-to-suppress authentic reactions. In San Diego, Professor Movellan has founded a company, Emotient, that is one of a handful of start-ups pursuing applications for the technology. A near-term use is in machines that can tell when people are laughing, crying or skeptical — a survey tool for film and television audiences.
Farther down the road, it is likely that applications will know exactly how people are reacting as the conversation progresses, a step well beyond Siri, Apple's voice recognition system.
Harry Potter fans, stand by.
"As the epicenter of scientific innovation, California must take bold and prompt action to capitalize on the short- and long-term benefits of the BRAIN Initiative," said Senate Majority Leader Ellen M. Corbett (D-East Bay) at a Senate Select Committee on Emerging Technology: Biotechnology and Green Energy Jobs public hearing held Friday at UC San Diego.
The event, "A Mindful Approach to the BRAIN Initiative," was convened by Corbett. It explored the state's role in accelerating the research, development and deployment of technologies to support the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, first unveiled by the Obama Administration in April 2013.
The research effort––in which UC San Diego, "Mesa" colleagues and private-public partners will play key roles––is designed to revolutionize understanding of how the brain works and uncover new ways to treat, prevent and cure brain disorders such as Alzheimer's, schizophrenia, autism, epilepsy and traumatic brain injury.
The discussions held Friday were attended by Chancellor Pradeep K. Khosla, who also was sitting in the front row at the White House when Obama made the BRAIN Initiative announcement on April 2.
"The president's initiative is charting the next frontier of science and UC San Diego is poised and ready to help our country lead the way," said Khosla. "Neuroscience, biology, and cognitive science are among the premier areas of strength on our campus, and we are really excited to be part of the effort to gain a deep understanding of human beings and how we behave."
In response to Obama's "grand challenge," UC San Diego established the Center for Brain Activity Mapping (CBAM) in May. The new center, headed by Ralph Greenspan, is under the aegis of the interdisciplinary Kavli Institute for Brain and Mind at UC San Diego. CBAM tackles the technological and biological challenge of developing a new generation of tools to enable recording of neuronal activity throughout the brain. It will also conduct brain-mapping experiments and analyze the collected data.
"This is another example of how California is leading the way, both in terms of understanding the human mind and how we can cure Alzheimer's, dementia and other diseases, and also in creating technologies, new innovations and jobs," said Khosla.
At the hearing, Corbett, who is chair of the Select Committee, said she intends to introduce legislation early next year that supports cutting-edge research like the BRAIN Initiative that can bring societal and economic benefits to California.
"Twenty-five years ago, the Human Genome Project led to the 'genomic revolution' and advanced some of the leading industries in our state," she said. "The BRAIN Initiative is the next logical step."
At the hearing, representatives from UC San Diego were joined by other academic and industry leaders in voicing strong support of the initiative. Those testifying included Greenspan, founding director of CBAM and associate director of the Kavli Institute for Brain and Mind at UC San Diego (KIBM); Terry Sejnowski of the Salk Institute for Biological Studies and UC San Diego and director of the campus's Institute for Neural Computation; and Ramesh Rao, director of the Qualcomm Institute, the UC San Diego division of Calit2.
"The last century we went to the moon to explore outer space; this century we're exploring inner space by studying the link between brain activity and behavior," said Sejnowski. "We need to find what it is that excites young people. We need to attract bright young minds the way President John F. Kennedy did … In 1969, the year we went to the moon, the average age of a NASA engineer was 27."
When asked by Corbett whether the state of California was doing enough to support the education needed by the BRAIN Initiative, Greenspan answered that more science should be integrated into the general education curriculum. "It is important to build STEM programs and make them accessible to students … Students need to see themselves as future scientists."
Corbett concluded the conversation by saying that she thought the discussions helped dispel the notion that people come to California only for the weather. "You come here for the education and innovation," she said.
Welcome to The Society for Music Perception and Cognition (SMPC), a scholarly organization dedicated to the study of music cognition. Use our website to learn about this rapidly growing field, including information on researchers, conferences, and student opportunities. Join us as we explore one of the most fascinating aspects of being human.
Michael J. Fox will return full time to network television September 26th, starring in an NBC sitcom that will give many viewers their first long look at Parkinson's disease, a neurodegenerative disorder that can affect a person's speech, movement and balance.
Fox, 52, was diagnosed with Parkinson's about 20 years ago. He's since become a leading advocate for expanding research on a disease that afflicts 1 million Americans — including singer Linda Ronstadt, who recently announced that she can no longer sing because of the illness.
UC San Diego has a large Parkinson's research program, part of it led by Howard Poizner, a neuroscientist who uses electroencephalography, or EEG, to measure the brain's electrical activity along the scalp.
It can be an effective tool for studying how the brain controls movement, work that's critical to finding better ways to diagnose and treat Parkinson's.
Poizner recently discussed his work with U-T San Diego, and here is an edited version of that conversation:
Q: Astronomers have telescopes that can peer billions of years back in time. Why can't scientists see the short distance through the skull into areas where Parkinson's slowly unfolds?
A: We can see into the skull, but we're looking at very weak signals. The signals that we see at the scalp are roughly a million times weaker than an AA battery. And the skull distorts the picture, making it harder for us to see what's going on. We're dealing with complexity, too. There are places in the cortex where you'll find 100,000 neurons making more than 1 trillion connections in an area about the size of the head of a pin. Signaling can occur hundreds of times a second across a vast communications network.
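As a back-of-envelope check on that battery comparison (assuming the nominal 1.5 V of an AA cell, a figure not stated in the interview itself):

```python
aa_battery_volts = 1.5     # nominal AA cell voltage (assumed)
attenuation = 1_000_000    # "roughly a million times weaker"

scalp_signal_volts = aa_battery_volts / attenuation
print(f"{scalp_signal_volts * 1e6:.1f} microvolts")  # 1.5 microvolts
```

That lands in the microvolt range, consistent with scalp EEG, which is typically measured in tens of microvolts — hence the need for careful amplification and noise rejection before the signal can be interpreted.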
Q: It sounds like science has a small number of tools that give a crude look at what's happening inside the most complicated network known to humans. Is that the case?
A: There are tools that allow you to precisely examine what's happening inside the brains of animals. But things are very limited when it comes to looking at the living human brain, and we want to be noninvasive. Yet progress is being made across many areas, from the way we use EEG devices to improvements that are coming with magnetic resonance imaging (MRI). They'll give us a clearer look at brain activity, which we need. We've got to be able to separate abnormalities that are specific to Parkinson's from those that come with related diseases.
Q: You personally make heavy use of EEGs to record brain activity. Why?
A: The EEG can be used in conjunction with other tools to help us get a better real-time look at Parkinson's, which is a chronic, progressive movement disorder. The disease can make it difficult for people to stand up, or to move their feet, or make smooth movements. And it can impair their ability to take corrective action, like grabbing a bottle of water that's been knocked over. To look at this, we put motion sensors on a patient's hands, arms, legs and other parts of their body, which helps us analyze the fine details of their movements. At the same time, we're using the EEG headset to map what's happening in their brain. We then can relate their ongoing brain activity to moment-by-moment changes in their movements. Hopefully, we'll be able to extract signatures of brain activity that are characteristic of Parkinson's in its early stages. We really need that. There's no blood test for this disease.
Q: On his new show, Michael J. Fox moves around a lot — and he moves quickly. You're keying in more on movement. Is that a change in how researchers work?
A: You raise a very important point. Traditionally, monitoring brain activity has required people to be sitting still or lying down with their heads restrained. This has been done to prevent muscle activity from interfering with the brain recordings. But that's been changing due to advances in hardware and advances in the way EEG signals are analyzed. And we can record these signals wirelessly. We can now record people's brain activity while they're actually moving around or reacting to things that they see while wearing a virtual reality headset. This is critically important; it allows us to study how the brain acts in real-world situations.
Q: What kinds of things are you looking at?
A: I do basic research into how the brain controls movement, which is closely tied to Parkinson's. The disease affects the circuitry of the brain. It alters the timing and sequence and rhythm of the brain's communication networks. These circuits can get locked in particular rhythms, which makes it harder for people to react to stimuli.
Q: Fox is a beloved figure who will be making light of his own struggles with Parkinson's in his sitcom. Is this show likely to teach people a lot about a disease that few understand, or will viewers turn away because it is hard to watch a person with a movement disorder?
A: My gut says that people will watch the show. They relate to Michael J. Fox. They know him, in a sense. He's engaging, funny. I think they'll react to the fact that he's finding happiness and hope while living with a devastating disease. He's bringing this message into their homes through TV, and they need to hear that there's hope. We're seeing others — people like Linda Ronstadt — living with Parkinson's.
We don't yet know the precise causes of this disease. But there are a lot of excellent therapies that can help a lot of people, and these are exciting times in the world of research. This is a disease that will be cured. It won't happen, say, in the next five years. But it's going to happen.
Poizner's research is funded by the National Institutes of Health, the Office of Naval Research and the National Science Foundation.
University of California, San Diego bioengineering professor Gert Cauwenberghs has been selected by the National Science Foundation to take part in a five-year, multi-institutional, $10 million research project to develop a computer vision system that will approach or exceed the capabilities and efficiencies of human vision. The Visual Cortex on Silicon project, funded through NSF's Expeditions in Computing program, aims to create computers that not only record images but also understand visual content and situational context in the way humans do, at up to a thousand times the efficiency of current technologies, according to an NSF announcement.
Smart machine vision systems that understand and interact with their environments could have a profound impact on society, including aids for visually impaired persons, driver assistance capabilities for reducing automotive accidents, and augmented reality systems for enhanced shopping, travel, and safety.
For their part in the effort, Cauwenberghs, a professor in the Department of Bioengineering at the UC San Diego Jacobs School of Engineering, and his team are developing computer chips that emulate how the brain processes visual information. "The brain is the gold standard for computing," said Cauwenberghs, adding that computers work completely differently from the brain, acting as passive processors that solve problems with sequential logic. The human brain, by comparison, processes information by sorting through complex input from the world and extracting knowledge without explicit direction.
While several computer vision systems today can each successfully perform one or a few human tasks, such as detecting human faces in point-and-shoot cameras, they are still limited in their ability to perform a wide range of visual tasks, to operate in complex, cluttered environments, and to provide reasoning for their decisions. In contrast, the visual cortex in mammals excels in a broad variety of goal-oriented cognitive tasks, and is at least three orders of magnitude more energy efficient than customized state-of-the-art machine vision systems.
Cauwenberghs said the Visual Cortex on Silicon project offers a unique collaborative opportunity with experts across the globe in neuroscience, computer science, nanoengineering and physics.
The project has other far-reaching implications for neuroscience research. By developing chips that can function more like the human brain, Cauwenberghs believes researchers can achieve a number of significant breakthroughs in our understanding of brain function from the work of single neurons all the way up to a more holistic view of the brain as a system. For example, building chips that model different aspects of brain function, such as how the brain processes visual information, gives researchers a more robust tool to understand where problems arise that contribute to disease or neurological disorders.
The Expeditions in Computing program, which started in 2008, represents NSF's largest single investment in computer science research. As of today, 16 awards have been made through this program, addressing subjects ranging from foundational research in computing hardware, software and verification to research in sustainable energy, health information technology, robotics, mobile computing, and Big Data.
The brain is the most complex device in the known universe. With 100 billion neurons connected by a quadrillion synapses, it's like the world's most powerful supercomputer on steroids. To top it all off, it runs on only 20 watts of power… about as much as the light in your refrigerator.
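A quick back-of-envelope calculation, using only the round numbers quoted above, shows just how striking those figures are:

```python
# Figures quoted above (rounded, order-of-magnitude estimates)
neurons = 100e9      # 100 billion neurons
synapses = 1e15      # a quadrillion synapses
power_watts = 20.0   # total power budget, about a refrigerator bulb

synapses_per_neuron = synapses / neurons   # average connectivity per cell
watts_per_neuron = power_watts / neurons   # power budget per cell

print(f"{synapses_per_neuron:,.0f} synapses per neuron")
print(f"{watts_per_neuron:.1e} watts per neuron")  # fractions of a nanowatt
```

Each neuron, on average, maintains on the order of ten thousand connections while running on a fraction of a nanowatt, which is the efficiency gap that neuromorphic hardware projects are trying to close.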
These were a few of the introductory ideas discussed by Terrence Sejnowski, director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies and co-director of the Institute for Neural Computation at UC San Diego. Sejnowski is also an investigator with the Howard Hughes Medical Institute and a member of the advisory committee to the director of the National Institutes of Health (NIH) for the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, which was launched in April 2013.
"I was in the White House when the program was announced," Sejnowski recalled. "It was very exciting. The President was telling me that my life's work was going to be a national priority over the next 15 years."
At that event, the NIH, the National Science Foundation, and the Defense Advanced Research Projects Agency announced their commitment to dedicate about $110 million for the first year to develop innovative tools and techniques that will advance brain studies, which will ramp up as the Initiative gains ground.
In a recent talk in San Diego at the XSEDE13 conference — the annual meeting of researchers, staff and industry who use and support the U.S. cyberinfrastructure — Sejnowski described the rapid progress that neuroscience has made over the last decade and the challenges ahead. High-performance computing, visualization and data management and analysis will play critical roles in the next phase of the neuroscientific revolution, he said.
A deeper understanding of the brain would advance our grasp of the processes that underlie mental function. Ultimately it may also help doctors comprehend and diagnose mental illness and degenerative diseases of the brain and possibly even intervene to prevent these diseases in the future.
"Not only can we understand what happens when the brain is functioning normally, maybe we can understand what's happening when it's not functioning right, as in mental disorders," he said.
Currently, this dream is a long way off. Brain activity occurs at all scales from the atomic to the macroscopic level, and each behavior contributes to the working of the brain. Sejnowski explained the challenge of understanding even a single aspect of the brain by showing a series of visualizations that illustrated just how interwoven and complex the various components of the brain are.
One video examined how the axons, dendrites and other components fit together in a small piece of the brain, called the neuropil. He likened the structure to "spaghetti architecture." A second video showed what looked like fireworks flashing across many regions of the brain and represented the complex choreography by which electrical signals travel in the brain.
Despite the rapid rate of innovation, the field is still years away from obtaining a full picture of a mouse's or even a worm's brain. It would require an accelerated rate of growth to reach the targets that neuroscientists have set for themselves. For that reason, the BRAIN Initiative is focusing on new technologies and tools that could have a transformative impact on the field.
"If we could record data from every neuron in a circuit responsible for a behavior, we could understand the algorithms that the brain uses," Sejnowski said. "That could help us right now."
Larger, more comprehensive and capable supercomputers, as well as compatible tools and technologies, are needed to deal with the increasing complexity of the numerical models and the unwieldy datasets gleaned by fMRI or other imaging modalities. Other tools and techniques that Sejnowski believes will be required include industrial-scale electron microscopy; improvements in optogenetics; image segmentation via machine learning; developments in computational geometry; and crowd sourcing to overcome the "Big Data" bottleneck.
"Terry's talk was very inspiring for the XSEDE13 attendees and the entire XSEDE community," said Amit Majumdar, technical program chair of XSEDE13. Majumdar directs the scientific computing application group at the San Diego Supercomputer Center (SDSC) and is affiliated with the Department of Radiation Medicine and Applied Sciences at UC San Diego. "With XSEDE being the leader in research cyberinfrastructure, it was great to hear that tools and technologies to access supercomputers and data resources are a big part of the BRAIN Initiative."
For his part, over the past decade Sejnowski led a team of researchers to create two software environments for brain simulations, called MCell (or Monte Carlo Cell) and Cellblender. MCell combines spatially realistic 3D models of the geometry of the brain (as determined by brain scans and computational analysis), and simulates the movements and reactions of molecules within and between brain cells—for instance, by populating the brain's 3D geometry with active ion channels, which are responsible for the chemical behavior of the brain. Cellblender visualizes the output of MCell to help computational biologists better understand their results.
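The Monte Carlo approach MCell takes can be illustrated with a toy sketch (this is not MCell's actual API or model format, just an illustration of the idea: molecules take random Brownian steps through a bounded 3D geometry, and reactions fire probabilistically):

```python
import random

def monte_carlo_reaction(n_molecules=1000, n_steps=100,
                         step_sigma=0.01, bind_prob=0.002, seed=1):
    """Toy Monte Carlo reaction-diffusion: molecules random-walk inside a
    unit cube; at every step each free molecule binds (reacts and is
    removed) with a small fixed probability."""
    rng = random.Random(seed)
    free = [[rng.random(), rng.random(), rng.random()]
            for _ in range(n_molecules)]
    bound = 0
    for _ in range(n_steps):
        still_free = []
        for pos in free:
            # Brownian displacement along each axis, clamped at the walls
            for ax in range(3):
                pos[ax] = min(max(pos[ax] + rng.gauss(0.0, step_sigma),
                                  0.0), 1.0)
            if rng.random() < bind_prob:
                bound += 1          # molecule reacts; remove it from play
            else:
                still_free.append(pos)
        free = still_free
    return bound, free

bound, free = monte_carlo_reaction()
print(f"{bound} of 1000 molecules bound after 100 steps")
```

MCell does this at vastly larger scale, with realistic cell geometry reconstructed from imaging data and reaction probabilities derived from measured rate constants; Cellblender then renders the resulting trajectories so researchers can inspect them visually.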
Researchers at the Pittsburgh Supercomputing Center, the University of Pittsburgh, and the Salk Institute developed these software packages collaboratively with support from the National Institutes of Health, the Howard Hughes Medical Institute, and the National Science Foundation. The open-source software runs on several of the XSEDE-allocated supercomputers and has generated hundreds of publications.
MCell and Cellblender are a step in the right direction, but they will be stretched to their limits when dealing with massive datasets from new and emerging imaging tools. "We need better algorithms and more computer systems to explore the data and to model it," Sejnowski said. "This is where the insights will come from — not from the sheer bulk of data, but from what the data is telling us."
Supercomputers alone will not be enough either, he said. An ambitious, long-term project of this magnitude requires a small army of students and young professionals to make progress.
Sejnowski likened the announcement of the BRAIN Initiative to the famous speech in which John F. Kennedy vowed to send an American to the moon. When Neil Armstrong landed on the moon eight years later, the average age of the NASA engineers who sent him there was 26. Encouraged by JFK's passion for space travel and galvanized by competition from the Soviet Union, talented young scientists joined NASA in droves. Sejnowski hopes the same will be true for the neuroscience and computational science fields.
"This is an idea whose time has come," he said. "The tools and techniques are maturing at just the right time and all we need is to be given enough resources so we can scale up our research."
The annual XSEDE conference, organized by the National Science Foundation's Extreme Science and Engineering Discovery Environment (xsede.org) with the support of corporate and non-profit sponsors, brings together the extended community of individuals interested in advancing research cyberinfrastructure and integrated digital services for the benefit of science and society. XSEDE13 was held July 22-25 in San Diego; XSEDE14 will be held July 13-18 in Atlanta. For more information, visit https://conferences.xsede.org/xsede14
SCCN researchers receive the "Best Paper Award" at the International Neurotechnology Consortium Workshop at the 2013 International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'13).
Tim Mullen (UC San Diego), for "Real-Time Estimation and 3D Visualization of Source Dynamics and Connectivity Using Wearable EEG"
Responding to President Barack Obama's "grand challenge" to chart the function of the human brain in unprecedented detail, the University of California, San Diego has established the Center for Brain Activity Mapping (CBAM). The new center, under the aegis of the interdisciplinary Kavli Institute for Brain and Mind at UC San Diego, will tackle the technological and biological challenge of developing a new generation of tools to enable recording of neuronal activity throughout the brain. It will also conduct brain-mapping experiments and analyze the collected data.
Ralph Greenspan, one of the original architects of a visionary proposal that eventually led to the national BRAIN Initiative launched by President Obama in April, has been named CBAM's founding director.
UC San Diego Chancellor Pradeep K. Khosla, who attended Obama's unveiling of the BRAIN Initiative, said: "I am pleased to announce the launch of the Center for Brain Activity Mapping. This new center will require the type of in-depth and impactful research that we are so good at producing at UC San Diego. We have strengths here on our campus and the Torrey Pines Mesa, both in breadth of talent and in the scientific openness to collaborate across disciplines, that few others can offer the project."
Greenspan, who also serves as associate director of the Kavli Institute for Brain and Mind at UC San Diego, said CBAM will focus on developing new technologies necessary for global brain-mapping at the resolution level of single cells and the timescale of a millisecond, participate in brain mapping experiments, and develop the necessary support mechanisms for handling and analyzing the enormous datasets that such efforts will produce.
Brain-mapping discoveries made by CBAM may shed light on such brain disorders as autism, traumatic brain injury and Alzheimer's, and could potentially point the way to new treatments, Greenspan said. The technologies developed and advances in understanding brain networks will also likely have industrial applications outside of medicine, he said.
The new center will bring together researchers from neuroscience (including cognitive science, psychology, neurology and psychiatry), engineering, nanoscience, radiology, chemistry, physics, computer science and mathematics.
"An essential component of the center will be its close relationships with other San Diego research institutions and with industrial partners in the region's hi-tech and biotech clusters," said Nick Spitzer, distinguished professor of neurobiology and director of the Kavli Institute for Brain and Mind at UC San Diego.
Beyond bringing researchers together, the center will seek the resources to support specific projects. Some of these projects will likely build on existing research at UC San Diego while others will be brand new, growing out of the novel collaborations that CBAM will encourage and nurture.
The center aims to compete for national grant funds but will also seek to pursue projects with the help of philanthropists and industry partners.
Administratively, CBAM will be part of the interdisciplinary Kavli Institute for Brain and Mind. The Qualcomm Institute at UC San Diego (formerly known as Calit2) will support CBAM with some initial space for collaborative projects.
Greenspan will soon assemble a director's council, to help guide the center's scientific program, and an advisory board, to assist on general strategy and fundraising.
Greenspan authored the proposal for CBAM with Spitzer and Terry Sejnowski, director of UC San Diego's Institute for Neural Computation, who holds joint appointments with UC San Diego and The Salk Institute.
The trio identified the center's immediate goal as preparing CBAM to compete effectively for federal BRAIN initiative funding. Activities will include, for example, topic-oriented meetings and workshops to identify potential project areas.
Medium-term goals include providing seed-grant support for specific projects, building strong ties among scientists from the different relevant disciplines, and creating an outreach program. The center will also seek dedicated space on campus.
In the long term, CBAM hopes to create an endowment for stable support of the most promising projects and to facilitate the formation of new start-up companies.
"We have the capability and the atmosphere here to make some major advances on the BRAIN Initiative," Greenspan said. "We are among the best-positioned places anywhere to make a significant contribution to the president's challenge.
"We invite members of the scientific and philanthropic communities – here in San Diego and further afield," he said, "to join with us on this vital quest."
A UC San Diego study of the impact of music training on the brain and behavioral development in children has been awarded a grant of nearly $20,000 by the Grammy Foundation.
"SIMPHONY," the grant award says, "is a unique collaboration designed to understand how music training affects children's brains and the general cognitive skills like language and attention. It is the first study of its kind, and will track 60 children annually starting at ages 5-10 as they engage in ensemble music training using an extensive battery of neural and behavioral testing."
SIMPHONY, short for Studying the Impact Music Practice Has On Neurodevelopment in Youth, is a collaboration among researchers at UC San Diego's Center for Human Development and the Institute for Neural Computation.
"We're grateful for the Grammy Foundation's support of this important research," said John Iversen, who directs the five-year longitudinal study in close collaboration with Terry Jernigan, director of the Center for Human Development.
"Our partnership with the San Diego Youth Symphony's Community Opus program, which brings intensive music training to elementary schools in Chula Vista, will give us valuable insights into the effect of musical training on developing brains, as well as better understanding of the links between brain structure and behavior."
The study includes not only a non-music-learning control group, said Iversen, but also another control group of students studying martial arts, which can help pinpoint any benefits due to musical training as opposed to more general enrichment activities.
The Grammy Foundation award of nearly $20,000 will enable the project to retain an older cohort of music students within the study, and will support their testing for two additional years.
Jernigan, the project's lead neuro-imaging researcher, is a leader in the field of child neuro-development; Iversen and co-investigator Aniruddh Patel (now at Tufts University) are noted for their research into music cognition and the cognitive relationship between music and language.
The Grammy Foundation Grant Program, funded by the Recording Academy, provides annual funding to organizations and individuals to advance the archiving and preservation of the recorded sound heritage of the Americas, as well as research projects related to the impact of music on the human condition.
Organized Research Units – including the Institute for Neural Computation and the Center for Human Development – make up a significant portion of the university's billion-dollar research enterprise, and contribute to UC San Diego's pioneering interdisciplinary and multidisciplinary leadership.
LA JOLLA, CA—Salk neuroscientist Terrence J. Sejnowski joined President Barack Obama in Washington, D.C., on April 2, 2013, at the launch of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative—a major Administration neuroscience effort that advances and builds upon collaborative scientific work by leading brain researchers such as Salk's own Sejnowski.
"We have the chance to improve the lives of not just millions, but billions of people on this planet," said the president. "It will require us to embrace the spirit of discovery that made America—America."
Terrence J. Sejnowski
Professor and Head of Computational Neurobiology Laboratory, Howard Hughes Medical Institute Investigator, Francis Crick Chair
Courtesy of the Salk Institute for Biological Studies
In his introductory remarks, National Institutes of Health Director Francis Collins dubbed Obama the "Scientist in Chief" and said, "Asking the people in this room to delay innovation would be like asking the cherry trees to stop blooming."
Obama compared the BRAIN Initiative to the Human Genome Project, which mapped the entire human genome and ushered in a new era of genetics-based medicine. "Every dollar spent on the human genome has returned $140 to our economy," the president said. Instead of charting genes, BRAIN will help visualize the brain activity directly involved in such vital functions as seeing, hearing and storing memories, a crucial step in understanding how to treat diseases and injuries of the nervous system.
The BRAIN Initiative is launching with approximately $100 million in funding for research supported by the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency (DARPA), and the National Science Foundation (NSF) in the President's Fiscal Year 2014 budget.
Foundations and private research institutions are also investing in the neuroscience that will advance the BRAIN Initiative. Along with the Salk Institute, they include The Allen Institute for Brain Science, the Kavli Foundation, and The Howard Hughes Medical Institute.
"This initiative is a boost for the brain like the Human Genome Project was for the genes," says Sejnowski, the Francis Crick Chair and head of the Computational Neurobiology Laboratory at Salk. "This is the start of the million neuron march."
The BRAIN initiative and its focus on leveraging emerging technologies dovetails with the Salk Institute's Dynamic Brain Initiative, a neuroscience initiative focused on providing a better understanding of the brain, spinal cord and peripheral nervous system. The Salk Institute itself is home to several pioneering tool builders, among them Edward M. Callaway, already famous among systems neuroscientists for using a modified rabies virus to trace neuronal connections in the visual system.
"Scientists have known since the time of Galileo that new tools can open up whole new lines of research," says Callaway, holder of the Audrey Geisel Chair in Biomedical Science. "But for us, tools aren't just mechanical instruments, they can be viruses, genes, chemical dyes, or even photons."
Tools are also mathematical, explains Sejnowski. "When you are trying to understand the electrical and chemical interactions of millions of brain cells, you are looking at a multi-dimensional problem, which can only be solved by computational modeling," he says. "My lab has as many mathematicians and physicists and engineers as it has biologists."
Summing up his excitement over the promise of BRAIN, Sejnowski says, "Imagine how it must have felt to be a rocket engineer when Kennedy said we would reach for the moon. You know there's an almost unimaginable amount of hard work ahead of you—and yet you can't wait to get started."
The initiative builds on discussions between a group of leading neuroscientists and nanotechnologists from around the country, including Sejnowski. The scientists published an article on the topic in the March 15 issue of Science, in which they noted that the Human Genome Project yielded $800 billion in economic impact from a $3.8 billion investment—and that a similar neuroscience initiative could expect to produce similar returns.
President Obama emphasized the impact of the genome-mapping project in his February 2013 State of the Union address and the importance of neuroscience for addressing human diseases. "Today, our scientists are mapping the human brain to unlock the answers to Alzheimer's," he said. "Now is the time to reach a level of research and development not seen since the height of the Space Race."
Sejnowski says BRAIN could ultimately help reduce the overwhelming costs for treatment and long-term care of brain-related disorders, which PricewaterhouseCoopers estimated at $515 billion for the United States alone in 2012.
"Many of the most devastating human brain disorders, such as depression and schizophrenia, only seem to emerge when large-scale assemblies of neurons are involved," says Sejnowski. "Other terrible conditions, such as blindness and paralysis, result from disruptions in circuit connections. The more precise our information about specific circuits, the more we will understand what went wrong, where it went wrong, and how to target therapies."
Computational neuroscience, a field Sejnowski helped establish, will be a central avenue of research advanced under the new Initiative. One of only ten living individuals to have been elected to three branches of the National Academies—National Academy of Sciences, National Academy of Engineering and Institute of Medicine—Sejnowski co-authored 23 Problems in Systems Neuroscience, a foundational book that lays out many of the questions BRAIN is aiming to answer.
Computational neuroscience focuses on understanding how a circuit of hundreds to thousands of brain cells, which includes neurons, as well as associated cells, such as astrocytes, allows us to do something as simple as reaching out a hand or as complex as processing rich visual information. The only way to fully understand systems, such as olfaction or vision, is to map and probe the entire circuit, which is exactly what the BRAIN proposes to do.
"We're not jumping in and mapping the entire active human brain," says Sejnowski, "but we are at a point where we can develop the tools to map entire circuits, first in invertebrates and eventually in mammals."
In fact, part of the reason that the neuroscience field is now gaining momentum is that advances in engineering and physics are allowing scientists to develop incredibly tiny tools to explore the molecular world of living cells. It is no accident, says Sejnowski, that the Science paper included a cadre of nanotechnology pioneers as coauthors. "It's like wishing for a faster car, and finding out that engineers from Bugatti and Lotus are offering to help," Sejnowski says of the cross-disciplinary collaboration.
New tools that will be developed under BRAIN will push the cutting-edge even further, enabling scientists to look at the brain with better spatial and temporal resolution, as well as analyze the millions of bits of accumulated data.
About the Salk Institute for Biological Studies:
The Salk Institute for Biological Studies is one of the world's preeminent basic research institutions, where internationally renowned faculty probe fundamental life science questions in a unique, collaborative, and creative environment. Focused both on discovery and on mentoring future generations of researchers, Salk scientists make groundbreaking contributions to our understanding of cancer, aging, Alzheimer's, diabetes and infectious diseases by studying neuroscience, genetics, cell and plant biology, and related disciplines.
Faculty achievements have been recognized with numerous honors, including Nobel Prizes and memberships in the National Academy of Sciences. Founded in 1960 by polio vaccine pioneer Jonas Salk, M.D., the Institute is an independent nonprofit organization and architectural landmark.
President Obama announces BRAIN Initiative in which UC San Diego, 'Mesa' colleagues and private-public partners will play key roles
President Barack Obama is introduced by Dr. Francis Collins, Director, National Institutes of Health, at the BRAIN Initiative event in the East Room of the White House, April 2, 2013. (Official White House Photo by Chuck Kennedy)

The President of the United States gathered together on April 2 "some of the smartest people in the country, some of the most imaginative and effective researchers in the country," he said, to hear him announce a broad and collaborative research initiative designed to revolutionize our understanding of the brain.
The BRAIN Initiative, short for Brain Research through Advancing Innovative Neurotechnologies, is launching with approximately $100 million in proposed funding in the president's Fiscal Year 2014 budget. It aims to advance the science and technologies needed to map and decipher brain activity.
Sitting in the front row for the announcement were three University of California chancellors, including UC San Diego's Pradeep K. Khosla.
Chancellor Khosla was accompanied at the White House by Ralph Greenspan, associate director of the Kavli Institute for Brain and Mind at UC San Diego (KIBM); Terry Sejnowski of the Salk Institute for Biological Studies and UC San Diego, director of the campus's Institute for Neural Computation; KIBM Director Nick Spitzer, distinguished professor of neurobiology in the Division of Biological Sciences; and Dr. Dilip V. Jeste, Estelle and Edgar Levi Chair in Aging, distinguished professor of psychiatry and neurosciences at UC San Diego School of Medicine, and director of the Stein Institute.
"As humans, we can identify galaxies light years away, we can study particles smaller than an atom," President Barack Obama said. "But we still haven't unlocked the mystery of the three pounds of matter that sits between our ears."
The human brain, Obama pointed out, has some 100 billion neurons and trillions of connections between them. Right now, he said, borrowing a musical metaphor from National Institutes of Health Director Francis Collins, we can make out just the string section of the orchestra. The BRAIN Initiative aims to make it possible to hear the entire symphony.
Speaking from D.C., Chancellor Khosla said he was struck by the levels of energy and enthusiasm during the president's speech and in the community afterwards.
"The president's initiative is charting the next frontier of science," Khosla said, "and UC San Diego is poised and ready to help our country lead the way. Neuroscience, biology, and cognitive science are among the premier areas of strength on our campus, and we are really excited to be part of the effort to gain a deep understanding of human beings and how we behave.
"We anticipate our scientists will continue to play key roles in this great endeavor," Khosla said. "Researchers from UC San Diego—in collaboration with colleagues at Salk and others on the Torrey Pines Mesa—will be involved in almost all areas of the BRAIN initiative, from those in the sciences and engineering who will help to draw the brain-activity map to those in social sciences who will help to read the map, figuring out how brain activity translates into cognition."
In addition to research funding support from three federal agencies—the National Institutes of Health, the Defense Advanced Research Projects Agency, and the National Science Foundation—the BRAIN Initiative is also supported by financial commitments from the private sector. These include longtime UC San Diego partners the Salk Institute and the Kavli Foundation.
The Kavli Foundation and the Kavli Institute for Brain and Mind, said KIBM's Greenspan, played important roles in sparking the BRAIN Initiative.
The audacious idea of a comprehensive brain activity map was first discussed at a seminal meeting in September 2011 of 13 neuroscientists and 14 nanoscientists at the Kavli Royal Society International Centre outside of London. Greenspan was one of the leaders to flesh out the idea and draft a white paper. He also credits in particular Miyoung Chun, the Kavli Foundation's vice president of science programs. She was key, he said, first in connecting up with the White House Office of Science and Technology Policy and then keeping "us all on track in developing and expanding the idea over the next year and a half."
He is, along with Chun and colleagues from Berkeley, Caltech, Harvard and Columbia, one of the original six architects of the catalytic proposal published in Neuron in June 2012.
Also speaking from D.C., right after attending the White House announcement, Greenspan, who recalls "falling off his chair" when he heard the president reference brain mapping in his 2013 State of the Union Address, and falling off it again when the New York Times' John Markoff broke the story in February, far ahead of the official announcement, said: "I still think it's unbelievable it's come to pass. It's a miracle of the right idea falling on fertile ground at the right time.
"We're at a crossroads in the history of neuroscience and the history of nanoscience," Greenspan said. "We're at a stage now where a marriage of the two can create the synergy we've dreamed about but so far hasn't been possible."
Jeff Elman, dean of the Division of Social Sciences at UC San Diego, echoed Greenspan's sentiments. Elman is a co-founder of UC San Diego's department of cognitive science, the first of its kind in the world, and co-director emeritus of the KIBM, launched at UC San Diego in 2004 to support interdisciplinary research ranging from "the brain's physical and biochemical machinery to the experiences and behaviors called the mind."
"Ten years ago, comprehensively mapping human brain activity would have been fanciful. The technology required would have seemed like science fiction," Elman said. "Today, the technology and the goal appear to be within our grasp."
Khosla, Greenspan and Elman all agreed with President Obama's comment during his speech at the White House that this is just a beginning—a very exciting beginning to an effort that will yield many positive consequences.
In his speech, Obama stressed the importance of ideas and innovation to the U.S. economy and reminded the nation that it is critical to invest in basic research. Understanding the brain's complex circuits of neurons, and the behaviors to which these give rise, he said, will eventually lead to treatments for brain disorders, such as Alzheimer's or autism, but it will also result in applications we can't even imagine yet.
He compared BRAIN to the Human Genome Project, which enabled scientists to map the entire human genome and helped create not only jobs but also a whole new era of genomic medicine. And he characterized it as one of his administration's "Grand Challenges for the 21st Century," ambitious but achievable goals like "making solar energy as cheap as coal or making electric vehicles as affordable as the ones that run on gas."
Obama said, "We have a chance to improve the lives of not just millions, but billions of people on this planet through the research that's done in this BRAIN initiative alone. But it's going to require a serious effort, a sustained effort. And it's going to require us as a country to embody and embrace that spirit of discovery that made America – America."
"Let's get to work," he said in closing.
Could learning music help children with attention disorders? New research suggests playing a musical instrument improves the ability to focus attention.
To the musical ear, life has a rhythm comparable to grand opera or simple folk tunes. Our ability to understand that rhythm and synchronise with each other is at the core of every human interaction.
That's why researchers in San Diego believe that learning to play musical instruments can help us focus attention and improve our ability to interact with the world around us.
For more than a year, children at the city's Museum School have been taking part in an experiment involving Gamelan, a percussion style of ensemble music from Indonesia that emphasizes synchronicity.
Sensors attached to the instruments monitor the children's ability to hit the beat precisely. The data is analyzed and a mathematical algorithm is used to determine a base measurement of their accuracy. That measurement is then compared to the results of behavioural and cognitive tests, and assessments by teachers and parents.
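The article doesn't describe the project's actual algorithm, but a baseline accuracy measure of the kind it describes can be sketched as the mean absolute deviation of each recorded hit from the nearest metronome beat. The function name and numbers below are illustrative:

```python
# Sketch of a synchrony score: how precisely a player's hits land on the beat.
# The Gamelan Project's real algorithm isn't specified; this shows one simple
# baseline measurement of timing accuracy.

def synchrony_score(hit_times, beat_interval):
    """Mean absolute deviation (in seconds) of hit times from the nearest beat."""
    deviations = []
    for t in hit_times:
        nearest_beat = round(t / beat_interval) * beat_interval
        deviations.append(abs(t - nearest_beat))
    return sum(deviations) / len(deviations)

# A fairly steady player at 120 BPM (0.5 s between beats), slightly off each hit:
hits = [0.02, 0.49, 1.03, 1.48, 2.01]
score = synchrony_score(hits, beat_interval=0.5)
print(round(score, 3))  # small value = tight synchronization
```

Per-child scores like this could then be correlated against cognitive test results, as the researchers describe.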
"So far, we've found a correlation between their ability to synchronise and their performance on cognitive tests," says Alexander Khalil, head of the Gamelan Project, funded by the National Science Foundation.
"What this could mean, is that learning to time in a group setting with other people musically, could improve your ability to focus attention."
Khalil began the research after several years of noticing that children who lacked the ability to synchronize also struggled to pay attention during other activities. As their musical ability improved, so did their attention.
"It is possible that music practice could become a non-pharmacological intervention for problems such as ADHD (attention deficit-hyperactivity disorder). We haven't tested it yet but it's a possibility - and an exciting possibility," he says.
ADHD is a neurobehavioral disorder that affects one in 10 children in the US. They have problems paying attention, controlling impulsive behaviour and can be overly active. It can't be cured but the symptoms can be managed - often with medication.
It's thought music might help such children because our sense of timing affects so much of our behaviour.
"The ability to time, to synchronise with others underlies all face to face communication," says Khalil. "People imagine that synchronizing is doing something simultaneously. But synchronizing actually means processing time together - perceiving time together in such a way that we have this common understanding of how time is passing."
Music offers many different layers and levels of time, from the milliseconds it takes to gauge a series of beats, to the minutes of a musical phrase or fragment and the hours of a full performance.
A study participant wears headgear that allows researchers to monitor his brain activity while he moves to music.
"By learning music, one of the things you learn is rhythm and how to be aware of the temporal dynamic of the world around you and how to keep your attention focused on all of these things while you do what you do."
The Gamelan Project is part of a growing body of research into the effects of music on the brain. New imaging technology is making it possible to discover how different areas of brain function are connected.
"Having these ways to look into the brain gives us a tool that we can then use to study the effects of music on the growth and development of the brain," says Professor John Iversen of the Institute of Neural Computation at the University of California San Diego.
He's heading the Symphony Project, one of the first longitudinal studies of its kind on the effects of musical training on brain development.
"There's always this nature/nurture question - are musicians' brains different because of music, or are the people with that kind of brain the ones that stuck with music because they're good at it?" says Iversen.
"To really understand whether it's music making these brain changes, you have to study someone as they begin to learn music and as they continue learning music. Then you can see how their brain develops and compare that with children not doing music."
It could be five years before any results of the study are known but scientists are already speculating that it could have far-reaching implications for musical training.
"What if we have some kids that are intensively studying music and we find that their brains grow at an accelerated rate?" says Iversen.
"The more you work out, the bigger your muscles get. The brain may work somewhat like that as well. The more you practice the stronger the circuits will become."
Paula Tallal is co-director of the Center for Molecular and Behavioral Neuroscience at Rutgers University. She spent her career studying how children use time to process speech. She says people who have had musical training have been shown to have superior processing skills.
"What we don't know is whether there is something common to musical training that is common to attention, sequencing (processing the order in which something occurred), memory and language skills," she says.
"We know from multiple studies that children who have musical training do better at school. We don't need further research to show that. What we're interested in from a scientific perceptive is why that occurs. What neural mechanisms are being driven by musical experience and how do they interact with other abilities."
She says this research is ever more critical because many schools facing budget shortfalls are cutting music programmes.
"We're creating an impoverishment that nobody understands the long term effects of," she warns.
Media Contact: Paul K. Mueller, 858.534.8564
This month, the Army is conducting a review of its basic research activities. Of all the activities across the Army Research Laboratory (ARL)'s directorates, the only success story being highlighted is the effort by Cognition and Neuroergonomics Collaborative Technology Alliance (CaN CTA). The following is an excerpt from the report:
"The past five years have seen an explosion in the research and development of systems that use online brain-signal measurement and processing to enhance human interactions with computing systems, their environments, and even other humans. These neurosciencebased systems, or "neurotechnologies," are poised to dramatically change the way users interact with technology."
"A team of researchers in the Army's CaN CTA recently published a special section, "Neurotechnological Systems: The Brain- Computer Interface," comprised of four manuscripts that appeared in the special 2012 Centennial Celebration issue of the Proceedings of the IEEE — the most highly-cited general interest journal in electrical engineering, electronics, and computer science."
"In this special section, researchers from UCSD, the National Chiao Tung University (Taiwan), and the Army Research Laboratory, collaborated closely to define a vision of the evolution of the measurement capabilities, the analytic approaches, and the potential user applications, for the future of neurotechnologies in the coming decades. The involvement of CaN CTA researchers in this special section gave the Army an opportunity to help shape the future of a critical technology development domain, and demonstrates the recognition of the Army as a leader in this emerging field."
Diego-san's hardware was developed by leading robot manufacturers: the head by Hanson Robotics, and the body by Japan's Kokoro Co. The project is led by University of California, San Diego full research scientist Javier Movellan.
Movellan directs the Institute for Neural Computation's Machine Perception Laboratory, based in the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2). The Diego-san project is also a joint collaboration with the Early Play and Development Laboratory of professor Dan Messinger at the University of Miami, and with professor Emo Todorov's Movement Control Laboratory at the University of Washington.
Movellan and his colleagues are developing the software that allows Diego-san to learn to control his body and to learn to interact with people.
"We've made good progress developing new algorithms for motor control, and they have been presented at robotics conferences, but generally on the motor-control side, we really appreciate the difficulties faced by the human brain when controlling the human body," said Movellan, reporting even more progress on the socialinteraction side. "We developed machine-learning methods to analyze face-to-face interaction between mothers and infants, to extract the underlying social controller used by infants, and to port it to Diego-san. We then analyzed the resulting interaction between Diego-san and adults." Full details and results of that research are being submitted for publication in a top scientific journal.
While photos and videos of the robot have been presented at scientific conferences in robotics and in infant development, the general public is getting a first peek at Diego-san's expressive face in action. On January 6, David Hanson (of Hanson Robotics) posted a new video on YouTube.
"This robotic baby boy was built with funding from the National Science Foundation and serves cognitive A.I. and human-robot interaction research," wrote Hanson. "With high definition cameras in the eyes, Diego San sees people, gestures, expressions, and uses A.I. modeled on human babies, to learn from people, the way that a baby hypothetically would. The facial expressions are important to establish a relationship, and communicate intuitively to people."
Diego-san is the next step in the development of "emotionally relevant" robotics, building on Hanson's previous work with the Machine Perception Lab, such as the emotionally responsive Albert Einstein head.
“[Diego-san] brings together researchers in developmental psychology, machine learning, neuroscience, computer vision and robotics.”
— Javier Movellan
The video of the oversized android infant was picked up by the popular online technology magazine, Gizmag, with a Jan. 7 article titled "UCSD's robot baby Diego-san appears on video for the first time," written by Jason Falconer.
In his article, Falconer writes that Diego-san is "actually much larger than a standard one year old – mainly because miniaturizing the parts would have been too costly. It stands about 4 feet 3 inches (130cm) tall and weighs 66 pounds (30kg), and its body has a total of 44 pneumatic joints. Its head alone contains about 27 moving parts."
The robot is a product of the "Developing Social Robots" project launched in 2008. As outlined in the proposal, the goal of the project was "to make progress on computational problems that elude the most sophisticated computers and Artificial Intelligence approaches, but that infants solve seamlessly during their first year of life."
For that reason, the robot's sensors and actuators were built to approximate the levels of complexity of human infants, including actuators to replicate dynamics similar to those of human muscles. The technology should allow Diego-san to learn and autonomously develop sensory-motor and communicative skills typical of one-year-old infants.
"Its main goal is to try and understand the development of sensory motor intelligence from a computational point of view," explained principal investigator Movellan in a 2010 Q&A with the Japan-based PlasticPals blog. "It brings together researchers in developmental psychology, machine learning, neuroscience, computer vision and robotics. Basically we are trying to understand the computational problems that a baby's brain faces when learning to move its own body and use it to interact with the physical and social worlds."
The researchers are interested in studying Diego-san's interaction with the physical world via reaching, grasping, etc., and with the social world through pointing, smiling and other gestures or facial expressions.
As outlined in the original proposal to the NSF, the project is "grounded in developmental research with human infants, using motion capture and computer vision technology to characterize the statistics of early physical and social interaction. An important goal is to foster the conceptual shifts needed to rigorously think, explore, and formalize intelligent architectures that learn and develop autonomously by interaction with the physical and social worlds."
According to UCSD's Movellan, the expression recognition technology his team developed for Diego-san has spawned a startup called Machine Perception Technologies (MPT). The company is currently looking for undergraduate interns and postgraduate programmers. "We like UCSD students because they tend to have a strong background in machine learning."
The project may also open new avenues to the computational study of infant development and potentially offer new clues for the understanding of developmental disorders such as autism and Williams syndrome.
As noted in the Gizmag article, Diego-san won't be the only child-like robot for long. This spring Swiss researchers will demonstrate their nearly 4-foot-tall Roboy robot toddler (with a face selected via a Facebook contest!).
The above story is reprinted from materials provided by UCSD News Center. The original article was written by Doug Ramsey.
By Kim McDonald | May 24, 2012
Schematic of cooperative brain centers interacting to produce functional neural behavior associated with learning and decision making.
An interdisciplinary team of scientists at UC San Diego composed of physicists, biologists, chemists, bioengineers and psychologists has received a five-year, $7 million grant from the U.S. Department of Defense to investigate the dynamic principles of collective brain activity.
The innovative research effort, which is being funded by the Office of Naval Research under the Defense Department's Multidisciplinary University Research Initiative, or MURI, will also involve scientists at UC Berkeley and the University of Chicago.
The team plans to conduct basic research on how collective activity in the brain learns, modulates and produces the coherent functional neural activity that coordinates the behavior of complex systems.
"This research will tie together theoretical ideas, hardware implementation of structural models and experimental investigations of human and animal behavior to develop a quantitative understanding and a predictive language for discussing complex physical and biological systems," said Henry Abarbanel, a physics professor at UC San Diego who is heading the collaboration.
The grant will pay for the costs of new laboratory facilities at UC San Diego and the University of Chicago, create powerful parallel computing capabilities for the three universities involved and employ 10 or more postdoctoral research fellows. Key UC San Diego researchers participating in the effort are Katja Lindenberg, professor of chemistry and biochemistry; Tim Gentner, associate professor of psychology; Gert Cauwenberghs, professor of bioengineering; Misha Rabinovich, research physicist in the BioCircuits Institute; and Terry Sejnowski, professor of biology.
This is the fourth MURI award led by Abarbanel. The first focused on theory and experiment in complex fluid flows and was funded by the Defense Advanced Research Projects Agency from 1988 to 1993. The second investigated chaotic communications strategies from 1998 to 2003 under sponsorship by the Army Research Office. The third developed advanced chemical sensing methodologies using animal olfactory dynamics and was funded by the Office of Naval Research from 2007 to 2012.
Kim McDonald, 858-534-7572, email@example.com
Joshua A. Chamot, NSF (703) 292-7730 firstname.lastname@example.org
Sohi Rastegar, NSF (703) 292-8305 email@example.com
Cecile J. Gonzalez, NSF (703) 292-8538 firstname.lastname@example.org
September 28, 2011
The National Science Foundation (NSF) Office of Emerging Frontiers in Research and Innovation (EFRI) has announced 14 grants for the 2011 fiscal year, awarding nearly $28 million to 60 investigators at 23 institutions.
During the next four years, teams of researchers will pursue transformative, fundamental research in two emerging areas: technologies that build on understanding of biological signaling and machines that can interact and cooperate with humans.
Results from this research promise to impact human health, the environment, energy, robotics and manufacturing.
Simulating the brain to improve motor control
The project "Distributed Brain Dynamics in Human Motor Control" (1137279) will be led by Gert Cauwenberghs, with colleagues Kenneth Kreutz-Delgado, Scott Makeig, Howard Poizner, and Terrence Sejnowski, all from the University of California at San Diego.
The researchers aim to create an innovative, non-invasive approach for rehabilitation of Parkinson's disease patients. In studies of both healthy individuals and those with the disease, the team will use new wireless sensors and a novel imaging method to monitor and record body and brain activity during real-world tasks. This data will be used to develop detailed, large-scale models of activity in the brain's basal ganglia-cortical networks, where Parkinson's disease takes its toll, with the help of newly developed brain-like hardware. Building on recent advances in control theory, the team will take into account both the perceptual and cognitive factors involved in complex, realistic movements. Ultimately, they will create a system that offers realistic sensory feedback to stimulate beneficial neurological changes.
Summaries of the eight EFRI projects on Engineering New Technologies Based on Multicellular and Inter-kingdom Signaling (MIKS) are found on the award announcement Web page.
Summaries of the six EFRI projects on Mind, Machines, and Motor Control (M3C) are found on the award announcement Web page.
Listening in on the brain
Last spring, Tzyy-Ping Jung was all over the news. MIT Tech Review, the Huffington Post and a dozen other outlets and blogs were buzzing about his new headband, capable of reading your thoughts and transferring them to a cell phone.
Imagine, a cell phone you could dial with your mind. One outlet called it "the end of dialing"; another said, "The bar for hands-free technology has officially been raised." Jung, however, just sighs and says they missed the point.
"It's a demonstration of a [brain interface] system that could be applied to daily life. It's not really the end goal," says Jung. "Who needs a phone that dials using brain waves if they can actually dial with their hands?"
Jung is associate director at the Swartz Center for Computational Neuroscience at UC San Diego, where researchers lead a new field called Brain Computer Interface, or BCI. The emerging area is littered with impressive toys and dazzling gadgets, like robots that move with a thought and artificial arms that respond at will, almost like real ones.
Tzyy-Ping Jung (left) and a group at the National Chiao Tung University in Taiwan have developed headgear and software that monitors brainwaves, collects data and transfers a thought process to a mobile device.
But while high-tech wizardry makes for fun headlines, UC scientists are poised to make a subtler yet fundamental change to the face of medicine. Using a technology somewhat overlooked for more than a decade, scientists are building a two-way conversation between your brain and the many computers that surround it every day.
Scott Makeig works with Jung as the director of the Swartz Center. For more than 20 years he has studied electroencephalogram (EEG) technology. EEGs, recognizable by their funny skullcaps dotted with electrode sensors, measure the electrical signals emitted by a subject's scalp from the brain beneath. While fast and relatively mobile, over the past decade EEG research has been eclipsed by giant fMRI machines, which use huge magnets to track blood movement within the brain. It's a slower, less direct measure of brain activity, but unlike EEG, which mainly focuses on the outer layers of the brain, it can pierce all the way through.
"EEG has dwindled to a low point in its use in medicine after MRI came out," Makeig says. "And it was more or less ignored in neurophysiology."
But hold your pity for poor EEG. In the meantime, scientists have been refining the bulky caps to the point where some take up less room than a pair of headphones. Jung has partnered with his alma mater, National Chiao Tung University in Taiwan, to develop headpieces that collect phenomenal amounts of data in a fraction of a second and broadcast it to a laptop or cell phone. Whereas previous EEG caps required gels to be smeared on a user's scalp, today's sleeker "dry" electrodes are so advanced that several companies have even created brain-operated children's toys.
But the skullcap is just half of the brain-sensing equation; you also need to know what all that data means.
"If someone records data from the scalp they immediately realize how messy it is," Jung says. "It's very noisy."
This is the so-called "cocktail party problem" — EEG brain recordings are like noisy gatherings, where dozens of conversations blend with background noises into a confusing slurry. Separating which signals are related to a given thought process is daunting.
In the mid-'90s, Makeig and Jung, together with Terry Sejnowski and Anthony Bell at the Salk Institute, pushed through this problem by teasing apart the EEG signals using a clever analysis borrowed from French theoreticians. Before long, they were able to pick out the signals of specific brain areas within the crowded, overlapping EEG recordings coming from working brains.
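The analysis referred to here is Independent Component Analysis (ICA). As a rough illustration of the idea, the sketch below unmixes two synthetic "conversations" using a minimal FastICA-style iteration (a later ICA variant, not necessarily the exact algorithm the group applied to EEG):

```python
import numpy as np

# Sketch of blind source separation in the spirit of ICA applied to EEG:
# two hidden sources are mixed into two "sensor" channels, then recovered
# without knowing the mixing. Synthetic signals stand in for real EEG.

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * 1.0 * t)             # smooth rhythmic source
s2 = np.sign(np.sin(2 * np.pi * 0.37 * t))   # square-wave source
S = np.c_[s1, s2]                            # true sources (unknown in practice)
A = np.array([[1.0, 0.6], [0.5, 1.0]])       # unknown mixing, like scalp sensors
X = S @ A.T                                  # observed channel mixtures

# Whiten the observations: zero mean, identity covariance.
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X, rowvar=False))
Xw = X @ E @ np.diag(d ** -0.5) @ E.T

# FastICA fixed-point iterations with a tanh nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(Xw @ W.T)
    W = (G.T @ Xw) / len(Xw) - np.diag((1 - G ** 2).mean(axis=0)) @ W
    u, _, vt = np.linalg.svd(W)
    W = u @ vt                               # symmetric decorrelation

Y = Xw @ W.T                                 # recovered sources (up to sign/order)
match = np.abs(np.corrcoef(np.c_[S, Y], rowvar=False)[:2, 2:])
print(match.max(axis=1))                     # each true source is recovered nearly exactly
```

Replace the two synthetic sources with 64 or 256 scalp channels and the same principle yields the per-brain-area signals described in the article.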
This, along with a great deal of other work around the world, has opened the way for scientists to now link computers directly to commands from the brain. Although EEGs cannot pierce deep into the brain, the outermost layers — the brain's cortex — generally are where what we call higher reasoning occurs, making it ideal for operating machines.
Naturally, scientists are aiming to build devices to help people with disabilities who are unable to operate wheelchairs, computers and phones. But Makeig says brain interfaces have a much broader potential if used the other way — eavesdropping rather than taking commands. For instance, Makeig and Jung have done research into alertness monitoring for the military. He says soon we may be able to give simple headbands to air traffic controllers to alert them when they are nodding off.
Valuable for patient care
William Mobley, a UC San Diego neurologist who has worked on degenerative neurological disorders and Down syndrome, goes even further. He and Jung head up the Center for Advanced Neurological Engineering, which aspires to create a suit that could relay all kinds of information about a patient.
"We envision a time very soon in which a patient's vital signs, EEG, EKG and movements can be recorded 24/7 and sent wirelessly to a remote location for review by a physician," said Mobley. "The suit might well be deployed to allow neurologists a much more complete assessment of patients with a variety of disorders, in the process collecting many thousands of times as much data as is currently the case."
This is not science fiction. The most sophisticated EEG devices (which cover the head with a bulky cap) can parse out underlying brain signals from the admixture of data recorded from up to 256 places on the scalp. However, with today's gadgets you don't need that kind of precision. With just a dozen channels or so Jung and Makeig can easily detect something as simple as a drowsy air traffic controller.
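The article doesn't spell out how drowsiness is read from a handful of channels, but one well-known ingredient of EEG alertness monitoring is the rise in alpha-band (8-12 Hz) power as a person gets drowsy. A toy sketch, with illustrative numbers throughout (not the researchers' actual algorithm):

```python
import numpy as np

# Toy sketch of one ingredient of EEG drowsiness detection: the fraction of
# spectral power in the alpha band (8-12 Hz), which tends to rise with
# drowsiness. Band edges and amplitudes here are illustrative only.

FS = 256  # sampling rate, Hz

def alpha_ratio(signal, fs=FS):
    """Fraction of 1-30 Hz spectral power that falls in the 8-12 Hz alpha band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    total = spectrum[(freqs >= 1) & (freqs <= 30)].sum()
    alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
    return alpha / total

rng = np.random.default_rng(1)
t = np.arange(0, 4, 1 / FS)
noise = rng.standard_normal(len(t))
alert = noise + 0.3 * np.sin(2 * np.pi * 10 * t)   # weak 10 Hz alpha
drowsy = noise + 3.0 * np.sin(2 * np.pi * 10 * t)  # strong 10 Hz alpha burst

print(alpha_ratio(alert) < alpha_ratio(drowsy))  # True: alpha power rose
```

A real system would track this ratio over sliding windows and cue an alert when it stays elevated.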
Tuning in on emotions
With more channels, Makeig also can get a pretty good sense of emotion. He says that a simple EEG device could someday become another tool for psychiatrists to give them a clue into the inner world of their patients. To demonstrate the technology, Makeig and graduate student Tim Mullen last year put on an unusual quartet. Makeig was on the violin and two other researchers took the cello and clarinet while Mullen played, well, his brain. (See photo at the top.) He began before the concert, playing musical notes and carefully cultivating the emotions they inspired in his own mind.
"On the night of the performance, I can sit down and reimagine that state — the state that was evoked by a particular note," Mullen says. "And when I imagine that particular emotion my brain dynamics will be recreated again and the machine detects it and it plays that note that originally evoked that emotion in me."
The resulting call and response performance, like the brain dialing, is a stunning demonstration of the underlying potential of EEG-related brain interface. Can we expect a first chair EEG-ist next year at the Metropolitan Opera? No, probably not, but Makeig and Jung say that the important lesson is that scientists can now reliably track specific emotions as well as thoughts.
This, the researchers agree, is how BCI will actually integrate into our lives, as it still lags behind fingers for dialing numbers and surfing the Internet. By using the interface to listen in on the mind, scientists can make tools to reshape medicine, along with the clever toys and fodder for the occasional headline.
ABC News, KPBS, and UCSD-TV have all recently featured new brain-computer interface (BCI) technology developed by Dr. Tzyy-Ping Jung and associates Yu-Te Wang and Yijun Wang of the Institute for Neural Computation. This technology represents a unique and fast-advancing generation of mobile, wireless brain activity monitoring systems. An immediately promising application, whose feasibility was first demonstrated by UCSD researchers Jung and Makeig in the 1990s, is to monitor the alertness of workers in occupations that require around-the-clock vigilance, such as air traffic controllers, drivers and pilots, and nuclear power plant monitors.
Jung and collaborators Chin-Teng Lin, Jin-Chern Chiou, and associates at National Chiao Tung University in Hsinchu, Taiwan have developed a mobile, wireless, and wearable electroencephalographic (EEG) headband system that contains dry scalp sensors that monitor the wearer's brain waves via signals transmitted through a Bluetooth link that can be read by many cell phones and other mobile devices. The system can continuously monitor the wearer's level of alertness and cue appropriate feedback (for example, audible warning signals or other system alerts) to assist a drowsy worker in maintaining system performance.
Jung, a biomedical engineer, research scientist and Associate Director of the Swartz Center for Computational Neuroscience (SCCN) in the Institute for Neural Computation, UCSD, says that the technology is almost production ready. "We're trying to translate the technology from laboratory experiments to the real world, step by step."
The same dry electrode technology has also been used to detect brain activity in response to visual stimuli flickering at specific frequencies, enabling hands-free dialing of a cell phone. Using such a system, a severely handicapped person could summon emergency aid simply by focusing on the numbers of a keypad. This and similar "smart" prosthetics that respond to direct brain-signal commands may soon offer many new opportunities to disabled persons.
As former UCSD Vice Chancellor for Research Art Ellis stated, "Universities are finding that interdisciplinary research and international teamwork significantly increase our ability to translate the discoveries in our laboratories into results that benefit society."
The Swartz Center for Computational Neuroscience, directed by Scott Makeig, was founded in 2001 through a generous gift from founding donor Dr. Jerome Swartz of The Swartz Foundation (Old Field, New York). The center is currently also funded by grants from the Office of Naval Research, the Army Research Laboratory, the Army Research Office, DARPA, and the National Institutes of Health. Dr. Jung's research is also supported in part by a gift from Abraxis Bioscience Inc.
Media Contact: Paul K. Mueller, 858.534.8564
A new class of brain-computer interface technology could not only let you control devices and play games with your thoughts, but also help detect fatigue in air traffic controllers and other workers in high-stakes positions.
Researchers at the Swartz Center for Computational Neuroscience at the University of California, San Diego, have made it possible to place a cellphone call by just thinking about the number. They say the technology could also tell whether a person is actively thinking, or nodding off.
Tzyy-Ping Jung, a neuroscience researcher and associate director of the center, said the system uses brainwave sensors (electroencephalogram, or EEG, electrodes) attached to a headband to measure a person's brain activity. The brain signals are then transferred to a cellphone through a Bluetooth device connected to the headband.
Applications Could Provide Hands-Free Dialing, Help for People with Disabilities
In the lab, he said, test subjects sit in front of a screen displaying 10 digits, each flashing at a different rate. The number 1, for example, may flash nine times per second, while the number 2 flashes at a slightly higher frequency.
As participants view each number, the corresponding frequency is reflected in the visual cortex in their brains, he said. That activity is picked up by the sensors, relayed through the wireless Bluetooth device and then used to dial numbers on the cell phone.
Assuming all goes according to plan, if you place the headband on your head, sit at the screen, and then view the digits 1-2-0-2-4-5-6-1-4-1-4, your thoughts alone should lead you to the White House switchboard.
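The scheme Jung describes is known in the literature as steady-state visually evoked potential (SSVEP) decoding: each flicker rate leaves a spectral signature in the EEG, and the attended digit is the one whose frequency carries the most power. Below is a minimal Python sketch of that idea; the sampling rate, the digit-to-frequency table, and the simple FFT-peak classifier are illustrative assumptions, not the SCCN implementation.

```python
import numpy as np

FS = 256                                              # sampling rate in Hz (assumed)
DIGIT_FREQS = {d: 9.0 + 0.5 * d for d in range(10)}   # hypothetical flicker table

def classify_digit(eeg_window, fs=FS):
    """Pick the digit whose flicker frequency carries the most spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg_window - eeg_window.mean()))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    # score each candidate digit by the power in its frequency bin
    return max(DIGIT_FREQS,
               key=lambda d: spectrum[np.argmin(np.abs(freqs - DIGIT_FREQS[d]))])
```

With a two-second window, the FFT's 0.5 Hz resolution lands each candidate frequency on an exact bin, which is why the flicker rates in this sketch are spaced 0.5 Hz apart.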
Jung said that results vary from person to person, but many people can reach 90 or even 100 percent accuracy.
"Probably I was the worst subject. I think I reached 85 percent," he said.
For now, the technology is just in the developmental phase. But Jung, who has been studying neurological engineering since 1993, said, "We're trying to move from the lab to the real world, step by step."
In time, applications could potentially give consumers a hands-free way to use their cell phones or people with disabilities a new way to interact with the world. But, Jung said, more passive uses of the technology could already be used to detect fatigue or lapses in attention in people who work in fields where concentration is essential.
Brain-Computer Tech Could Alert Workers When Attention Drops
"In the past, all these brain-computer interfaces have targeted a very small fraction of the patient population," he said. "But [people in] the general, healthy population actually suffer, from time to time, from mental fatigue. …Attention deficit can lead to catastrophic consequences."
Those consequences have been especially visible in recent months, as air traffic controllers have been found sleeping on the job at airports across the country. This week, an FAA official resigned the day after reports of yet another drowsy air traffic controller.
Jung said the same brainwave sensors that enable thought-controlled dialing could be used for cognitive monitoring.
Air traffic controllers, truck drivers, members of the military and anyone else whose lapse in concentration could put lives at risk could strap on a headband (or helmet) and be alerted when their brain activity indicates a drop in attention or alertness. They might hear a warning signal, or get a tactile alert, Jung said.
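A common heuristic in the fatigue-monitoring literature, and one way such an alert could be wired up, is to watch the ratio of slow-wave (theta) to fast-wave (beta) EEG power, which rises as alertness drops. The sketch below is illustrative only; the bands and the threshold are assumptions, not Jung's published method.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def band_power(window, fs, lo, hi):
    """Total spectral power of `window` between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs < hi)].sum()

def drowsiness_alert(window, fs=FS, threshold=2.0):
    """Flag a window when theta (4-8 Hz) power dominates beta (13-30 Hz)
    power -- a standard drowsiness heuristic; threshold is illustrative."""
    theta = band_power(window, fs, 4.0, 8.0)
    beta = band_power(window, fs, 13.0, 30.0)
    return theta / (beta + 1e-12) > threshold
```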
Technology More Ready Than Consumers
But he said that while the technology is almost ready, people might not be ready to accept it.
"One of the difficulties is people don't want to be watched," he said. "It's sort of like Big Brother watching you all the time."
He also said that he and his team are continuing to refine their technology to tease apart various internal and external factors, like a person's medication or outside power lines, that can generate electronic "noise" and make it more difficult to discern important signals.
Still, given the positive implications, he said, major organizations are interested in the research. His university has contracts with the Army, Navy and DARPA to study how brain-computer interfaces could help soldiers, he said.
And Jung and his team are not the only ones interested in blending the worlds of computing and neuroscience.
NeuroSky, a San Jose, Calif.-based company, already sells a wireless EEG headset that it says can be used for education and gaming.
The MindWave headset measures brainwave impulses from a person's forehead and can be used to gauge student attention levels during lessons, monitor daily meditation and play games that depend on a user's emotional control.
Tansy Brook, the head of communications for the company, said applications for people who work in hazardous work environments, such as air traffic controllers or construction workers, could be realized in the next five years.
"There's a general awareness you want people to have in those situations, they need to be paying attention every single second," she said. "There is amazing potential."
Above: A student tests a new brain-wave cell phone app.
Credit: UCSD Photo
Listen to the audio of the interview...
In very simple terms, it works like this:
First, the user puts on a wireless headband or hat embedded with electrodes that read brain activity.
Next, the caller looks at a series of numbers that flicker at different rates on a computer screen. When focused upon, each number causes a slightly different brain wave pattern.
The cell phone decodes the brain waves associated with those numbers and places the call.
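The steps above can be sketched end to end in a few lines of Python. This is a toy reconstruction under stated assumptions: the flicker-frequency table is hypothetical, and a real system would decode from noisy multi-channel EEG rather than clean single-channel windows.

```python
import numpy as np

FS = 256                                                # sampling rate in Hz (assumed)
FREQ_OF_DIGIT = {d: 9.0 + 0.5 * d for d in range(10)}   # hypothetical flicker table

def dominant_frequency(window, fs=FS):
    """Return the frequency bin carrying the most power in the EEG window."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def decode_number(windows):
    """Turn a sequence of per-digit EEG windows into a phone number string."""
    digits = []
    for w in windows:
        f = dominant_frequency(w)
        # choose the digit whose flicker frequency is closest to the spectral peak
        digits.append(min(FREQ_OF_DIGIT, key=lambda d: abs(FREQ_OF_DIGIT[d] - f)))
    return "".join(str(d) for d in digits)
```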
Neuroscience researcher Tzyy-Ping Jung, Ph.D. and his colleagues at the Swartz Center for Computational Neuroscience at the University of California, San Diego developed the system.
"It can bypass conventional motor output path and provide a direct path ofcommunication from the brain to an external device," said Jung.
The cell-phone program is a type of brain-computer interface (BCI), part of a rapidly expanding scientific field in which researchers are finding ways to use thought patterns to command computers and mechanical devices such as artificial limbs.
The cell-phone technology could be beneficial to quadriplegics, or those with other severe physical disabilities.
Jung said that because the cell-phone-based BCI uses dry electrodes, miniature electronic circuits and wireless telemetry, it is easier and more comfortable to use than most BCI systems.
“In less than a minute you’re connected and you can do a lot, like experiments, or you can control things, or do video games with just your brain activity,” explained Jung.
In various trial groups, the cell-phone users were about 95 percent accurate in dialing a 10-digit phone number.
Jung said the cell-phone application could be on the market within the next few years.
1. Technology Review
2. Asian American
3. Huffington Post
The UCSD TV series on UCSD's 50th Anniversary year put together an episode on brain-computer interface research at SCCN and INC.
INC Co-Director and Salk Institute professor Terrence J. Sejnowski, Ph.D., has been elected to the National Academy of Engineering. This places him in a remarkably elite group of only ten living scientists who have been elected to the National Academy of Sciences, the Institute of Medicine, and the National Academy of Engineering. UCSD and INC congratulate Dr. Sejnowski on this prestigious appointment and exceptional achievement.
Lost in thoughts: Neural markers of low alertness during mind wandering.
The February issue of NeuroImage: A Journal of Brain Function will feature an article by INC researcher Arnaud Delorme and his student Claire Braboszcz.
Gert Cauwenberghs takes on leadership roles in biomedical circuits and systems for IEEE
Gert Cauwenberghs, Co-Director of INC, takes on multiple leadership roles for IEEE in 2011. Gert is the newly named Editor-in-Chief of IEEE Transactions on Biomedical Circuits and Systems. In addition to this primary publication role, he will chair two upcoming conferences: as General Chair of the IEEE Biomedical Circuits and Systems Conference in San Diego in 2011, and as Technical Chair of the IEEE Engineering in Medicine and Biology Conference in 2012.
Gert Cauwenberghs and Te-Won Lee selected as IEEE Fellows for 2011
INC Co-Director Gert Cauwenberghs and affiliated researcher Te-Won Lee have been selected by the Institute of Electrical and Electronics Engineers (IEEE) as fellows for 2011. Te-Won has perfected independent component analysis algorithms and systems for auditory scene analysis and acoustic source separation, for hands-free telecommunication in cars and mobile environments.
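Independent component analysis, the technique behind Te-Won Lee's source-separation work, recovers statistically independent sources from their observed mixtures. The NumPy sketch below implements the standard symmetric FastICA algorithm with a tanh nonlinearity; it is a textbook illustration, not Lee's own algorithms or code.

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) on data X of shape
    (n_components, n_samples). Returns the estimated source signals."""
    n, m = X.shape
    # center and whiten so that an orthogonal unmixing matrix suffices
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / m)
    X = E @ np.diag(d ** -0.5) @ E.T @ X
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        g = np.tanh(W @ X)
        g_prime = 1.0 - g ** 2
        # fixed-point update: E[x g(w^T x)] - E[g'(w^T x)] w, for all rows at once
        W_new = (g @ X.T) / m - np.diag(g_prime.mean(axis=1)) @ W
        # symmetric decorrelation: W <- (W W^T)^(-1/2) W, via SVD
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W @ X
```

Recovered sources come back in arbitrary order and sign, so applications typically match them to references by correlation, as the usage below does.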
Gert's development of biosensors with student Mike Chi is being recognized by IEEE and has been featured in MIT Technology Review. The biosensor allows long-term monitoring with greater ease of use and increased comfort. The low-cost sensor can be mass produced and used outside the hospital environment, greatly expanding the potential for detecting conditions that may not manifest during normal hospital observation. The capacitive sensor is particularly distinctive for making existing technology cost-effective through the use of widely available components and novel circuitry.
NSF funds "An International Social Network for Early Childhood Education"
INC researcher Javier Movellan has been awarded a grant of $749,998 by the National Science Foundation to support development of RubiNet, a social network for early childhood education. The project will develop resources for early childhood education at national and international levels, bringing children, teachers, parents, and researchers together. A unique feature of the project is the use of low-cost, sociable robots as network interfaces. In addition to supporting education and data gathering, the robots will allow children to exchange objects across international boundaries, using the robots as intermediaries. This significant difference from other computer interfaces will also allow children in the United States to look around a classroom in Japan, find their friends, and initiate a hug using the robot's child-safe arms.
Dr. Sejnowski discusses TDLC with PNAS.
i-RICE brings Taiwanese scholars to INC
In collaboration with the Institute of Engineering in Medicine, INC will host students and postdocs to work with INC and IEM researchers. Tzyy-Ping Jung, of the Center for Advanced Neural Engineering (CANE), participated in the development of the proposal recently approved by Taiwan's National Science Council. The students and researchers visiting from Taiwan will be participating in an International Center in Advanced Bioengineering Research.
Gallery: Let Your Children Play With Robots
By Tim Carmody October 26, 2010 | Categories: R&D and Inventions
Salk neuropsychologist Inna Fishman explains some of her current work to Psychology Today.
The Brain's Language Processing in Williams Syndrome and Autism
See article here:
Howard Poizner (PI, UCSD), and co-PI's Gary Lynch (UC Irvine) and Terry Sejnowski (Salk and UCSD), together with team leaders Hal Pashler, Sergei Gepshtein, Deborah Harrington, Tom Liu, Eric Halgren, and Ralph Greenspan were recently awarded a $4.5M ONR MURI grant, with a $3M option period, to study the brain bases of unsupervised learning and training. (October 1, 2009)
The study, “How Unsupervised Learning Impacts Training: From Brain to Behavior”, involves the following:
Principal Investigator: Howard Poizner
Co-PI’s: Gary Lynch (UC Irvine) and Terry Sejnowski (Salk and UCSD)
Agency: ONR (Office of Naval Research)
Funding: $4.5M (3yr base period) [started Oct 1, 2009]; $3.0M (2yr option period); $7.5M (5 year total period)
The goal of this multidisciplinary grant is to examine the neurobiological, genetic, brain dynamic, and neural circuit correlates of unsupervised learning and training. The proposed studies utilize the new capabilities for creating 3D immersive environments and simultaneous EEG-fMRI recordings recently established through ONR-DURIP grant # N000140811114 (H. Poizner, PI).
The cerebral cortex is able to create rich representations of the world that are much more than just reinforcement learning and reflexes. Learning is often self-supervised, without feedback, a type of learning referred to as unsupervised learning. Such learning and memory is (i) commonplace in naturalistic settings, (ii) critical to humans, (iii) encoded by LTP-type mechanisms, and (iv) of direct relevance to computational theories of learning. Using unsupervised learning, an individual builds up internal hierarchical structures and categorizations that model the statistical properties of the environment. These internal representations can be used flexibly and powerfully to acquire new information, thereby creating situational awareness and readiness to act in novel as well as familiar environments. Yet unsupervised learning and its neurobiological mechanisms are poorly understood. The proposed projects will provide new understanding of the neurobiological, genetic, brain dynamic, and neural circuit correlates of this potentially powerful form of learning and training. Seven tasks attack different aspects of the problem, making use of parallel paradigms in rodents, flies, and humans:
Task 1 maps memory during spatial learning in rats, seeking to uncover the neural engram of memory.
Task 2 uses computational modeling to illuminate cortical processes of unsupervised learning in humans.
Task 3 conducts studies of training, contrasting the rate and efficiency of both unsupervised and supervised learning.
Task 4 explores the brain dynamics of unsupervised learning, using motion capture and virtual environments while recording cortical EEG.
Tasks 5 and 6 investigate neuroimaging and genetic correlates of unsupervised learning, bringing to bear the new methodology of simultaneous EEG-fMRI recording and using intracranial recordings.
Task 7 exploits the genetic, cellular, and behavioral homologies of the fruit fly with humans to study the dopaminergic and genetic regulation of inter-regional coherence associated with learning.
These studies should provide insight into the design of the best training environments for our modern military, and increase our understanding of the underlying neurobiological, genetic, brain dynamic, and neural circuit correlates of those environments. Moreover, the studies will open the way to asking whether memory-enhancing drugs such as ampakines, or particular learning regimens (e.g., extensive experience with diverse environments, short vs. long sessions), change the number and/or distribution of learning-related synaptic modifications and/or the nature of the neural networks and brain dynamics that underlie unsupervised learning. This issue is fundamental to the development of mechanism-based strategies for improving learning and performance in complex environments. Finally, the genetic studies will pave the way for development of individualized training techniques that optimize learning environments.
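The kind of unsupervised learning the abstract describes, building internal representations that model the statistical structure of the environment with no feedback signal, has a classic minimal illustration in Oja's Hebbian learning rule, which extracts the principal component of its input stream. The sketch below is a textbook example under stated assumptions, not one of the grant's actual models.

```python
import numpy as np

def oja_learn(data, lr=0.001, epochs=100, seed=0):
    """Learn the leading principal component of `data` (n_samples, n_dim)
    with Oja's rule: dw = lr * y * (x - y * w). No labels, no feedback --
    the weight vector self-organizes to the direction of maximum variance."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(data.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in data:
            y = w @ x                 # neuron's response to this input
            w += lr * y * (x - y * w)  # Hebbian growth with implicit normalization
    return w / np.linalg.norm(w)
```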
Powering CANE will be the synergism unleashed by bringing together scientists, engineers, and clinicians in the UCSD Health Sciences, Jacobs School of Engineering, Division of Biological Sciences, and other Units in UCSD, as well as the Salk Institute, other neighboring research institutes, and industrial partners. These scientists already have a strong track record of interdisciplinary collaboration in neuroscience, engineering, computation, and clinical translation, and CANE will encourage further research and development collaborations.
The Center will develop and utilize a wide spectrum of innovative methods in brain and body imaging and will apply powerful mathematical and data mining approaches to the resultant information -- a combination to pave the way for translating advances in neuroscience into enhancements in health care environments, whether clinical, workplace or home-based.
Of importance as well will be CANE’s training of next-generation scientists, engineers, and physicians. Early-stage researchers will receive assistance entering the research environment, and affiliated laboratories will get help recruiting researchers.
More CANE info ...
A research team of neuroscientists, cognitive scientists and engineers at the University of California, San Diego will play a leading role in a five-year, $25-million Army Research Laboratory (ARL) project to better understand human-systems interactions.
See article here: http://ucsdnews.ucsd.edu/newsrel/awards/07-07Human-Systems.asp
"Students, Meet Your New Teacher, Mr. Robot"
See article here:
Yu Mike Chi, graduate student in the Cauwenberghs laboratory in the Department of Bioengineering and the Institute for Neural Computation, led a team of eight students in the UC San Diego Jacobs School of Engineering, UC San Diego Rady School of Business, and the Salk Institute, to win the top prize in the UC San Diego $80K Entrepreneurship Challenge.
See article here:
This retreat is sponsored by the NIH Cognitive Neuroscience Training Program of the Institute for Neural Computation and follows in the tradition of the retreats sponsored by the McDonnell-Pew Center for Cognitive Neuroscience.
See details here:
La Jolla, CA - Salk Institute professor Terrence J. Sejnowski, Ph.D., whose work on neural networks helped spark the neural networks revolution in computing in the 1980s, has been elected a member of the National Academy of Sciences. The Academy made the announcement today during its 147th annual meeting in Washington, DC. Election to the Academy recognizes distinguished and continuing achievements in original research, and is considered one of the highest honors accorded a U.S. scientist.
See details here:
Promoting multi-level approaches to the neural bases of cognition.
See details here:
See Videos here:
Hi Lite Reel:
A brief publication listing news and current INC events.
Link to pdf file here: