Optical fibres light the way for brain-like computing

via www.dpaonthenet.net

Computers that function like the human brain could soon become a reality thanks to new research using optical fibres made of speciality glass.

The research, published in Advanced Optical Materials, has the potential to allow faster and smarter optical computers capable of learning and evolving.

Researchers from the Optoelectronics Research Centre (ORC) at the University of Southampton, UK, and Centre for Disruptive Photonic Technologies (CDPT) at the Nanyang Technological University (NTU), Singapore, have demonstrated how neural networks and synapses in the brain can be reproduced, with optical pulses as information carriers, using special fibres made from glasses that are sensitive to light, known as chalcogenides.

The project, funded under Singapore’s Agency for Science, Technology and Research (A*STAR) Advanced Optics in Engineering programme, was conducted within The Photonics Institute (TPI), a recently established dual institute between NTU and the ORC.

Co-author Professor Dan Hewak from the ORC, says: “Since the dawn of the computer age, scientists have sought ways to mimic the behaviour of the human brain, replacing neurons and our nervous system with electronic switches and memory. Now instead of electrons, light and optical fibres also show promise in achieving a brain-like computer. The cognitive functionality of central neurons underlies the adaptable nature and information processing capability of our brains.”

In the last decade, neuromorphic computing research has advanced software and electronic hardware that mimic brain functions and signal protocols, aimed at improving the efficiency and adaptability of conventional computers.

However, compared to our biological systems, today’s computers are more than a million times less energy efficient. Simulating five seconds of brain activity on a supercomputer takes 500 seconds and 1.4 MW of power; the brain itself runs on roughly 20 watts.
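
A quick back-of-envelope check makes the scale of that gap concrete. The sketch below uses the figures quoted above plus the standard 20-watt estimate for the brain; it is an illustration of the arithmetic, nothing more:

```python
# Back-of-envelope check of the efficiency gap described above.
# 1.4 MW for 500 s of compute comes from the article; the ~20 W
# figure for the brain is a standard estimate, not from the text.

sim_energy = 1.4e6 * 500        # joules used by the supercomputer
brain_energy = 20.0 * 5         # joules used by the brain in 5 s of activity

print(f"supercomputer: {sim_energy:.1e} J")   # 7.0e+08 J
print(f"brain:         {brain_energy:.1e} J") # 1.0e+02 J
print(f"ratio:         {sim_energy / brain_energy:.1e}")  # 7.0e+06
```

The two energy totals differ by a factor of about seven million, which is where “more than a million times” comes from.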

Using conventional fibre-drawing techniques, microfibres can be produced from chalcogenides (glasses based on sulphur) that possess a variety of broadband photoinduced effects, which allow the fibres to be switched on and off. This optical switching, or light switching light, can be exploited for a variety of next-generation computing applications capable of processing vast amounts of data in a much more energy-efficient manner.
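
The paper itself gives the device physics; as a purely illustrative stand-in, the toy model below captures the flavour of “light switching light”: pump pulses darken a photosensitive fibre, and that accumulated darkening gates a probe beam. Every constant here is invented for the sketch, not taken from the study.

```python
import numpy as np

# Toy model of "light switching light" in a photosensitive fibre:
# pump pulses gradually darken the glass (photodarkening), which in
# turn attenuates a probe beam. All constants are illustrative.

def probe_transmission(pump_pulses, darkening_per_pulse=0.15,
                       recovery=0.02, steps_between=10):
    """Return probe transmission over time as pump pulses arrive."""
    darkening = 0.0
    trace = []
    for pulse in pump_pulses:
        darkening += darkening_per_pulse * pulse       # pump writes the state
        for _ in range(steps_between):                 # slow relaxation
            darkening *= (1.0 - recovery)
            trace.append(np.exp(-darkening))           # Beer-Lambert-style gate
    return np.array(trace)

pulses = [1, 1, 1, 0, 0, 0, 0, 0]          # three pump pulses, then silence
t = probe_transmission(pulses)
print(f"transmission after pumping: {t[25]:.2f}, after recovery: {t[-1]:.2f}")
```

Because the darkening persists and relaxes slowly, the fibre behaves like a memory element as well as a switch, which is what makes it interesting as a synapse-like building block.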

Read more: Optical fibres light the way for brain-like computing


IBM Wants to Invent the Chips of the Future, Not Make Them

Carbon nanotubes (Photo credit: Wikipedia)

IBM plans to spend $3 billion over the next five years on research and development for computer chips, both to stretch the limits of conventional semiconductors and to hasten the commercialization of exotic new designs.

The chip program, announced Wednesday, will be among the largest research initiatives at IBM, which spends about $6 billion a year on research and development. “We’re investing to really push the frontiers” of chip technology, said John E. Kelly, senior vice president and director of research at IBM.

The effort, Mr. Kelly said, will have two main goals. The first will be to wring further improvements from current silicon chip technology, by shrinking the tiny circuits from today’s 22 nanometers down to 7 nanometers, a few dozen atoms wide. The second goal is to accelerate progress on novel and promising, if unproved, approaches — designs that employ quantum physics, carbon nanotubes and chips inspired by the brain, called neuromorphic chips.

To step up its chip research, Mr. Kelly said IBM would be hiring more scientists and investing in industry partnerships and academic collaborations.

Read more . . .


Computing with Silicon Neurons: Artificial Nerve Cells Help Classify Data

The neuromorphic chip containing silicon neurons, which the researchers used for their data-classifying network. Copyright: Kirchhoff Institute for Physics, Heidelberg University

Inspired by nature, scientists from Berlin and Heidelberg use artificial nerve cells to classify different types of data.

A bakery assistant who takes the bread from the shelf only to hand it to his boss, who then hands it over to the customer? Rather unlikely. Instead, both work at the same time to sell the baked goods.

Similarly, computer programs are more efficient if they process data in parallel rather than calculating it one item after another. However, most programs in use today still work in a serial manner.

Scientists from the Freie Universität Berlin, the Bernstein Center Berlin and Heidelberg University have now refined a new technology that is based on parallel data processing. In so-called neuromorphic computing, neurons made of silicon take over the computational work on special computer chips. The neurons are linked together in a similar fashion to the nerve cells in our brain.

If the assembly is fed with data, all silicon neurons work in parallel to solve the problem. The precise nature of their connections determines how the network processes the data. Once properly linked, the neuromorphic network operates almost by itself. The researchers have now designed a network — a neuromorphic “program” — for this chip that solves a fundamental computing problem: it can classify data with different features, recognizing handwritten numbers, for example, or distinguishing certain plant species based on flowering characteristics.
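
In conventional code, the classification task itself is simple to state. The sketch below is a rate-based caricature of the principle, with one output unit per class and a winner-take-all readout, trained on made-up “flowering characteristics”; it is not the published spiking, insect-inspired architecture:

```python
import numpy as np

# Rate-based caricature of the classification principle: one output
# unit per class and a winner-take-all readout, trained with a simple
# delta rule. The data (two invented species) and the learning rule
# are illustrative only; the published network uses spiking silicon
# neurons, not this abstraction.

rng = np.random.default_rng(0)
species_a = rng.normal([1.4, 0.2], 0.15, size=(50, 2))   # petal length, width
species_b = rng.normal([4.5, 1.5], 0.40, size=(50, 2))
X = np.vstack([species_a, species_b])
y = np.array([0] * 50 + [1] * 50)

W = np.zeros((2, 2))   # one weight vector per output unit
b = np.zeros(2)
for _ in range(100):                       # a few passes over the data
    for x, label in zip(X, y):
        winner = np.argmax(W @ x + b)      # lateral inhibition picks one unit
        if winner != label:                # mistake: nudge both units
            W[label] += 0.01 * x
            b[label] += 0.01
            W[winner] -= 0.01 * x
            b[winner] -= 0.01

accuracy = np.mean([np.argmax(W @ x + b) == label for x, label in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```

On the hardware the same division of labour happens physically: every silicon neuron evaluates its input at once, and inhibition between the output populations picks the winner.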

“The design of the network architecture has been inspired by the odor-processing nervous system of insects,” explains Michael Schmuker, lead author of the study. “This system is optimized by nature for a highly parallel processing of the complex chemical world.”

Together with work group leader Martin Nawrot and Thomas Pfeil, Schmuker provided the proof of principle that a neuromorphic chip can solve such a complex task. For their study, the researchers used a chip with silicon neurons, which was developed at the Kirchhoff Institute for Physics of Heidelberg University.

Computer programs that can classify data are employed in various technical devices, such as smartphones. The neuromorphic network chip could also be applied in supercomputers that are built on the model of the human brain to solve very complex tasks.

Read more . . .


Neuroelectronics Make Smarter Computer Chips

Computer chips inspired by human neurons can do more with less power

Kwabena Boahen got his first computer in 1982, when he was a teenager living in Accra. “It was a really cool device,” he recalls. He just had to connect up a cassette player for storage and a television set for a monitor, and he could start writing programs.

But Boahen wasn’t so impressed when he found out how the guts of his computer worked. “I learned how the central processing unit is constantly shuffling data back and forth. And I thought to myself, ‘Man! It really has to work like crazy!’” He instinctively felt that computers needed a little more ‘Africa’ in their design, “something more distributed, more fluid and less rigid”.

Today, as a bioengineer at Stanford University in California, Boahen is among a small band of researchers trying to create this kind of computing by reverse-engineering the brain.

The brain is remarkably energy efficient and can carry out computations that challenge the world’s largest supercomputers, even though it relies on decidedly imperfect components: neurons that are a slow, variable, organic mess. Comprehending language, conducting abstract reasoning, controlling movement — the brain does all this and more in a package that is smaller than a shoebox, consumes less power than a household light bulb, and contains nothing remotely like a central processor.

To achieve similar feats in silicon, researchers are building systems of non-digital chips that function as much as possible like networks of real neurons. Just a few years ago, Boahen completed a device called Neurogrid that emulates a million neurons — about as many as there are in a honeybee’s brain. And now, after a quarter-century of development, applications for ‘neuromorphic technology’ are finally in sight. The technique holds promise for anything that needs to be small and run on low power, from smartphones and robots to artificial eyes and ears. That prospect has attracted many investigators to the field during the past five years, along with hundreds of millions of dollars in research funding from agencies in both the United States and Europe.

Neuromorphic devices are also providing neuroscientists with a powerful research tool, says Giacomo Indiveri at the Institute of Neuroinformatics (INI) in Zurich, Switzerland. By seeing which models of neural function do or do not work as expected in real physical systems, he says, “you get insight into why the brain is built the way it is”.

And, says Boahen, the neuromorphic approach should help to circumvent a looming limitation to Moore’s law — the longstanding trend of computer-chip manufacturers managing to double the number of transistors they can fit into a given space every two years or so. This relentless shrinkage will soon lead to the creation of silicon circuits so small and tightly packed that they no longer generate clean signals: electrons will leak through the components, making them as messy as neurons. Some researchers are aiming to solve this problem with software fixes, for example by using statistical error-correction techniques similar to those that help the Internet to run smoothly. But ultimately, argues Boahen, the most effective solution is the same one the brain arrived at millions of years ago.

“My goal is a new computing paradigm,” Boahen says, “something that will compute even when the components are too small to be reliable.”

Silicon cells

The neuromorphic idea goes back to the 1980s and Carver Mead, a world-renowned pioneer in microchip design at the California Institute of Technology in Pasadena. He coined the term and was one of the first to emphasize the brain’s huge energy-efficiency advantage. “That’s been the fascination for me,” he says, “how in the heck can the brain do what it does?”

Mead’s strategy for answering that question was to mimic the brain’s low-power processing with ‘sub-threshold’ silicon: circuitry that operates at voltages too small to flip a standard computer bit from a 0 to a 1. At those voltages, there is still a tiny, irregular trickle of electrons running through the transistors — a spontaneous ebb and flow of current that is remarkably similar in size and variability to that carried by ions flowing through a channel in a neuron. With the addition of microscopic capacitors, resistors and other components to control these currents, Mead reasoned, it should be possible to make tiny circuits that exhibit the same electrical behavior as real neurons. They could be linked up in decentralized networks that function much like real neural circuits in the brain, with communication lines running between components rather than through a central processor.
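
The physics Mead relied on is captured by the textbook subthreshold current law, in which drain current grows exponentially with gate voltage. A small numerical illustration, using typical textbook constants rather than values from Mead’s chips:

```python
import numpy as np

# The subthreshold transistor current grows exponentially with gate
# voltage, much as ionic conductance in a neuron's membrane grows
# exponentially with voltage. Constants are typical textbook values.

I0 = 1e-15           # leakage current scale (A)
n, VT = 1.5, 0.0258  # slope factor; thermal voltage at room temperature (V)

for vgs in (0.1, 0.2, 0.3):
    i = I0 * np.exp(vgs / (n * VT))  # classic subthreshold I-V relation
    print(f"Vgs = {vgs:.1f} V  ->  I = {i:.2e} A")
```

At these voltages the currents are picoamps or less, which is why sub-threshold circuits sip power the way neurons do.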

By the 1990s, Mead and his colleagues had shown it was possible to build a realistic silicon neuron (see ‘Biological inspiration’). That device could accept outside electrical input through junctions that performed the role of synapses, the tiny structures through which nerve impulses jump from one neuron to the next. It allowed the incoming signals to build up voltage in the circuit’s interior, much as they do in real neurons. And if the accumulating voltage passed a certain threshold, the silicon neuron ‘fired’, producing a series of voltage spikes that traveled along a wire playing the part of an axon, the neuron’s communication cable. Although the spikes were ‘digital’ in the sense that they were either on or off, the body of the silicon neuron operated — like real neurons — in a non-digital way, meaning that the voltages and currents weren’t restricted to a few discrete values as they are in conventional chips.

That behavior mimics one key to the brain’s low-power usage: just like their biological counterparts, the silicon neurons simply integrated inputs, using very little energy, until they fired. By contrast, a conventional computer needs a constant flow of energy to run an internal clock, whether or not the chips are computing anything.
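
The integrate-until-threshold behavior just described is the classic leaky integrate-and-fire model, and it is easy to sketch in software. The parameters below are illustrative, not measurements of any silicon neuron:

```python
import numpy as np

# Leaky integrate-and-fire model of the silicon neuron described above:
# input current charges a capacitor-like state variable, which leaks
# slowly and emits an all-or-nothing spike when it crosses threshold.

def lif(input_current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    v, spikes = v_reset, []
    for i in input_current:
        v += dt * (-v / tau + i)     # integrate input, leak toward rest
        if v >= v_th:                # threshold crossing -> digital spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(1)
drive = rng.uniform(40, 90, size=1000)     # noisy input current, 1 s at 1 ms steps
print(f"spikes in 1 s of simulated time: {sum(lif(drive))}")
```

Note that between spikes the state variable just drifts; nothing in the model needs a clock, which is the energy story in miniature.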

Mead’s group also demonstrated decentralized neural circuits — most notably in a silicon version of the eye’s retina. That device captured light using a 50-by-50 grid of detectors. When their activity was displayed on a computer screen, these silicon cells showed much the same response as their real counterparts to light, shadow and motion. Like the brain, this device saves energy by sending only the data that matters: most of the cells in the retina don’t fire until the light level changes. This has the effect of highlighting the edges of moving objects, while minimizing the amount of data that has to be transmitted and processed.
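
The retina’s report-only-changes principle is equally easy to sketch. In the toy below, a pixel emits an event only when its log intensity shifts by more than a threshold, so a static scene is nearly silent; the 50-by-50 grid matches the article, and everything else is invented:

```python
import numpy as np

# Sketch of the silicon retina's output principle: a pixel reports an
# event only when its light level changes by more than a threshold, so
# static scenes generate almost no data and moving edges dominate.

def events(prev_frame, frame, threshold=0.2):
    """Return (row, col, polarity) for pixels whose log intensity moved."""
    diff = np.log(frame + 1e-6) - np.log(prev_frame + 1e-6)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return [(r, c, int(np.sign(diff[r, c]))) for r, c in zip(rows, cols)]

static = np.full((50, 50), 0.5)
moved = static.copy()
moved[20:30, 10:12] = 0.9                 # a bright edge slides into view

print(f"events from a static scene: {len(events(static, static))}")   # 0
print(f"events from the moving edge: {len(events(static, moved))}")   # 20
```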

Coding challenge

In those early days, researchers had their hands full mastering single-chip devices such as the silicon retina, says Boahen, who joined Mead’s lab in 1990. But by the end of the 1990s, he says, “we wanted to build a brain, and for that we needed large-scale communication”. That was a huge challenge: the standard coding algorithms for chip-to-chip communication had been devised for precisely coordinated digital signals, and wouldn’t work for the more-random spikes created by neuromorphic systems. Only in the 2000s did Boahen and others devise circuitry and algorithms that would work in this messier system, opening the way for a flurry of development in large-scale neuromorphic systems.
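
The scheme that eventually emerged in the field is usually called address-event representation: a spike is transmitted as the address of the neuron that fired, at the moment it fires, so precise timing rides implicitly on arrival time rather than on a global clock. A software caricature of the idea, with a priority queue standing in for the shared asynchronous link:

```python
import heapq

# Address-event sketch: instead of clocked words on a fixed bus, each
# spike is sent as the *address* of the neuron that fired, and spike
# timing is carried implicitly by when the event arrives.

link = []  # (time, neuron_address) events, merged from many chips

def emit(t, address):
    """A neuron at `address` fires at time t: put its address on the link."""
    heapq.heappush(link, (t, address))

# A few neurons firing at irregular, uncoordinated times.
for t, addr in [(0.013, 5), (0.002, 41), (0.002, 7), (0.020, 5)]:
    emit(t, addr)

# The receiver simply replays addresses in arrival order.
while link:
    t, addr = heapq.heappop(link)
    print(f"t = {t:.3f} s: spike from neuron {addr}")
```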

Among the first applications were large-scale emulators to give neuroscientists an easy way to test models of brain function. In September 2006, for example, Boahen launched the Neurogrid project: an effort to emulate a million neurons. That is only a tiny chunk of the 86 billion neurons in the human brain, but enough to model several of the densely interconnected columns of neurons thought to form the computational units of the human cortex. Neuroscientists can program Neurogrid to emulate almost any model of the cortex, says Boahen. They can then watch their model run at the same speed as the brain — hundreds to thousands of times faster than a conventional digital simulation. Graduate students and researchers have used it to test theoretical models of neural function for processes such as working memory, decision-making and visual attention.

“In terms of real efficiency, in terms of fidelity to the brain’s neuronal networks, Kwabena’s Neurogrid is well in advance of other large-scale neuromorphic systems,” says Rodney Douglas, co-founder of the INI and co-developer of the silicon neuron.

Read more . . .

 

 

Go deeper with Bing News on:
Neuroelectronics
  • NTT Research to Expand its Silicon Valley Footprint in 2020
    on December 20, 2019 at 5:33 am

    The project leader in Germany is Dr. Bernhard Wolfrum, Professor of Neuroelectronics at TUM in the Department of Electrical and Computer Engineering and the Munich School of BioEngineering (MSB). The ...

  • Researchers unravel how the brain processes visual information
    on December 2, 2019 at 10:27 am

    A team of scientists led by Karl Farrow at NeuroElectronics Research Flanders (NERF, empowered by imec, KU Leuven and VIB) is unraveling how our brain processes visual information. They identified ...

  • Mapping the relay networks of our brain
    on December 2, 2019 at 5:30 am

    A team of scientists led by Karl Farrow at NeuroElectronics Research Flanders (NERF, empowered by imec, KU Leuven and VIB) is unraveling how our brain processes visual information. They identified ...

  • What makes memories stronger?
    on April 29, 2019 at 8:40 am

    A team of scientists at NeuroElectronics Research Flanders (NERF- empowered by imec, KU Leuven and VIB) found that highly demanding and rewarding experiences result in stronger memories. By studying ...

  • Specific type of neuronal feedback plays key role in early recovery of spinal cord injury
    on April 8, 2019 at 11:40 pm

    Aya Takeoka from NERF (NeuroElectronics Research Flanders), an interdisciplinary research center empowered by VIB, KU Leuven and imec. Her lab studies the mechanisms of motor learning and control, ...


Neuromorphic computing: The machine of a new soul

Computers will help people to understand brains better. And understanding brains will help people to build better computers

Analogies change. Once, it was fashionable to describe the brain as being like the hydraulic systems employed to create pleasing fountains for 17th-century aristocrats’ gardens. As technology moved on, first the telegraph network and then the telephone exchange became the metaphor of choice. Now it is the turn of the computer. But though the brain-as-computer is, indeed, only a metaphor, one group of scientists would like to stand that metaphor on its head. Instead of thinking of brains as being like computers, they wish to make computers more like brains. This way, they believe, humanity will end up not only with a better understanding of how the brain works, but also with better, smarter computers.

These visionaries describe themselves as neuromorphic engineers. Their goal, according to Karlheinz Meier, a physicist at the University of Heidelberg who is one of their leaders, is to design a computer that has some—and preferably all—of three characteristics that brains have and computers do not. These are: low power consumption (human brains use about 20 watts, whereas the supercomputers currently used to try to simulate them need megawatts); fault tolerance (losing just one transistor can wreck a microprocessor, but brains lose neurons all the time); and a lack of need to be programmed (brains learn and change spontaneously as they interact with the world, instead of following the fixed paths and branches of a predetermined algorithm).

To achieve these goals, however, neuromorphic engineers will have to make the computer-brain analogy real. And since no one knows how brains actually work, they may have to solve that problem for themselves, as well. This means filling in the gaps in neuroscientists’ understanding of the organ. In particular, it means building artificial brain cells and connecting them up in various ways, to try to mimic what happens naturally in the brain.

Analogous analogues

The yawning gap in neuroscientists’ understanding of their topic is in the intermediate scale of the brain’s anatomy. Science has a passable knowledge of how individual nerve cells, known as neurons, work. It also knows which visible lobes and ganglia of the brain do what. But how the neurons are organised in these lobes and ganglia remains obscure. Yet this is the level of organisation that does the actual thinking—and is, presumably, the seat of consciousness. That is why mapping and understanding it is to be one of the main objectives of America’s BRAIN initiative, announced with great fanfare by Barack Obama in April. It may be, though, that the only way to understand what the map shows is to model it on computers. It may even be that the models will come first, and thus guide the mappers. Neuromorphic engineering might, in other words, discover the fundamental principles of thinking before neuroscience does.

Two of the most advanced neuromorphic programmes are being conducted under the auspices of the Human Brain Project (HBP), an ambitious attempt by a confederation of European scientific institutions to build a simulacrum of the brain by 2023. The computers under development in these programmes use fundamentally different approaches. One, called SpiNNaker, is being built by Steven Furber of the University of Manchester. SpiNNaker is a digital computer—ie, the sort familiar in the everyday world, which processes information as a series of ones and zeros represented by the presence or absence of a voltage. It thus has at its core a network of bespoke microprocessors.
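
SpiNNaker’s published design routes spikes as small packets carrying only the firing neuron’s key, which each router matches against (key, mask) table entries to fan the spike out to the right links and cores. The toy below mimics just that matching step; the table contents are invented for illustration:

```python
# Toy version of SpiNNaker-style multicast routing: a spike packet
# carries the ID of the neuron that fired, and the router matches it
# against (key, mask) entries to choose destinations. Table contents
# here are invented, not a real machine's configuration.

routing_table = [
    # (key, mask, destinations): packet matches if packet_key & mask == key
    (0b1000, 0b1100, ["east link"]),
    (0b0100, 0b1100, ["north link", "core 3"]),
]

def route(packet_key):
    for key, mask, destinations in routing_table:
        if packet_key & mask == key:
            return destinations
    return ["drop"]  # default for keys no entry claims

for spike in (0b1001, 0b0110, 0b0011):
    print(f"spike {spike:04b} -> {route(spike)}")
```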

Read more . . .

 
