
Japanese supercomputer takes big byte out of the brain

Researchers in Japan have used the powerful K computer, one of the world’s fastest supercomputers, to simulate the complex neural structure of our brain.

Using NEST, a popular suite of neuron-simulating software, the K computer pulled together the power of 82,944 processors to create a network of 1.73 billion simulated nerve cells connected by 10.4 trillion synapses - approximating about 1% of the raw processing power of a human brain.
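
To get a feel for those figures, here is a quick back-of-the-envelope calculation - it uses only the numbers quoted above, so treat it as a rough sketch rather than anything from the research itself:

```python
# Rough arithmetic from the figures quoted in this article.
neurons = 1.73e9        # simulated nerve cells
synapses = 10.4e12      # simulated connections
processors = 82_944     # K computer processors used

print(f"synapses per neuron:   {synapses / neurons:,.0f}")    # about 6,000
print(f"neurons per processor: {neurons / processors:,.0f}")  # about 21,000
```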

Just how much of a leap forward is this technological feat in progressing our understanding of our brilliant biological brains?

Flicking the switches

Our brains are complicated things. At the heart of our current understanding of how they work is the idea that billions of specialised nerve cells, called neurons, connect together and pass signals to one another, giving rise to the activities of thought, sensing and action.

At one level, a neuron can be considered a fairly simple biological switch. It sums the signals coming in and, if the combined input is strong enough, fires a signal of its own to the other neurons it’s connected to. This sort of processing can be implemented on electronic hardware too. That’s where the K computer comes in.
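
To make the switch metaphor concrete, here is a minimal sketch of a leaky integrate-and-fire neuron - the classic abstraction behind models of the kind NEST simulates. This is an illustration, not the research team’s code, and every value in it is made up for the example:

```python
# A toy leaky integrate-and-fire neuron: it accumulates incoming
# signal, fires once a threshold is crossed, then resets and briefly
# refuses to fire again (the recovery time mentioned below).

def simulate(inputs, dt=1.0, tau=10.0, threshold=1.0, refractory=2.0):
    """inputs: drive per time step (ms); returns spike times in ms."""
    v, recovery, spikes = 0.0, 0.0, []
    for step, current in enumerate(inputs):
        if recovery > 0.0:                  # still recovering from a spike
            recovery -= dt
            continue
        v += dt * (current - v / tau)       # leaky integration of the input
        if v >= threshold:                  # input strong enough: fire
            spikes.append(step * dt)
            v, recovery = 0.0, refractory   # reset, then sit out briefly
    return spikes

print(simulate([0.15] * 50))                # steady drive -> regular spikes
```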

The project is the result of work by researchers from the RIKEN HPCI Programme for Computational Life Sciences, the Okinawa Institute of Science and Technology Graduate University in Japan and Germany’s Jülich Institute of Neuroscience and Medicine.

In terms of speed, our brain’s neurons are actually quite slow at flicking their biological switches. They work on a timescale of milliseconds (they need time to recover after firing) and combine in exquisite patterns. But because of their sheer number, there’s still a lot of computing going on every second.

Electronic devices can switch much faster. The K computer used about one petabyte of memory, the equivalent of combining 250,000 home computers. But the simulation still took 40 minutes of computing to reproduce just one second of the brain’s neuronal network activity in real, biological time.
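
Put those two numbers together and the gap becomes clear. Again, this is rough arithmetic from the figures quoted in this article, nothing more:

```python
# How far from real time was the run, and how much memory per synapse?
wall_time_s = 40 * 60    # 40 minutes of computing...
brain_time_s = 1         # ...for one second of brain activity
memory_bytes = 1e15      # about one petabyte
synapses = 10.4e12

print(f"slowdown factor:   {wall_time_s / brain_time_s:,.0f}x")  # 2,400x
print(f"bytes per synapse: {memory_bytes / synapses:,.0f}")      # about 96
```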

Show and tell

Brain researchers like me are frankly impressed by these big numbers and computing power - they are the bedrock of being able to move towards understanding the actions of massive clusters of brain neurons. However, there’s more to understanding than simply building.

The research team freely admit this work was about showing what could be done with today’s technology - their simulations don’t yet address any significant questions about how our brains think. It’s a bit like building a super-connected motorway network, populated with simulated cars, but not yet looking at how that road network copes with the holiday rush.

There’s no doubt that such giant-scale simulations will soon yield answers to mysteries about how our brains operate, how we learn, how we perceive and perhaps even how we feel. But any simulation is only as good as the assumptions built into its software.

Even with the open-source NEST software, a detailed look reveals a range of parameters that need to be set, and changing these parameters can significantly change what you get out of the simulation.
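
For a sense of what those parameters look like in practice, here is a minimal sketch using NEST’s Python interface. The model and parameter names (iaf_psc_alpha, tau_m, V_th, t_ref) are standard NEST ones, but all the values are illustrative choices - change any of them and the network behaves differently:

```python
# A tiny NEST network; every number here is a modelling decision.
import nest

nest.ResetKernel()
neurons = nest.Create("iaf_psc_alpha", 100, params={
    "tau_m": 10.0,   # membrane time constant (ms)
    "V_th": -55.0,   # firing threshold (mV)
    "t_ref": 2.0,    # refractory period (ms)
})
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
nest.Connect(noise, neurons, syn_spec={"weight": 5.0})
nest.Connect(neurons, neurons,  # sparse recurrent wiring
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 2.0, "delay": 1.5})
nest.Simulate(1000.0)  # one second of biological time
```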

The simple switching metaphor is also only an approximation: we’re finding biological neurons to be far more complex in the way they process their signals. And then there are the interesting and not yet particularly well-understood effects of wide-scale neuromodulators such as nitric oxide, a compound present in the brain that has been implicated in changing neurons’ local excitability and firing properties.

Add to the mix the increasing evidence that glial cells (which don’t carry signals like neurons do, but make up around 10 to 50 times more of the brain’s mass than neurons) aren’t just there to provide physical support for the neurons but are an integral part of the brain’s function, and the computational problem gets even bigger.

Exquisite architecture

Perhaps most importantly, our brains have an exquisite architecture, from the distinct layers of the cortex to the orientation columns in the early visual areas. Throughout this structure, particular types of signals travel through particular configurations of neurons, and form dictates function.

The current K computer simulation impressively joins up lots of neurons to lots of other neurons, leveraging raw computing power, but in the future we’ll need to better understand how the brain uses functional and architectural specialisation to accomplish its work.

To fully model and understand our brain - in particular, to explain things we already know about brain function and to predict new properties that we can then test - we’ll need to bring together the skills and knowledge of research disciplines such as neuroscience and computer science, in the same way that the Japanese-led simulation pulls together the power of myriad computer processors.

And with Japan ready to develop the next-generation supercomputer by 2020 - 100 times faster than the K computer - and other countries set to compete or go further, there’s certainly much more to come.
