The human brain is a powerful supercomputer, yet it consumes very little power.
The brain also processes information with remarkable efficiency. Billions of neurons are densely connected to memory areas, which lets us retrieve the data we need for a decision, quickly make sense of it and move on.
That fundamental structure is what sets us apart from machines. It’s the reason we can think and feel and process millions of pieces of data in a fraction of a second every day, without our heads exploding.
Computers don’t work this way.
For decades, they’ve been built to perform calculations in a series of steps, while shuttling data between memory storage areas and processors.
That consumes a lot of power, and while computers are good at crunching huge volumes of information, they’re not so good at recognizing patterns in real time.
With funding from the Defense Advanced Research Projects Agency and partnerships from national laboratories, engineers at International Business Machines created a chip last year that could imitate the structure of the human brain, in the hope that it would lead to a more efficient model of computing.
The result has the potential to transform the way computers are built in the future, according to IBM, while consuming as much power as a hearing-aid battery.
IBM’s long-term goal is to build a “brain in a box” that consumes less than 1 kilowatt of power and yet can quickly identify patterns in large data sets, said Dharmendra Modha, IBM’s chief scientist for brain-inspired computing.
Applications for this technology range from national security to disaster response. That’s why IBM’s team and scientists from Lawrence Livermore, Oak Ridge and other national laboratories took a trip to Capitol Hill last week to demonstrate the technology before lawmakers.
Devices powered by the chip could be used to perform biosecurity checks by sifting through biological samples to identify harmful agents, or power autonomous spacecraft, or monitor computer networks for strange behavior, scientists said.
IBM’s flagship supercomputer, Watson, which is built on today’s computer architecture and consumes large amounts of power, exemplifies linear calculation, Modha said.
In contrast, the chip has the ability to recognize or “sense” its environment in real time, similar to what humans do with eyes and ears.
For instance, the chip has been used to play a game of Pong by “looking” at the ball and moving the paddle to meet it.
Although IBM says the technology could eventually power consumer products such as glasses for the visually impaired, those applications remain a long way off.
The company’s trip to Washington also coincides with the reintroduction of a bill in the House last week, the 2013 American Super Computing Leadership Act, which directs the Energy Department to explore the development of low-power supercomputers. The bill passed the House in 2014 but stalled in the Senate.
IBM won a $325 million contract with the Energy Department last year to develop, by 2017, two advanced supercomputers, named Summit and Sierra, that could potentially incorporate the new architecture.
One of the challenges engineers face at this point is the ability to scale up, or replicate, different brain sizes and program software to operate them, Modha said.
IBM has worked its way up from a worm-size brain, with 256 processors that simulate neurons, to a chip with 1 million of them, the equivalent of a bee brain. By the end of next year, the team hopes to build a mouse-size brain with 256 million processor-neurons, he said.
At 100 billion neurons, the human brain remains a distant dream.