2014 may be less than 48 hours old, but we already have the first big tech buzzword of the year: neuromorphics, which refers to a new class of “brainlike computers.” As outlined by John Markoff in a front-page story for the New York Times on Dec. 29, neuromorphic computing could “turn the digital world on its head” as computers finally become as smart as humans. In short, 2014 could be the year that engineers and artificial intelligence researchers start to create computers that think and act like humans because their “brains” have been designed to resemble ours.
As a scientific term, of course, neuromorphic computing is not exactly brand new. As Markoff notes, the term “neuromorphic processor” dates back to physicist Carver Mead, who coined it at Caltech in the late 1980s. And a related term for computers that function like human brains – “neural networks” – also has a relatively long pedigree, dating back to the 1950s, when the first modern computers were being created.
So why is “neuromorphics” making a comeback now?
One answer, quite simply, is that neuromorphic computing is finally ready to move from the academic ivory tower to the corporate boardroom. Companies such as IBM and Qualcomm are on the cusp of offering new “brainlike computers” to the consumer market; near the end of his article, Markoff mentions that Qualcomm is coming out with a “commercial version” of a neuromorphic processor. From a marketing perspective, the idea of a human brain powering your computer is genius. Intel used a simple slogan, “Intel Inside,” to let you know what was running your computer on the inside; imagine the marketing benefits of slapping a slogan like “Human Brain Inside” on any computer shipped to consumers.
Make no mistake about it, a “Human Brain Inside” computer would represent a number of revolutionary breakthroughs for what computers can accomplish. Imagine computers that learn from their own mistakes, computers that spot and recognize errors before they ever occur, computers that can walk and drive, and computers that can “see, speak, listen, navigate, manipulate and control.” In short, anything a human can do, a computer will be able to do, all without the need for humans to program them to do it.
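The phrase “learn from their own mistakes” has a concrete, decades-old meaning in this field: the perceptron rule, from the 1950s era of neural networks, updates a model only when it makes an error. Here is a minimal illustrative sketch of that idea; the function name and the toy data are my own, not anything from the article or from IBM’s or Qualcomm’s designs:

```python
# A minimal sketch of "learning from mistakes": the perceptron rule,
# one of the oldest neural-network learning algorithms.
# All names and data here are illustrative.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust weights only when the model gets an example wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = y - pred            # nonzero only on a mistake
            w[0] += lr * error * x1     # nudge weights toward the answer
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn the logical AND function purely from examples.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
target = [0, 0, 0, 1]
w, b = train_perceptron(data, target)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for x1, x2 in data]
print(predictions)  # [0, 0, 0, 1] -- it learned AND without being programmed for it
```

Nobody wrote an AND rule into the code; the behavior emerged from error correction alone, which is the (much, much smaller) ancestor of what today’s “brainlike” chips aim to do in hardware.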
It remains to be seen, however, whether other tech companies will adopt a buzzword like “neuromorphics” as a catch-all term for their futuristic new computing projects. There are, after all, a dizzying number of computing terms that describe nearly the same thing, all of which are mentioned in the New York Times article: biological computing, neural networks, cognitive computing, machine learning, artificial intelligence and “brainlike computer.” If you’re a marketer at a top tech company, do you really want to educate your consumers about something that sounds as intimidating as “neuromorphics”? It’s easier to get a concept off the ground in the tech world with something relatively easy to grasp, like “the cloud” or “4G.” So it’s easy to see how neuromorphics might soon be replaced by another term; from that perspective, “neuromorphics” is just another recycled fad from the 1980s.
However, consider how much progress has been made recently in understanding how the human brain works and in designing computing processes to mimic it. There was the Google cat experiment, in which Google trained a computer to recognize cats by watching 10 million YouTube clips, and the IBM supercomputer simulation of nearly 5 percent of the human brain. There was also the funding of a brand-new brain and machine-learning research center at MIT and the launch of a new Stanford MOOC on machine learning that attracted over 750 students. There’s something about the human brain that’s new and relevant and absolutely on-trend.
In April 2013, when President Obama announced the $100 million BRAIN Initiative to revolutionize our knowledge of how the human brain works, it was touted as a major breakthrough for science. That may well prove true if the initiative ultimately helps create a new class of “brainlike computers” that are better, smarter and faster than human brains.