It’s no wonder, then, that a group of Silicon Valley luminaries — including Elon Musk, Peter Thiel and Reid Hoffman — have lined up to contribute $1 billion to a new open-source AI project known as OpenAI that is led by Ilya Sutskever, one of the world’s top experts in machine learning. If you can open-source software and hardware, then why not open-source artificial intelligence, right?
1. What exactly are they going to do with that $1 billion?
For now, we don’t really know. The OpenAI website is basically just a single blog post outlining the organization’s manifesto and an “About” page listing the technologists and engineers working on the project. Thus far, we only have a long announcement from the founding members that they are going to do something amazing. Throw in some of the biggest tech names in the AI industry and the magic $1 billion number, though, and it’s easy to see why it has a lot of buzz.
The basic idea is that OpenAI, which will be structured as a nonprofit research company, will work on AI innovations that benefit humanity: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” But even the founders admit that only a “tiny fraction” of the $1 billion is going to be spent in the next few years.
2. Does OpenAI mark a change of heart by Elon Musk about the perils of AI?
In October 2014, Elon Musk famously remarked that “we are summoning the demon” by working on human-level AI. What he appears to be doing with OpenAI is ensuring that the demon doesn’t get unleashed: if you have a massive open-source AI community, then the odds of something going wrong are reduced. Knowledge in the right hands, freed of the profit directive, will produce the right types of AI innovations.
It’s not so much that Elon Musk is against AI — he is a backer of DeepMind, after all — it’s that he recognizes the perils of AI if it’s in the wrong hands. One big concern is the ability to make autonomous weapons that could attack targets without human intervention: the whole “Terminator” Skynet scenario.
3. So will OpenAI stop people from making autonomous weapons?
That’s the goal. In July 2015, Musk co-signed an open letter opposing the development of autonomous weapons, warning against “the Kalashnikovs of tomorrow.”
However, where there’s a will, there’s a way. Think about the world’s first 3D-printed gun, made by Defense Distributed. You can think of this as a form of nonprofit, open-source sharing gone haywire: Defense Distributed is giving away the designs and plans for the 3D-printed gun, telling you exactly how to make it. What’s to stop someone from doing the same thing for autonomous weapons?
4. Okay, will OpenAI at least stop the world from developing a malevolent superintelligence?
According to the project’s founders, the odds are a lot better now: “Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.”
But remember Heartbleed, the security flaw that wreaked havoc on the world’s computer systems in 2014? It wasn’t a virus; it was a bug in OpenSSL, a widely used piece of open-source software, where a programming blunder opened up a huge security hole. That’s the risk of any open-source project: it may be a little bit buggy. And a bug in open-source AI might just lead to the apocalyptic scenario everyone is now worried about.
5. What does OpenAI mean for the future of closed-source AI?
For now, the world’s leading tech companies are going to continue creating their own form of AI innovations, driven by the profit motive. Nearly all the biggest companies in Silicon Valley have waded into the AI field, in one way or another. Uber, for example, recently poached a team of 40 AI researchers at Carnegie Mellon University to build a self-driving car, and both Facebook and Google are heavily investing in AI.
Ultimately, there’s no guarantee that OpenAI will actually succeed, even with $1 billion in commitments. The open-source movement has its share of failures as well as successes. There’s nothing stopping a similar group of innovators from starting a separate, competing project, something along the lines of the Allen Institute for Artificial Intelligence. And if nonprofit researchers see their peers in the corporate sector raking in the big bucks, it may be difficult to resist the siren call of creating AI innovations for financial gain.