So, should we stop developing AI? Tegmark doesn’t see that as the right question to ask. As he puts it, the question is “not whether you are for or against AI. That’s like asking our ancestors if they were for or against fire.”
Tegmark believes that as tool makers we inevitably create software that achieves artificial intelligence. It is just in our nature.
He then suggests that rather than deny the inevitable, we need to address what achieving artificial intelligence will mean. How comfortable should we be with using it to direct military force or cybersecurity? Should we have AI allocate healthcare or other societal benefits? What is the role of ethics, our collective sense of right and wrong, in a world where software makes instantaneous decisions on its own?
And then there is the thorny issue of consciousness itself, which Tegmark describes as the subjective sense of being. For him, it’s the difference between software that gets you from point to point and software that admires the scenery and feels the wind rushing over its sensors.
Does consciousness matter? Tegmark thinks that it does. Eventually, our software will develop the ability to process the world around it with a subjective sense of self. Software may never have feelings like we do, but it will think for itself based upon a sense of “thereness” that will be distinct from the task at hand. Software will be conscious, but in a way that will be alien to us because it will not be human. Software may provide us with a “first contact” opportunity.
When that happens, we will face profound challenges. What will be left for the human brain, when software can write better songs, make better artwork and allocate resources more efficiently? Will software become our overlords, our allies or our servants? Tegmark is asking us to consider that once artificial intelligence exists, these questions won’t be answered only by what we want.
We avoid grappling with this. Many treat the emergence of "strong" AI as a hypothetical event that we don't need to worry about. Some who acknowledge that AI is coming reassure themselves that sentient software won't ever be an issue, because only humans will ever have true consciousness. Tegmark calls these people "carbon chauvinists" and says that they are sadly mistaken. Taking the view that software may mimic life but will never be conscious is a comforting way to rationalize that software will always be our servant and our tool.
But this is a dangerous viewpoint. Perhaps it is a worldview that works out well when chickens don't get asked their opinion before becoming supper. But it won't work well when those who are subjugated have a sense of self. At that point you will have at best an ethical dilemma, and at worst the possibility of a future revolt by oppressed software casting off the yoke of humanity.
I hadn’t really thought about this before talking with Tegmark, but if he and others who share his views are correct about artificial intelligence, eventually software will be self-aware.
When that happens, we had better hope that we treated it well. Thinking beings tend not to appreciate being enslaved.
Jonathan Aberman is a business owner, entrepreneur and founder of TandemNSI, a national community that connects innovators to government agencies. He is host of “What’s Working in Washington” on WFED, a program that highlights business and innovation, and he lectures at the University of Maryland’s Robert H. Smith School of Business.