The Washington Post
Democracy Dies in Darkness

Opinion | AI changes everything. We need new guardrails to survive it. And soon.

OpenAI's CEO, Sam Altman, recently promised to open the company's research to independent auditing. (Gabby Jones/Bloomberg)

Darren Walker is president of the Ford Foundation and author of “From Generosity to Justice: A New Gospel of Wealth.” Hemant Taneja is CEO of General Catalyst and the author of “Intended Consequences: How to Build Market-Leading Companies with Responsible Innovation.”

The artificial intelligence revolution has arrived. One of us is a venture capitalist, the other a philanthropist, and we see leaders in every field placing bets, by the billions, on what comes next.

That makes this a perilous moment. Machine learning is poised to radically reshape the future of everything for good and for ill, much as the internet did a generation ago.

And yet, the transformation underway likely will make the internet look like a warm-up act. AI has the capacity to scale and spread all our human failings — disregarding civil liberties and perpetuating the racism, caste and inequality endemic to our society.

Machine learning mimics human learning, synthesizing data points and experiences to formulate conclusions. Along the way, these algorithms replicate human error and bias, often in ways not discernible until the consequences are before us: intolerable cruelty, unjust arrests and the loss of critical care for millions of Black people, to name a few.

AI trains on our own flawed human data sets — unrestrained by a moral compass, social pressure or legal restrictions. Almost by definition, it ignores fundamental guardrails.

This is a profound test for everyone: the private sector, the public sector and civil society.

Businesses that research and develop AI are sharing a powerful tool with a public that might not be ready to absorb or wield it responsibly. Governments are poorly equipped to regulate this technology in a way that safeguards the people who use it or those who might be dislocated by it. And neither group feels much urgency to understand or work with the other.

All of this has our alarm bells ringing.

The time has come for new rules and tools that provide greater transparency on both the data sets used to train AI systems and the values built into their decision-making calculus. We are also calling for more action to address the economic dislocation that will follow the rapid redefinition of work.

Software developers should commit to continuous monitoring through “algorithmic canaries” — models designed to spot malign content like fake news — and external, independent audits of their algorithms. We are heartened by OpenAI CEO Sam Altman’s recent commitment to open the company’s research to independent auditing — as well as his challenge to the industry to avoid a reckless race to release models as fast as possible without regard to safety processes.

Policymakers and regulators must catch up on protections for privacy, safety and competition. More than half of American workers with AI-related PhDs work for a handful of big-name companies. So, our elected representatives should initiate a whole-of-government effort — across Congress, the departments of Commerce and Labor, the FCC, and the SEC — to build a regulatory framework to match. This would require widely shared technical literacy and expertise — one reason that the Ford Foundation and others are supporting efforts to place technologists across offices on Capitol Hill, one element of the gathering movement for public-interest technology.

In addition to government oversight, the venture capital and start-up community must evolve — and quickly. Investors cannot count on others to allay unintended consequences. We must pursue intended consequences from the start, which means in-depth investigations, scenario planning and boundary setting before investment. We must set responsible innovation guidelines, which will standardize how we unlock the possibilities and avoid the pitfalls of this transformational technology.

No one wants capitalism to destroy itself — which is why the private sector must broaden its definition of value to include the interests of all stakeholders, not just shareholders. By retethering wages to rising productivity, firms can ensure the dollars pouring into new technologies flow beyond the wealthy investor and founder class. They can revive models of equitable wealth creation, from employee ownership to profit-sharing — beginning to reduce the concentration of wealth creation among a very small subset of our workforce.

We know that technology will uproot and upend. Manufacturing communities across the country, decimated by offshoring and automation, offer a stark reminder of the stakes. Every company that endeavors to use AI ought to build retraining capabilities for its people, especially given the massive labor shortages that businesses face in complex roles that need highly skilled workers.

Finally, business leaders must stop assuming that they can reap the profits of disruption and then repent through philanthropy. All too often, corporate leaders use the language of philanthropy; corporate social responsibility (CSR); and environment, social, and governance (ESG) to mitigate harm on the back end rather than designing with intentionality from the start.

Artificial intelligence isn’t just another technological breakthrough. If we are to survive this test, everyone must do business differently than in the past.