This content is paid for by an advertiser and published by WP Creative Group. The Washington Post newsroom was not involved in the creation of this content. Learn more about WP Creative Group.
Content from IBM

IBM: The future of AI must be open to ensure safety, accuracy and inclusivity

Transparent and collaborative approaches to development will help make the technology a net benefit to humanity while reducing risks.


Artificial intelligence is evolving rapidly and promises to benefit businesses, governments and society at large. But it also presents risks. The dangers include deepfakes meant to mislead voters, misuse of intellectual property, and errors, known as “hallucinations,” that yield inaccurate or even dangerous information.

So what can be done to ensure that AI’s benefits outweigh its risks? The answer, according to experts who spoke at a recent Washington Post Live event in Washington, D.C., is that the development ecosystems around the technology must be open, transparent and collaborative. The thinking is that involving a diverse range of stakeholders, including the private sector, nonprofits, academia and government research arms, makes it less likely that risks will go unchecked and more likely that innovation will flourish.

Open-source-style transparency around the data sources that drive large language models also helps to build trust in their output. “The more bodies you have, the more eyes you have on the code and the more multi-stakeholder approaches you have to AI development, the better and safer and more innovative the technology is going to be,” said Christina Montgomery, vice president and chief privacy and trust officer at IBM. “Blind trust in the case of AI is really dangerous.”

The future is open

As a proof point for the value of openness, Montgomery cited the success of bug bounty programs, in which developers challenge the public to find software flaws. “We have long-established experiences where transparency helps to drive trust,” she said.

An open approach to AI development stands in contrast to the proprietary, closed models used by some high-profile developers of generative AI systems. Backers of closed models argue that limiting access to the code makes them more secure and less vulnerable to infiltration by bad actors. Montgomery countered that AI development that takes a page from open-source software like Linux is actually more secure, because more eyes on the code mean more chances to catch hackers. “We’ve long been a supporter of open source,” Montgomery said.

“When I think about open innovation, I think about the whole innovation ecosystem and the need for openness across the ecosystem.”

– Rebecca Finlay, CEO, Partnership on AI

One organization that promotes an open, collaborative approach to AI is the Partnership on AI, cofounded by IBM in 2016. The group offers resources and tools that programmers and non-programmers alike can use for AI development that is collaborative and transparent. “When I think about open innovation, I think about the whole innovation ecosystem and the need for openness across the ecosystem,” said Rebecca Finlay, CEO of the global nonprofit, speaking at the Washington Post Live event. “Transparency is the first step to accountability.”

Partnership on AI promotes inclusive design and encourages companies to include individuals from varied backgrounds and disciplines, including the humanities, in their AI development processes. “We need to create space for a real diversity of perspectives to come together,” said Finlay, adding that enterprises should “put forward solutions that will protect people and ensure that we’re developing AI for equity, justice and shared prosperity.”

Including users in the AI development process helps to ensure that the end result meets their needs. “How are we ensuring that teachers who know what they need in the classroom and the healthcare practitioners who know what they need in healthcare settings are working directly with model developers and model deployers to better understand how these systems can work for them?” said Finlay.


Trusted and transparent

Along those lines, Partnership on AI offers recommendations to help organizational leaders implement inclusive AI.

Another organization that advocates for an open approach to AI is the AI Alliance, co-founded in 2023 by IBM and Meta. The group, which includes more than 100 member organizations from private industry and from universities including Yale, the University of California, Berkeley and Notre Dame, contends that openness and transparency will hasten AI’s progress and mitigate its risks. The alliance is “really focused on open innovation and an open ecosystem,” said Montgomery.

This philosophy of openness is manifest in IBM’s watsonx development environment for generative AI. The platform incorporates numerous open-source tools and technologies and is built to facilitate collaborative development. Its governance module, watsonx.governance, provides end-to-end monitoring to accelerate responsible, transparent and explainable AI workflows.

Beyond increasing trust and safety, IBM and its partners contend that open development pathways will foster greater competition in the AI marketplace, leading to more innovation, economic growth and jobs. The idea is that open models make tools, resources and knowledge accessible and affordable to more people, increasing the size of the talent pool that can contribute ideas.


Regulation and national competitiveness

Creating an environment where innovation can flourish will also help to maintain the U.S.’s position as a world leader in AI, which in turn bolsters national security and competitiveness at a time when some nations are looking to weaponize the technology against adversaries. “We absolutely think the best way to protect national security is by being a leader in AI and fostering an innovative environment to foster and grow talent,” said Montgomery.

“It’s critical that we shy away from really prescriptive licensing regimes, which will stifle innovation.”

– Christina Montgomery, Chief Privacy and Trust Officer, IBM

Regulation also has a role to play in keeping AI safe and trustworthy, and IBM officials contend that rules should govern risk rather than the technology itself. “Companies and others deploying AI technology should be held accountable for the AI they’re putting out there, particularly in cases where it could have an impact on somebody’s fundamental rights or health,” said Montgomery. At the same time, “I think it’s really critical that we shy away from really prescriptive licensing regimes, which will stifle innovation,” she said. IBM has long advocated for what it calls “precision regulation”: rules focused on risk that protect open innovation and favor liability over blanket immunity.

In addition to judicious regulation, IBM and its partners believe the federal government can help shape an environment in which AI innovation thrives, both through standards organizations such as the National Institute of Standards and Technology and through publicly funded research. “The government can really take a role in ensuring that we’re incenting a good, publicly funded research system,” said Partnership on AI’s Finlay, noting that many technology breakthroughs, from smartphones to search algorithms, got their start this way. “That’s what can drive innovation upstream, which can drive innovation downstream,” she said.

The bottom line, according to IBM, is that open, transparent AI will be a net benefit to the world, with the potential to change how we live for the better while minimizing harms. “There’s a lot of progress to be made in understanding the risks and helping to address the risks and helping to establish the future in a responsible way,” said Montgomery.


Open and transparent development can help businesses thrive with AI while reducing risks.