A force for good
How businesses can drive trusted AI that can improve our lives
By WP Creative Group
May 31, 2024
Generative AI has the potential to revolutionize how business is conducted in countless industries. From writing code and generating marketing strategies, to maintaining legacy applications and modernizing systems around the world, the possibilities seem limitless.
But implementing this transformative technology carries risks for both the service provider and the end user. AI models can perpetuate harmful biases and spread misinformation if they are not implemented and managed with care. An improperly implemented and governed AI service could reinforce racial or gender-based stereotypes if it is informed by data that carries those biases.
The good news, however, is that for anyone looking to deploy AI effectively and ethically, there are more solutions available today than ever before.
There are pre-built, off-the-shelf AI models (that can still be customized), new data techniques that can help reduce risk while improving output, as well as open-source, community-based collaborations that could drive the generative AI movement.
A human-first approach to AI is central to IBM’s efforts to develop AI solutions, and much of this thinking is embodied in tools found in its watsonx AI platform. An ethical, human-first approach to AI means ensuring the technology is a net benefit for humankind.
“We’re developing a lot of techniques and technologies that are focused on trying to understand, detect and mitigate different risks and harms that can arise from traditional machine learning as well as now with large language models and foundation models,” says Kush Varshney, an IBM Fellow who directs the company’s Human-Centered Artificial Intelligence and Trustworthy Machine Intelligence teams.
Transparency and collaboration at the forefront
Explainability is fundamental to making AI transparent. It refers to the capability of different users to understand and articulate how an AI model arrived at a specific result. Many companies developing AI models design explainability primarily for developers, but that leaves behind other key constituents.
“When we think about explainability, we first start with different personas,” says Varshney. “Those could be the end user, it could be the affected party about whom decisions are being made or it could be regulators who are trying to ensure the safety or the right functioning of a model.”
Varshney points to the example of a loan applicant who’s been denied. For the applicant, knowing why the model made that decision is important, and is a much different aspect of explainability than, say, for the banker who assessed the risk, or a regulator establishing guidelines for the entire industry — or even an AI developer who wants to check that the system is generating results as expected.
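As a toy sketch of the applicant-facing side of this idea, the snippet below scores a hypothetical applicant with a simple linear credit model and breaks the decision down into per-feature contributions relative to an average applicant. The model, feature names, weights and baseline values are all illustrative assumptions, not IBM's actual tooling or any real bank's model:

```python
# Toy linear credit model: illustrative only, not a real scoring system.
FEATURES = ["income", "debt_ratio", "years_employed"]
WEIGHTS = {"income": 0.00002, "debt_ratio": -3.0, "years_employed": 0.15}
BIAS = -0.5
THRESHOLD = 0.0  # scores above this are approved

def score(applicant):
    """Raw model score: positive means approve."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def explain(applicant, baseline):
    """Per-feature contribution relative to a baseline applicant,
    so the end user can see which inputs drove the decision."""
    return {f: WEIGHTS[f] * (applicant[f] - baseline[f]) for f in FEATURES}

# Hypothetical denied applicant vs. an "average" baseline applicant.
applicant = {"income": 30000, "debt_ratio": 0.6, "years_employed": 1}
baseline = {"income": 50000, "debt_ratio": 0.3, "years_employed": 5}

decision = "approved" if score(applicant) > THRESHOLD else "denied"
contributions = explain(applicant, baseline)
```

Here every contribution is negative, with `debt_ratio` weighing most heavily, which is the kind of plain-language breakdown a denied applicant needs; a regulator or developer would instead inspect the weights and thresholds themselves.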
Varshney emphasizes the need to place the end user at the forefront here as well: In certain AI development processes, the developers themselves are the ones setting ethical standards. The issue is that every company, country, culture or user has a different understanding of what “ethical” means, as well as different priorities.
For IBM, it’s essential to allow policymakers at companies to determine the values that their AI systems will adhere to, with developers helping them monitor whether the AI model reflects those agreed-upon goals. “The data scientists or folks like that shouldn’t be the ones determining the policies,” says Varshney. “A big part of our methodology is to ask just a few precise questions in order to elicit the different risks that an organization should worry about.”
AI to serve people
A human-first, ethical approach to AI is about serving individual causes or solving larger challenges that can move humanity forward. But as with any new technology, the generative AI tools themselves can often become the focus, rather than the uses. This is something that IBM and others are actively guarding against.
To help move AI in this direction, IBM and over 85 other organizations across the technology industry and academia have come together to launch the AI Alliance, which aims to drive alignment across the AI field on key concepts and best practices. The group is united in a shared mission to advance safe and responsible AI rooted in open innovation.
The commitment to ensuring AI work is conducted in the open has also manifested in projects like InstructLab, which IBM and Red Hat recently open-sourced to the community. The goal is to put large language model (LLM) development into the hands of developers and make fine-tuning an LLM as simple as contributing to any other open-source project.
IBM has also open-sourced a family of Granite code models, released under an Apache 2.0 license. The Granite 7B-lab English language model has already been integrated into the InstructLab community, where users can contribute skills and knowledge to collectively enhance the model at speeds previously unheard of in AI model development.
By sharing insights on the models’ training and inviting others to participate in the project, IBM and Red Hat are creating a living, breathing example of collaborative AI.
“AI is a representation of human knowledge and human creativity,” says Varshney. “So openness is a way to expand the breadth of human experience showcased by the technology.”
A new form of data to help preserve privacy
To help build AI models efficiently, IBM has turned to synthetic data. This is data generated by AI that mirrors real-world data for training another AI model, explains Kate Soule, program director of generative AI research at IBM.
In an industry like finance, companies can help protect consumer privacy by stripping out identifiable information and instead using synthetic data that mimics consumer data to train a model. This ability to protect privacy is important not only in the financial industry, but also in healthcare, legal practice and virtually any industry where consumer data is exchanged. It’s also valuable when there just isn’t enough real data to effectively train a model, which is often the case for many healthcare applications.
The process of creating synthetic data is also helpful in and of itself. While generating synthetic data, an organization can clarify the cause-and-effect relationship between events and filter out correlations and other poor statistical signals that may result in less accurate model outputs.
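A minimal sketch of this idea, assuming made-up field names and a simple normal-distribution model per numeric field: fit summary statistics on a handful of "real" records, drop the identifying fields entirely, and sample fresh synthetic records from those distributions. Real synthetic-data pipelines use far richer generative models; this only illustrates the privacy mechanics:

```python
import random
import statistics

# Hypothetical customer records: names and SSNs are identifiers we must not reuse.
real_records = [
    {"name": "A. Lee", "ssn": "xxx-xx-1234", "balance": 1200.0, "age": 34},
    {"name": "B. Cruz", "ssn": "xxx-xx-5678", "balance": 5400.0, "age": 51},
    {"name": "C. Kim", "ssn": "xxx-xx-9012", "balance": 800.0, "age": 29},
]

NUMERIC_FIELDS = ["balance", "age"]  # identifying fields are deliberately excluded

def fit(records):
    """Estimate mean and standard deviation for each numeric field."""
    return {
        f: (statistics.mean(r[f] for r in records),
            statistics.stdev(r[f] for r in records))
        for f in NUMERIC_FIELDS
    }

def synthesize(params, n, seed=0):
    """Sample n synthetic records; no identifying fields survive."""
    rng = random.Random(seed)
    return [
        {f: rng.gauss(mu, sigma) for f, (mu, sigma) in params.items()}
        for _ in range(n)
    ]

params = fit(real_records)
synthetic = synthesize(params, n=100)
```

Because the synthetic records are sampled from aggregate statistics rather than copied, no individual's name, SSN or exact values appear in the training set, while the overall distribution a model learns from stays realistic.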
“Real data has real problems,” says Soule. “It’s limited and expensive. It can be biased. With synthetic data, you can improve upon and enrich data in ways that you wouldn’t be able to do if you were just trying to run an experiment or collect data in the real world.”
The bias Soule refers to arises when a model is trained on real-world data that reflects human biases from the environment in which it was collected, or on data that is limited in scope: inputs from only one country or culture, one company or one type of end user. A model trained on inherently biased data then perpetuates that bias in all future data it generates, distorting end results. “Synthetic data isn’t a silver bullet solution,” Soule cautions, but it is becoming an incredibly valuable tool for mitigating some forms of bias in AI systems.
A thoughtful approach
Travis Rehl, chief technology officer and head of product at Innovative Solutions, regularly helps business leaders approach AI thoughtfully, which usually means taking a beat to determine best practices and an ideal rollout. Innovative Solutions, a cloud and data consultancy and IBM partner, helps startups and small to medium-sized businesses build, implement and manage data projects in the cloud, a process enhanced by powerful AI models. The team began using generative AI three years ago and now relies on watsonx to monitor, track and record model performance.
watsonx enables the creation of AI models that are trained and tuned with carefully curated data from a customer’s internal or external sources. Through watsonx, enterprise users can filter, fine-tune, train and vet the data that drives their generative AI models, applications and services.
While watsonx helps Rehl and his team manage AI, their process for determining ideal use cases combines software, hardware and a human touch. “Honestly, we like to do it live,” he says. “We like to understand the business process, walk through the data, and get agreement on what ‘accuracy’ really means for our customer’s business. Then we’ll have engineers test their idea in real time.”
As long as key individuals continue to embody values like transparency, safety, ethics and collaboration, business leaders will have a clearer path to success and society as a whole will be more assured of the benefits of AI.
Learn more about IBM’s approach to ethical AI