Artificial intelligence is driving real business innovation
How organizations can build and scale game-changing generative AI services
By WP Creative Group
June 14, 2024
From boosting enterprise productivity to improving educational services, AI is having a transformative impact on businesses and society. Its potential to drive progress extends to a wide range of organizations, from small startups to multinationals and government agencies.
“If you look at generative AI and where it’s going, it is one of the most transformative technologies that has come around in decades,” said Sri Elaprolu, Head of the AWS Generative AI Innovation Center, speaking at a recent Washington Post Live event. Indeed, organizations are already using generative AI to drive applications that improve operations, increase customer loyalty, boost productivity and accelerate innovation.
So how can organizations make the most of generative AI? Elaprolu said AWS sees three broad use cases.
The first is reimagining customer and stakeholder experiences through new products, innovations and interfaces. Elaprolu cited the example of the online learning platform Coursera, which works with more than 300 educator partners to offer over 7,000 courses and certifications.
“Generative AI is one of the most transformative technologies in decades.”
– Sri Elaprolu, Head of the AWS Generative AI Innovation Center
AWS worked with Coursera to build a solution, powered by generative AI and foundation models, that provides students with insightful, structured feedback and fair evaluations while improving the user experience. The solution quickly generates feedback and insights that help students achieve their learning objectives.
As part of this, Coursera used Retrieval Augmented Generation (RAG) to generate feedback – retrieving relevant course material from an authoritative knowledge base and supplying it to the model so that its responses are accurate and responsible.
“You want that material to be the source, not something that’s made up or is a hallucination,” said Elaprolu. Overall, Coursera has “enhanced the speed and the efficiency of the workflow that they have in place for student assessments on a grand scale,” he added.
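For readers who want a concrete picture of the pattern, here is a minimal Python sketch of RAG as described above: retrieve authoritative passages, place them in the prompt, then hand the grounded prompt to a foundation model. The knowledge base, the keyword-overlap retrieval and the call_llm() placeholder are illustrative assumptions, not Coursera’s or AWS’s actual implementation.

```python
# A minimal sketch of the Retrieval Augmented Generation (RAG) pattern described above.
# The knowledge base, the scoring function and call_llm() are illustrative stand-ins.

def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank passages by simple keyword overlap with the query (a stand-in for
    a real vector-similarity search) and return the best matches."""
    query_terms = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda passage: len(query_terms & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Place authoritative course material in the prompt so the model answers
    from that source instead of hallucinating."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the student's question using ONLY the course material below.\n"
        f"Course material:\n{context}\n\n"
        f"Question: {question}\n"
    )

# Example usage with a hypothetical foundation-model call:
knowledge_base = [
    "Rubric: a passing essay must state a thesis and cite two sources.",
    "Week 3 covers gradient descent and learning-rate selection.",
]
prompt = build_grounded_prompt(
    "Why did my essay lose points?", retrieve("essay rubric thesis", knowledge_base)
)
# response = call_llm(prompt)  # call_llm() is a placeholder for any foundation model API
```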
In a second popular use case, organizations are using generative AI to boost employee productivity. For instance, AWS helped Data Friendly Space, a Virginia-based organization that provides humanitarian groups with data and analytics tools, to build its GANNET solution. GANNET offers a generative AI assistant that helps users employ data sources more effectively and efficiently. That’s critical, as these organizations must often deal with vast amounts of complex data to fulfill their missions.
“The platform allows you to surface insights that humanitarian organizations around the globe can use to bring interventions and put effective policies in place,” said Elaprolu.
AWS customers are also using generative AI to improve internal efficiency. To that end, AWS is working with Pfizer to help the pharmaceutical giant use LLMs to generate drafts for patent applications and produce medical and scientific content that is reviewed by experts for accuracy.
Planning and prioritizing are key
Going from a proof-of-concept model to a working solution that supports millions of users while running billions of inferences takes careful planning. Given the vast range of possibilities for AI, organizations must develop a methodology for prioritizing which projects to invest in. One of the most useful measures is return on investment (ROI), which also provides internal accountability. “ROI is something that you want to keep in mind, because there are a lot of shiny experiments that you can drive,” said Elaprolu.
With the technology rapidly evolving, organizations must also be agile when it comes to choosing and implementing AI software and services. “The foundation models are improving almost weekly now, so just keep up with that,” said Elaprolu, emphasizing that organizations must be prepared to evaluate new models as they emerge so they don’t get left behind.
Developers should also prioritize security and privacy in the early planning stages. Baking security and privacy into the solution from the beginning helps to ensure that it will act in the best interest of users. “You want to be building securely and responsibly from day one,” Elaprolu said. And organizations must continue to test and improve the new service once it is in production.
Creating generative AI solutions inevitably involves tradeoffs, which often come down to balancing cost, accuracy and latency. Which to prioritize depends on the intended use. Some applications, such as real-time chatbots, must be instantly responsive and highly accurate, so neither latency nor accuracy can be sacrificed. In such cases, controlling costs may take a lower priority, as organizations must invest the funds necessary to ensure a great user experience.
Conversely, a back-office application, such as batch processing invoices, may be able to sacrifice some speed in favor of cost savings and accuracy. “Don’t settle on one approach for all things,” said Elaprolu.
“Don’t settle on one approach for all things.”
– Sri Elaprolu, Head of the AWS Generative AI Innovation Center
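A minimal Python sketch of what “don’t settle on one approach” can look like in practice: route each workload to a different model tier depending on whether latency, accuracy or cost matters most. The workload profiles, tier names and routing rules below are illustrative assumptions, not AWS recommendations.

```python
# A minimal sketch of routing workloads to different model tiers based on priorities.
# The tiers and rules are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    latency_sensitive: bool   # must respond in real time?
    accuracy_critical: bool   # are wrong answers costly?

def choose_model_tier(profile: WorkloadProfile) -> str:
    """Pick a (hypothetical) model tier that matches the workload's priorities."""
    if profile.latency_sensitive and profile.accuracy_critical:
        return "fast-high-accuracy-model"   # pay more to protect the user experience
    if profile.accuracy_critical:
        return "large-batch-model"          # accept slower, cheaper batch runs
    return "small-economy-model"            # optimize for cost

print(choose_model_tier(WorkloadProfile("customer chatbot", True, True)))
print(choose_model_tier(WorkloadProfile("invoice batch processing", False, True)))
```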
Trusted foundations
One of the biggest concerns about AI is the risk it poses to the organization. These risks can be operational, financial or strategic. Therefore, it’s imperative for organizations to create a robust risk assessment program that covers all their AI initiatives. This can help to improve decision-making, increase compliance and build trust with users. “Innovation is important. It’s going to unleash a lot of new creative solutions, but if you’re not going to do that securely and responsibly it’s not going to work for you in the long run,” said Elaprolu.
The good news, he said, is that “it is absolutely possible to achieve both and do it at scale. We’re seeing companies, enterprises, public sector organizations do it daily.”
With more organizations implementing robust risk management programs around AI, public confidence in the technology is growing. Seventy percent of consumers now believe that the benefits of generative AI outweigh the risks.*
Also helping to boost confidence is the fact that LLMs are becoming more accurate, thanks to new tools and techniques.
“For us at AWS, it is the most important thing – building gen AI solutions and services responsibly and securely. And not only us doing it, but we are extending a lot of capabilities to our customers so that they can build solutions responsibly and securely as well,” said Elaprolu. AWS’s generative AI toolbox includes guardrails, which help to ensure that output from an LLM conforms to an organization’s policies and guidelines. Guardrails can be used to filter out toxicity, bias and key phrases.
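To make the guardrail idea concrete, here is a minimal Python sketch of the pattern: screen a model’s output against an organization’s policies before it reaches the user. The blocked phrases and the keyword check are illustrative assumptions; a production system would rely on a managed guardrail capability and real toxicity classifiers rather than simple string matching.

```python
# A minimal sketch of the guardrail pattern: check model output against policy
# before it reaches the user. Policy lists and checks are illustrative only.

BLOCKED_PHRASES = {"confidential pricing", "internal roadmap"}  # assumed policy terms
TOXIC_MARKERS = {"insult_example"}  # placeholder for a real toxicity classifier

def apply_guardrail(model_output: str) -> str:
    """Return the output unchanged if it passes policy checks, otherwise a safe refusal."""
    lowered = model_output.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES | TOXIC_MARKERS):
        return "I can't share that. Please contact support for help with this request."
    return model_output

print(apply_guardrail("Here is our internal roadmap for next quarter."))
print(apply_guardrail("Your order has shipped and should arrive Tuesday."))
```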
The future of AI
It’s still early days for generative AI, but the technology is already driving innovation at a mind-boggling pace. The next phase of its evolution will see intelligent, AI-based agents working in concert to automate entire workflows, such as analyzing, summarizing and taking action on order processing. “We’re seeing a significant increase in interest and actual implementations using agents,” said Elaprolu.
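As a rough illustration of that agent pattern, the sketch below chains three small, specialized “agents” to analyze, summarize and act on an order. The agents, the order data and the orchestration are hypothetical stand-ins, not a real AWS agent framework.

```python
# A minimal sketch of agents working in concert to automate an order-processing workflow.
# Each function is a hypothetical stand-in for a specialized agent.

def analyze_order(order: dict) -> dict:
    """Agent 1: extract the facts the rest of the workflow needs."""
    return {"order_id": order["id"], "in_stock": order["quantity"] <= order["inventory"]}

def summarize(analysis: dict) -> str:
    """Agent 2: turn the analysis into a readable summary (a real system would use an LLM)."""
    status = "can be fulfilled" if analysis["in_stock"] else "is back-ordered"
    return f"Order {analysis['order_id']} {status}."

def take_action(analysis: dict) -> str:
    """Agent 3: act on the result - ship the order or open a restock ticket."""
    return "ship" if analysis["in_stock"] else "open_restock_ticket"

# The orchestrator chains the agents into one automated workflow.
order = {"id": "A-1042", "quantity": 3, "inventory": 10}
analysis = analyze_order(order)
print(summarize(analysis), "->", take_action(analysis))
```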
Generative AI is also quickly evolving to include multi-modal capabilities that go beyond ingesting and generating text. It’s becoming easier to use images, audio and video both to prompt a model and to generate outputs. This opens new possibilities for automation in marketing, content production, image classification and more.
The bottom line is that the potential applications of generative AI are almost limitless. “Think big, think really big,” said Elaprolu. “And if you need any help, we’re here.”
Sources
* KPMG