True paradigm shifts are rare, which helps to explain the buzz around ChatGPT, a chatbot driven by so-called generative artificial intelligence that promises to revolutionize the way people interact with computers. It’s become a global sensation since its November launch by giving seemingly sophisticated yet plain-language answers to almost any kind of question. Technology giants such as Microsoft Corp., Google and Baidu Inc. are betting heavily on this new technology, which has the potential to upend the lucrative search market, even as its wider use is turning up potentially serious flaws.
1. What is generative AI?
These systems use neural networks, which are loosely modeled on the structure of the human brain and learn to complete tasks in a similar way, chiefly through trial and error. During training, they’re fed vast amounts of information (for example, every New York Times bestseller published in 2022) and given a task to complete using that data, perhaps: “Write the blurb for a new novel.” Over time, they’re told which words and sentences make sense and which don’t, and subsequent attempts improve. It’s like a child learning to pronounce a difficult word under the instruction of a parent: slowly, the child learns and applies that ability to future efforts. What makes these systems so different from older computer programs is that the results are probabilistic, meaning responses will vary each time but will gradually get smarter, faster and more nuanced.
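To make the probabilistic idea concrete, here is a toy sketch in Python. It is not how GPT-style models actually work internally (they use large neural networks, not word counts); it is a deliberately simplified bigram model whose names (`follows`, `generate`) are invented for illustration. It shows the two properties described above: the program "learns" patterns from example text, and because it samples from those patterns, its output can differ from run to run.

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in corpus for the vast training data described above.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Training": count which word tends to follow which (a crude bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length, rng):
    """Sample a chain of words; because we sample, each run can differ."""
    word, out = start, [start]
    for _ in range(length - 1):
        options = follows.get(word)
        if not options:
            break  # no known continuation, stop early
        words, weights = zip(*options.items())
        word = rng.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

# Two runs with different seeds can produce different sentences.
print(generate("the", 6, random.Random(1)))
print(generate("the", 6, random.Random(2)))
```

Every sentence the toy model emits is built only from transitions it saw during "training" — which is why more (and better) training data gradually improves the output.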
2. How does ChatGPT work?
ChatGPT is the latest iteration of GPT (Generative Pre-trained Transformer), a family of text-generating AI programs developed by San Francisco-based laboratory OpenAI. GPTs are trained in a process called unsupervised learning, which involves finding patterns in a dataset without being given labeled examples or explicit instructions on what to look for. The most recent version, GPT-4, builds on its predecessor, GPT-3.5, which ingested text from across the web — including Wikipedia, news sites, books and blogs — in an effort to make its answers relevant and well-informed. ChatGPT adds a conversational interface on top of the program. At their heart, systems like ChatGPT are generating convincing chains of words but have no inherent understanding of their significance, or whether they’re biased or misleading. All they know is that they sound like something a person would say.
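The last point — fluency without understanding — can be sketched with a toy scorer. This is a hypothetical illustration, not OpenAI's method: the `familiarity` function and its tiny corpus are invented here. It rates a sentence only by how often its word-to-word transitions appeared in the training text, so a fluent falsehood can score exactly as well as a fact.

```python
from collections import Counter, defaultdict

# Illustrative training text; a real model ingests billions of words.
training = ("paris is the capital of france . "
            "london is the capital of england .").split()

# Unsupervised learning in miniature: extract patterns (here, adjacent
# word pairs) with no labels and no notion of what the text means.
pairs = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    pairs[prev][nxt] += 1

def familiarity(sentence):
    """Score a sentence by how often its word transitions appeared in
    training. A high score means 'sounds like the data', not 'is true'."""
    words = sentence.split()
    return sum(pairs[a][b] for a, b in zip(words, words[1:]))

# A fluent falsehood scores as well as a fact, because the model only
# checks that the chain of words resembles what it has seen.
print(familiarity("paris is the capital of france"))   # fact
print(familiarity("london is the capital of france"))  # fluent falsehood
```

Both sentences get the same score: every word pair in each one is equally "familiar" from training, which is the essence of why these systems can state errors with total confidence.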
3. Who is behind OpenAI?
OpenAI was co-founded as a nonprofit by programmer and entrepreneur Sam Altman to develop AI technology that “benefits all of humanity.” Early backers included LinkedIn co-founder Reid Hoffman’s charitable foundation, Khosla Ventures and Elon Musk, who ended his involvement in 2018. OpenAI created a for-profit arm in 2019, when Microsoft invested $1 billion.
4. What’s been the response to ChatGPT?
More than a million people signed up to use it following the launch in late November. Social media has been abuzz with users trying out fun, low-stakes uses for the technology. Some have shared its responses to obscure trivia questions. Others marveled at its sophisticated historical arguments, college “essays,” pop song lyrics, poems about cryptocurrency, meal plans that meet specific dietary needs and solutions to programming challenges. The flurry of interest also raised the profile of OpenAI’s other products, including software that can beat humans at video games and a tool known as Dall-E that can generate images — from the photorealistic to the fantastical — based on text descriptions.
5. Who’s going to make money from all this?
Tech giants like Microsoft have spotted generative AI’s potential to upend the way people navigate the web. Instead of leaving users to scour dozens of links on a topic, or firing back a snippet of relevant text from a website, these systems can deliver a bespoke response. Microsoft deepened its relationship with OpenAI in January with a multiyear investment valued at $10 billion, a deal that gives Microsoft a claim on a share of OpenAI’s future profits and gives OpenAI access to the computing power of Microsoft’s Azure cloud network. In February, Microsoft integrated a cousin of ChatGPT into its search engine Bing. The announcement was a challenge to rival search giant Google, which responded by previewing its own conversational AI service, Bard. However, questions remain about how to monetize search when there aren’t pages of results into which to insert ads.
6. How’s the competition going?
OpenAI has spent the months since ChatGPT’s release refining the program based on feedback identifying problems with accuracy, bias and safety. GPT-4 is, the lab says, “40% more likely” than its predecessor to produce factual responses, and is also more creative and collaborative. In Bloomberg tests, it still struggled to compose a cinquain poem about meerkats and regurgitated gender stereotypes. Google’s Bard got off to a rocky start when it made a factual mistake during a public demonstration in February, sparking concerns that the company had lost ground in the race for the future of search. China’s Baidu disappointed investors with a demo of its “Ernie Bot” on March 16, showing only a scripted video of interactions with the AI. Facebook parent Meta Platforms Inc. has been hurrying to assemble a generative AI product group from teams previously scattered throughout the company.
7. What other industries could benefit?
The economic potential of generative AI systems goes far beyond web search. They could allow companies to take their automated customer service to a new level of sophistication, producing a relevant answer the first time so users aren’t left waiting to speak to a human. They could also draft blog posts and other types of PR content for companies that would otherwise require the help of a copywriter.
8. What are generative AI’s limitations?
The answers ChatGPT pieces together from second-hand information can sound so authoritative that users may assume it has verified their accuracy. What it’s really doing is spitting out text that reads well and sounds smart but might be incomplete, biased, partly wrong or, occasionally, nonsense. These systems are only as good as the data they’re trained on. Stripped of useful context, such as the source of the information, and with few of the typos and other imperfections that often signal unreliable material, ChatGPT’s content can be a minefield for those who aren’t well-versed enough in a subject to notice a flawed response. This issue led Stack Overflow, a question-and-answer website for computer programmers, to ban ChatGPT-generated answers because they were so often inaccurate.
9. What about ethical risks?
As machine intelligence becomes more sophisticated, so does its potential for trickery and mischief-making. Microsoft’s AI bot Tay was taken down in 2016 after some users taught it to make racist and sexist remarks. Another developed by Meta encountered similar issues in 2022. OpenAI has tried to train ChatGPT to refuse inappropriate requests, limiting its ability to spout hate speech and misinformation. Altman, OpenAI’s chief executive officer, has encouraged people to “thumbs down” distasteful or offensive responses to improve the system. But some users have found work-arounds. Generative AI systems might not pick up on gender and racial biases that a human would notice in books and other texts. They are also a potential weapon for deceit. College teachers worry about students getting chatbots to do their homework. Lawmakers may be inundated with letters apparently from constituents complaining about proposed legislation and have no idea if they’re genuine or generated by a chatbot used by a lobbying firm.
--With assistance from Alex Webb and Nate Lanxon.
©2023 Bloomberg L.P.