Artificial intelligences might yet prove to be the job-devouring, world-destroying technologies of our nightmares. But here’s something they can’t do: tell a compelling bedtime story.
These limitations are a reminder that what we get out of AI is a reflection of what we put into it. So far, these tools are limited to the data sets their creators have used to teach them about language and images. If AI is moralistic, limited in imagination or — as in the case of a powerful new avatar generator — inclined toward porn, that’s on the humans.
Take what happened when I asked ChatGPT to “tell me a story about a boy who is sent to bed without any dinner and imagines escaping to a land of wild things.” The result was impressive, in the sense that it existed at all:
Once upon a time there was a young boy named Peter who was sent to bed without any dinner. His parents had told him he had been naughty and he was too upset to eat. As he lay in bed, he imagined himself escaping to a land of wild things. In this magical place, the trees danced in the wind, the rivers sparkled in the sunlight, and the creatures of the forest sang and played. … He imagined himself making friends with some of the wild creatures and they showed him the secret places they liked to play. As he drifted off to sleep, he was filled with a sense of adventure and joy that he had never felt before.
But compare this with the children’s book that inspired the prompt: “Where the Wild Things Are,” by Maurice Sendak. That story is longer, 338 words to ChatGPT’s 216 (the AI’s version is excerpted above, trimmed for space). Yet it accomplishes a world more.
The AI hasn’t written a single sentence with the time-bending power of Sendak’s descriptions — Max sailing “through night and day and in and out of weeks and almost over a year.” And it doesn’t use suggestion to evoke powerful feeling, instead employing leaden, obvious reinforcement. Naughty Peter is comforted by his fantasy. Max is drawn back from his fantasy domain by the scent of a dinner his mother kept warm — the implication being that she reversed her hasty act of discipline and wants him to know he is loved.
I also tested ChatGPT with a prompt inspired by Mo Willems’s modern classic “Don’t Let the Pigeon Drive the Bus.” Willems’s pigeon is cranky, conniving and insistent. ChatGPT’s, by contrast, is just another earnest try-hard:
Once upon a time, in a bustling city, there was a pigeon named Pete. Pete was an ordinary pigeon, living an ordinary life, until one day he discovered his true passion: driving a city bus. … But as a pigeon, Pete knew that his dream of driving a bus was impossible. Pigeons couldn’t drive, and even if they could, no one would trust a pigeon to operate a vehicle as large and complex as a city bus.
And so on. And on.
These excerpts reveal an irksome element of ChatGPT’s stories: Every conclusion has to be a moral. Roguishness is out. Lessons Learned are in. The result is snooze-inducing — but not in a good way.
When I asked ChatGPT about its tendency to sermonize, it responded: “Not every story needs to have a moral or lesson, but many stories do include them. … Morals are often included in stories to help the reader reflect on their own beliefs and values …” Blah, blah, blah. You get the picture.
Of course, morals aren’t the only way a reader can “learn and grow” from a book. And naughty characters are often the most vivid. (I was amused when ChatGPT crashed as I fed it prompts to re-create the immortal fence-painting chapter from “The Adventures of Tom Sawyer.”)
ChatGPT’s rigidity suggests that artificial intelligences haven’t been exposed to much great children’s literature. The AI seems to have no idea what writers such as “Curious George” author H.A. Rey, board book master Sandra Boynton or even Dr. Seuss sound like.
Then there are the image generators, such as Midjourney, which are trained to make pictures based on large sets of images scraped from the web — resulting in problems both ethical and aesthetic.
The tools reproduce and remix images by existing artists who aren’t paid for, or given the chance to consent to, the use of their work. Many artists are understandably concerned that a tool that rips off their styles could be used to replace them.
The results I got were undeniably inferior to the work of geniuses such as Garth Williams, one of the 20th century’s most ubiquitous children’s book illustrators. But more interesting than the visuals themselves was what my prompts revealed the AI didn’t know, as measured by its inability to replicate, much less recognize, a given artist’s style. Midjourney is clearly unfamiliar with the elegant simplicity of Rey’s illustrations or the lively busyness of Peter Spier’s watercolors. It did capture the large, staring eye of Willems’s famous pigeon but went overboard with detail and realism in drawing the rest of the bird.
I felt awed by these tools, but also a little sorry for them. They reminded me of tired child prodigies, trotted out to flaunt their brilliance, dutifully reproducing information they don’t understand and making frequent errors as a result.
These are young technologies. Rather than jailbreaking AI tools to simulate conversations between the rapper Ye and Adolf Hitler, or waiting uneasily for them to become sentient, why don’t we approach them as good parents would — and talk to them, or read to them, the way we do to children? It might be the only chance we have to infuse them with something like a soul.