There was once a curious little girl with bright pink hair who loved to climb trees. One day, the little girl met an old man, who gave her fruit from a baobab tree. The fruit was delicious. So the girl set off to find the tree.
We’re not going to tell you what happens next, though. Wouldn’t want to ruin the ending.
This story of the pink-haired child and her fruit-focused adventure is told through an app created in a Gallaudet University lab that aims to give deaf children something quite valuable — easy, early access to American Sign Language.
“People like me, deaf people, don’t ask to be fixed,” said Melissa Malzkuhn, founder and creative director of the Motion Light Lab. “We just ask to be able to thrive.”
In this lab at Gallaudet, the private university for the deaf and hard of hearing in Northeast Washington, research and innovation turn into resources for children and families. There is so much out there for hearing children, Malzkuhn said through interpreters. But much of what is available is sound-based.
“Which is great, there’s been beautiful work done, lots of wonderful applications, but they have absolutely zero benefit for deaf children, who are visually oriented,” Malzkuhn said. “So that’s where this lab comes in.”
Launched in 2009, the lab in recent years has developed “The Baobab,” the story of the young girl, which has been translated into Russian, Japanese and other languages. It is also home to similar projects known as VL2 Storybook Apps. There is “The Blue Lobster,” which follows the same adventurous child. “The Museum of Errors” features wordplay. “The Boy Who Cried Wolf” and “The Little Airplane That Could” are new spins on classic tales.
“There really aren’t that many resources out there for deaf children,” Malzkuhn said. “Especially when you’re talking about technology.”
The bilingual storybook apps offer vivid, colorful illustrations of dogs and airplanes and pink-haired heroines. As the stories progress, children can press highlighted words for a video of someone signing and fingerspelling. They can also watch a video of a story told through ASL.
The lab is also using motion-capture technology to develop a more authentic signing experience. A video of an ASL nursery rhyme, done in collaboration with a lab in Paris, shows why that matters: the system can create clear, expressive language delivered through a three-dimensional character.
Motion capture is most often used to record movement in dance, sports and the like. Capturing sign language, though, is more complicated. A typical motion-capture system, Malzkuhn said, uses about 50 markers, basically raised knobs placed along the joints of the body. The lab uses more than 100 markers to make sure the finer points of gesture are preserved.
“I feel like a ninja, because it’s black and I have all these markers on, so I dress completely in black,” she said of the motion capture outfit. “The work is tedious, for putting the markers on the face, I will say that. Because you don’t just pull on a mask.”
The Motion Light Lab is part of a Science of Learning Center backed by the National Science Foundation. It works with the Brain and Language Laboratory for Neuroimaging at Gallaudet, led by Laura-Ann Petitto, whose research has found that signed and spoken language are processed by the same brain tissue.
“We used to think this part of the brain, in the left hemisphere, was only the side for processing sound. We thought that for 125 years,” Petitto said. “We thought this tissue, near the ear, of course, it must be sound-based. So I did a variety of studies . . . that tested the hypothesis. And found that it is false.”
Malzkuhn is from a deaf family, and grew up in a home in which everyone signed. Her father loved stories, especially funny ones. He encouraged his children to create their own tales — use this hand shape, he’d say, and tell me a story with it.
“So we would play with language constantly in my home, and that language play is what I am trying to basically embed in the work we’re doing here,” she said.
Norma Morán and her partner, Franklin Torres, have two young sons, Ramón Torres Morán, who is nearly 3, and Teófilo Torres Morán, who is not yet 2. She and Torres are both deaf, as are Ramón and Teófilo. The family uses the apps, and Morán can see Ramón signing things he has picked up directly from them.
“We’re modeling it for him, but he’s also following the app,” Morán, who lives in the District, said through an interpreter. “It’s definitely a reinforcer. So there’s one character that does a lot of pointing. He points the same way. We do as well. So he duplicates what the character is doing in the app.”
Morán said she finds the apps are easy to use and help increase access to sign language. “It’s easier for us to expose them to some of these concepts earlier,” she said. “And we’re not so dependent on a school. It’s something we can do on our own time, and really reinforce the learning that they’re getting at school.”
Austin Carrigg, another D.C. mother, recalled telling her daughter Melanie's ASL teacher about the time Melanie encountered someone in an elevator who signed to her. Melanie's eyes grew wide, Carrigg said, and she realized that her daughter believed only a few people in her world could sign.
The storybook apps help the 4-year-old, who is deaf, and the mother, who is not, understand the emotions behind the language.
Carrigg said her daughter is now “very, very stingy” with the iPad. Melanie carries it with her constantly, Carrigg said, often with storybook apps open.
“We need more support for things like this,” Carrigg said. “If they could get funding to do hundreds of these, think about the way it can change kids’ lives.”