The camera systems gather round-the-clock data on the newborns and send alerts to parents when they detect crying, vomiting or signs of distress. The companies assure parents that the systems are a vital safeguard for babies’ health, claiming they can detect the signs of parents’ worst nightmares, including suffocation and sudden infant death syndrome. One camera brand, Cubo AI, which says it has sold roughly 10,000 devices, tells parents its system “saves babies’ lives.”
“Fear is the quickest way to get people’s attention,” Cubo’s chief strategy officer, Brian Lin, told The Washington Post.
But David King, a pediatrician working at Sheffield Children’s Hospital in the United Kingdom, said there is little evidence that such AI-powered baby monitors lower the risk of SIDS or suffocation.
Simpler, more established methods — such as laying babies on their backs with their feet at the end of the crib, and removing loose bedding — are already highly effective for those concerns without the need for extra machines. Conventional baby monitors, he added, can pick up the noises of crying and vomiting, if the parent so desires.
The new devices seem mostly aimed at “catering to parents’ anxieties,” he said. “Beyond what the manufacturers say, we don’t really know the answer to what parents get from buying them.”
Critics worry that the devices could lull parents into a false sense of security, with privacy trade-offs that could be profound: The camera systems gather intimate data on children’s first weeks of life, open homes to potential cyberattacks, and can subject parents to the simmering dread of relentless alerts and false alarms — delivered to cellphones wherever the parents might be.
“We have the technology to do this kind of constant surveillance and hyper-monitoring, and maybe some of these technologies will help or save one kid,” said Kim Brooks, the author of “Small Animals: Parenthood in the Age of Fear.” “But what we don’t talk about is the cost. It’s driving parents insane.”
But interest in the systems remains high. Craig Caruso, a father in Peekskill, N.Y., said the Cubo has been a huge hit in his family, allowing him and his wife to watch their 3-month-old son from afar, record video and talk to him. They get regular notifications on their phones of the baby’s crying and movement, and they credit the system with alerting them when their son pulled a blanket over his face.
As for privacy concerns, he’s unbothered. “Everyone’s stealing your data,” he said; at least this time, the trade-off gets him peace of mind.
“I’d rather put the baby’s safety over privacy,” he said of the facial-analysis software. “His face is going to change, anyway.”
There’s no public data on how many of the experimental devices have been sold. But more than a half-dozen established baby-monitor firms and private tech start-ups now advertise “AI-enabled” devices, and the companies claim tens of thousands of camera systems are now online.
The Web-connected devices include always-on cameras, microphones, thermometers, motion sensors and speakers, so viewers can talk to their baby from miles away. Internal computers use AI techniques such as “computer vision” to process the real-time sounds, sights and motion happening in and around a baby’s crib.
The systems build on a new wave of baby monitors that look for subtle clues in the newborn’s body. New York start-up Nanit’s $379 “complete monitoring system” uses an overhead camera and special swaddling blanket to track infant breathing and sleep patterns. The Utah-based company Owlet sells a $299 “smart sock” that tracks babies’ heart rate and oxygen levels.
But the AI-enabled devices go one step further by using facial-analysis software to send alerts when they sense a baby is crying or has covered their face. The systems are also designed to automatically record images, including shots of the babies smiling, which are then stored long-term on company-controlled servers.
With no industry leader, rival start-ups are competing to offer parents features they can’t get anywhere else. The Turkey-based start-up Invidyo, which claims roughly 5,000 active users, sells a $149 baby camera with a face-detection system that can send “stranger alerts” when an unknown person is spotted in an infant’s room.
GenkiCam, whose Taiwan-based developers said they intend to sell to U.S. buyers in the coming months, also advertises “vomiting detection” that can send alerts when a baby is seen spitting up. Camera systems that can detect when babies are smiling, crying or have their faces covered will sell for $149, the company said; vomiting detection will cost $50 more.
Developers said they trained the AI systems on infant behavior by building vast databases of babies’ cries and facial expressions. Some start-ups said they used video of babies taken from sites such as YouTube, while others captured original footage and refined their systems through parents’ real-world use.
GenkiCam’s developers, based at the Industrial Technology Research Institute in Taiwan, said they collected more than 500,000 baby images and hundreds of minutes of video from new mothers staying in postpartum care centers. More than 40 babies were involved in the systems’ testing, they said.
A promotional video for the camera system states the number of babies who die in their sleep every year and tells parents to “let GenkiCam take care of your baby.” GenkiCam’s lead creator, Chih-Tsung Shen, said in an interview, “We have a new solution to use the AI chip and help parents have a good life.”
But the software faces a number of technical challenges that some computer-vision researchers said could deeply undermine its performance. Few AI systems have been trained on small children because of a lack of available data and public unease. The researchers also point to a simpler problem: Babies’ faces are typically less distinct, with fewer of the wrinkles, scars and signs of aging that help set adult faces apart.
Demonstration videos provided by the companies to The Post revealed imperfect results: The systems, for instance, sometimes reported that a baby’s face was dangerously covered when they detected fingers in the mouth. Those inaccuracies could end up flagging harmless occurrences as emergencies or overlooking worrisome situations that parents might expect the systems to catch.
The underlying AI software is also less reliable than some companies advertise. Systems that purport to assess a person’s mood using their facial expressions have proved critically flawed, according to research published last year by a group of experts in machine learning, child emotion and neuroscience. Facial-recognition algorithms have also been found to show wide gaps in error rates depending on a person’s age, gender or skin color, a federal study in December revealed.
Lin, Cubo’s strategy chief, said the company had first trained its software with infant data recorded in local clinics in Asia. But shortly before delivering their first units, he said, the engineers worried the cameras would perform less accurately for babies with darker skin.
“We were scared about the color of the skin — the African American babies, the Indian babies, the Westerner babies who are more pale. We were like, ‘Oh shoot, we don’t have that data, what do we do? I need black babies, man,’” he said.
Their solution was to ship an early round of the bird-shaped camera systems to a test group of parents “not only for marketing but for data collection,” he said. “If you were a black mom influencer, we said, ‘Here’s one unit, go play with it.’” The accuracy, he said, quickly improved.
The fact that babies’ faces and cries are used to refine company software should be alarming to parents who want control over how their children’s data gets used, said Jamie Williams, a staff attorney at the Electronic Frontier Foundation, a digital-rights group.
Every company has different standards for privacy and data retention, and parents have no way of knowing what happens to their babies’ likenesses once they are recorded and saved to company-controlled servers.
The monitors could also deepen parents’ feeling that they need to check on their kids at all times. A survey of 1,000 new parents last year, funded by the baby-product company Summer, found that a third of the responding fathers and a quarter of the mothers said they checked their monitors every minute.
The systems also open up ethical questions about surveillance inside the home. Invidyo’s co-founder, Özgür Deniz Önür, said the camera systems’ facial-analysis and motion-detection features have made it a popular product among parents wanting to watch their babies’ caregivers.
“It’s become more like a nanny monitor than a baby monitor,” he said. Though some nannies — including the one who watches his young son — have said they’re not comfortable having their words and movements recorded, he argues the cameras’ ability to closely monitor baby behavior is worth the trade-off. “The nannies seem to have accepted the fact that there are cameras everywhere,” he added.
Company leaders say the technology is just the beginning of a new age of newborn surveillance. Future versions could hook parents on even more alluring upgrades: A patent filing by Google engineers published last year proposed a system that could use motion detection and “eye state analysis” to predict whether a baby is in a silent “discomfort state,” allowing parents to respond before the baby wakes up. (Google has said the patent filing may never become a real product.)
This style of technology could also follow babies beyond the crib. The electronics firm ViewSonic said last month that it was building a whiteboard-mounted “mood sensing” device that could monitor students and alert teachers as to how engaged a class may be. The company’s chief technology officer, Craig Scott, said in a statement that the system was still in early development but was being designed to “improve class performance.”
But this level of computer-aided surveillance, Brooks said, can also have a corrosive effect on parents’ sense of self-worth and state of mind. The devices, she said, send the message that parents have failed if they don’t watch their baby at every turn.
“We have this mind-set, this mentality, that when kids are involved, we don’t have to be rational. Any risk mitigation is worth the cost we have to pay,” Brooks said. But the system “undermines parents’ feelings of basic competence: that they can’t trust themselves to take care of their babies without a piece of $500 equipment.”