Here’s how not to build consumer trust in the smart, connected devices collectively known as the Internet of Things.
A recent news report describes how the inventor of a smartphone-enabled garage door opener called Garadget, funded through the crowdsourcing platform Indiegogo, got annoyed when a customer experiencing trouble with the app posted an angry comment on the company’s community board and, later, a 1-star review on Amazon.
In response, the company’s founder simply disabled the product remotely, denying connection to the company’s server and effectively making the device useless. “At this time your only option is return Garadget to Amazon for refund,” the company wrote back tersely, in an exchange that has, of course, gone viral.
(The company updated the post today, noting that “firing” the customer was “admittedly not a slickest PR move on my part.”)
IoT companies will have to do better than that. After all, entrusting the safety and security of your garage door to a start-up is already a daunting proposition for any consumer, let alone entrusting it to one whose founder has as bad a temper as its angriest users.
It’s not the only recent example of cringe-worthy behavior making consumers anxious. Just last month, security researcher Troy Hunt documented the chilling story of CloudPets, a connected stuffed animal that allows children and their parents to record messages for each other over the Internet. Except that millions of messages passing between the toys and the company’s cloud-based servers weren’t even password protected, let alone encrypted. Hackers at one point stole the whole database, holding it for a ransom the company apparently never paid. (It had a backup.)
And even after it was discovered last year that hackers could easily hijack unsecured surveillance cameras and older model set-top boxes, there are still reports of such devices being taken over and used to launch attacks against popular Internet sites.
IoT data collection and use disasters aren’t limited to start-ups. While leading IoT developers including Amazon, Apple and Google, along with car manufacturers developing autonomous vehicles, are investing heavily in state-of-the-art cybersecurity, no one is invulnerable to lapses and breaches. SmartThings, Samsung’s home hub, acknowledged vulnerabilities in its software in 2015. Still, researchers found more problems a year later.
Among other benefits, the IoT may help an aging population stay in their homes much longer and improve energy use by devices, buildings and vehicles. But these very public incidents, as I wrote last year, are slowing growth for the nascent industry.
It’s bad enough when smart toys, thermostats and other gadgets are open to being hacked. When what’s at risk is your front door, your clothing, your mattress, your car and perhaps someday sensors in your body measuring blood pressure and other vital statistics, user health and safety can be very much at stake.
The good news is that there’s plenty of incentive for entrepreneurs to solve these embarrassing security gaps, as well as more challenging ones waiting in the wings. The IoT — from fitness trackers and smartwatches to connected refrigerators — already generates $200 billion in revenue. But what’s really attracting innovators and their investors is its future potential, which some estimate to be as much as $2 trillion within five years.
For now, industry groups and IoT security providers, such as Ayla Networks, are focused on closing the biggest cybersecurity holes. Late last year, a report from the influential Broadband Internet Technical Advisory Group (BITAG) at the University of Colorado catalogued dozens of examples of worst practices (or just plain incompetence) by IoT developers who, in their eagerness to get products to market, exposed user data or created vulnerabilities in devices already installed in homes.
BITAG’s recommendations were shockingly obvious, including delivering products with up-to-date software, taking advantage of easily accessible encryption tools, and, you know, actually testing products before shipping them.
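The “easily accessible encryption tools” BITAG has in mind are largely built into modern runtimes. As a minimal sketch (the function name and connection details are illustrative, not any vendor’s code), here is what a device client that insists on an encrypted, certificate-verified channel looks like in Python:

```python
import socket
import ssl


def open_secure_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection that verifies the server's certificate.

    ssl.create_default_context() turns on certificate validation and
    hostname checking by default; if the handshake fails, the device
    should fail closed rather than fall back to plaintext.
    """
    context = ssl.create_default_context()
    # Refuse legacy protocol versions with known weaknesses.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

The point is that the secure path is now the default path: shipping a device that deliberately disables certificate checks is exactly the kind of worst practice the report catalogues.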
Beyond these and other technical fixes, IoT product designers could greatly reduce the incidence and fallout from future security breaches by adopting two decidedly low-tech solutions: minimizing the amount of data they collect and retaining that information only as long as necessary:
Minimization — The best way to protect customer information from unwanted and unintended disclosure is not to collect it in the first place. IoT devices should record only the data they need and, where possible, do so on an anonymized basis, so that data stored in the cloud is not tied to specific customer identifiers. For most uses, specific identification isn’t needed, but engineers rarely think through either the implications of collecting unneeded data or how product design objectives could be met without identifiers.
Retention — If personally identifiable data is collected, the best way to ensure it doesn’t leak out by accident or criminal intervention is to get rid of it as soon as it’s no longer needed. But again, product designers aren’t likely to spend time thinking through the deletion of data at all, let alone adopt aggressive retention policies. The same goes for third-party cloud providers that host applications and data on behalf of clients. Many charge by volume, leaving them little incentive to encourage IoT providers to practice good data hygiene.
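Both rules can be enforced mechanically at the point where data enters the system. A minimal sketch, assuming a hypothetical telemetry record and an illustrative 30-day retention window (every name here is invented for the example, not any vendor’s API):

```python
import hashlib
import time

RETENTION_SECONDS = 30 * 24 * 3600  # illustrative 30-day retention window


def minimize(event: dict, salt: bytes) -> dict:
    """Keep only the fields the product needs, and replace the raw
    device identifier with a salted one-way hash (pseudonymization)."""
    pseudo_id = hashlib.sha256(salt + event["device_id"].encode()).hexdigest()
    return {
        "device": pseudo_id,          # not linkable without the salt
        "state": event["state"],      # e.g. door open / closed
        "recorded_at": event["recorded_at"],
    }


def purge_expired(records: list[dict], now: float) -> list[dict]:
    """Drop anything older than the retention window."""
    cutoff = now - RETENTION_SECONDS
    return [r for r in records if r["recorded_at"] >= cutoff]
```

Run routinely (say, as a scheduled job), a purge like this means a stolen database contains at most a month of pseudonymized readings rather than years of identifiable history.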
These are design rules few IoT companies, large or small, seem to be following. Nor are they required to do so. Today, with a few specialized exceptions, there are no specific U.S. laws obligating companies to follow the best practices recommended by BITAG and others.
That may soon change. The Federal Trade Commission, the principal enforcer in the United States of “unfair or deceptive acts” by companies, is growing increasingly worried about consumer safety in the IoT, urged on by Congress to take more aggressive action.
Until now, the FTC has generally sued only in cases where a company violated its own data collection and security promises to customers. Failing to practice what you preach, in other words, can be considered “deceptive.”
But the commission is now expanding its view of what it considers “unfair” as well. In 2015, for example, Wyndham Hotels settled charges brought by the agency in a controversial case that involved breaches of its computer security that exposed customer credit card information to hackers. The agency didn’t allege specific breaches of promises made by Wyndham, suggesting that repeated break-ins by hackers may be proof enough. (The agency has yet to say just what standards it considers adequate, leaving companies to guess.)
Notably, the agency is now suing some IoT companies involved in security breaches, including makers of connected cameras and network routers. Earlier this year, the agency sued a Taiwanese company and its U.S. subsidiary, claiming that its poor security practices could easily expose customer information. The complaint, however, didn’t allege that any actual breach had occurred, another first.
As the IoT becomes a bigger part of the Internet economy and poor data management practices become increasingly obvious, pressure will build for specific laws and regulations that could slow the deployment of IoT devices even more. As companies hire more lawyers to help them comply, new products may become more expensive as well — a particular problem for start-ups.
So, as with all disruptive innovations, the better the nascent IoT industry can do at aggressive self-policing and self-regulation, the less pressure there will be for lawmakers to step in. The BITAG report’s recommendations are a good start.
Even better: Don’t disable a user’s garage door opener just because you don’t like their attitude.