IoT security: will today’s methods work?
Researchers from the University of Michigan and Stony Brook University have published a paper calling for new security mechanisms for the internet of things. The authors say that some security challenges presented by the IoT can be solved using traditional approaches, and some require new paradigms.
The researchers analyze security at four different layers of an embedded system: the hardware layer, the system software layer, the network layer and the application layer.
For the most part, hardware security measures can transfer from the IT world to the IoT world, according to the researchers, but implementing them may be harder in an IoT environment. The researchers expect “hardware security research results developed in the context of desktop, mobile and cloud systems to transfer in some form to IoT systems. … We envision that many challenges will arise in applying these notions of hardware security to IoT systems due to their limited computational and energy constraints.”
The systems software layer in IoT systems includes firmware, operating system software, privileged system applications and programming frameworks. The paper points out that delivering security updates can be more difficult for IoT systems than for IT systems, because IoT software is often embedded in devices installed in hard-to-reach places.
In discussing the network layer, the researchers said that the diversity of connectivity protocols could pose security challenges, singling out the use of Wi-Fi routers as smart home hubs as a particular concern. On the positive side, anomaly detection may be easier for IoT devices than for IT devices.
“IoT devices are simple and have a single purpose,” the researchers wrote. “This can translate to simpler network dynamics, and hence easier to model behaviors ultimately leading to a lower number of errors in anomaly detectors. Recent work in the context of industrial control systems show promising results.”
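The point about simpler network dynamics can be illustrated with a minimal sketch (not taken from the paper): because a single-purpose device generates very regular traffic, even a crude statistical baseline can flag deviations. The device, traffic figures, and threshold below are all hypothetical.

```python
# Minimal anomaly-detection sketch for a single-purpose IoT device:
# learn a baseline of packets-per-minute from normal traffic, then
# flag any interval that deviates by more than k standard deviations.
import statistics

def build_baseline(samples):
    """samples: packets-per-minute counts observed during normal operation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(count, mean, stdev, k=3.0):
    """Flag a traffic sample more than k standard deviations from baseline."""
    return abs(count - mean) > k * stdev

# Hypothetical smart thermostat reporting roughly 10-14 packets per minute.
baseline = build_baseline([12, 11, 13, 12, 10, 12, 14, 11])
print(is_anomalous(12, *baseline))   # ordinary traffic -> False
print(is_anomalous(480, *baseline))  # sudden flood, e.g. botnet behavior -> True
```

A general-purpose laptop would need a far richer model; the narrow behavioral envelope of an IoT device is what makes a baseline this simple plausible.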
The application layer of IoT systems is perhaps the most vulnerable to security breaches. The researchers said deep learning algorithms are particularly at risk.
“Attackers can craft inputs that look indistinguishable from benign inputs to humans, but can be interpreted in a completely different way by machines,” the researchers wrote. “For example, tampered images that are fed into a vision algorithm running on an autonomous vehicle can make the vehicle believe a stop sign was a yield sign, causing a possible crash at an intersection.”
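The attack the researchers describe can be sketched on a toy model (this is an illustration in the spirit of the fast gradient sign method, not the specific technique from the paper; the weights and inputs are made up). For a linear classifier, shifting each input feature slightly in the direction of the corresponding weight's sign is enough to flip the predicted label:

```python
# Toy adversarial-input sketch: a small, targeted perturbation flips the
# decision of a linear classifier score(x) = w.x + b.
def score(w, x, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def perturb(w, x, epsilon):
    """Shift each feature by epsilon toward increasing the score
    (the sign-of-gradient step behind FGSM-style attacks)."""
    return [xi + epsilon * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.8, 0.3]   # hypothetical model weights
b = -0.05
x = [0.2, 0.3, 0.1]    # a benign input: score is negative

print(score(w, x, b))                      # -0.16 -> classified "benign"
x_adv = perturb(w, x, epsilon=0.2)
print(score(w, x_adv, b))                  # 0.16 -> classification flipped
```

Deep networks are high-dimensional versions of the same vulnerability: many tiny per-pixel nudges, each imperceptible on its own, add up to a large change in the model's output, which is how a tampered stop sign can be read as a yield sign.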