Security Analysis of Emerging Smart Home Applications
An analysis focused on the security design of IoT platforms; key findings include overprivilege and insufficient event protection
Summary

We performed the first in-depth empirical security analysis of a popular emerging smart home programming platform, Samsung SmartThings. We evaluated the platform's security design, and coupled that with an analysis of 499 SmartThings apps (also called SmartApps) and 132 device handlers using static code analysis tools that we built.

FAQ

What are your key findings?
Our key findings are twofold. First, although SmartThings implements a privilege separation model, we found that SmartApps can be overprivileged. That is, SmartApps can gain access to more operations on devices than their functionality requires. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock pincodes.
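
To make the second finding concrete, here is a toy Python sketch (not SmartThings code; the event names and fields are made up for illustration) of an event bus that hands every event payload to every subscribed app, which is the kind of behavior that lets a subscriber learn a lock code it has no business seeing:

class EventBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        # Every subscriber receives the full payload; nothing restricts
        # which apps may see sensitive fields such as lock codes.
        for handler in self.subscribers:
            handler(event)

bus = EventBus()
bus.subscribe(lambda e: print("battery-manager app saw:", e))  # needs only battery events

# A lock reporting a newly programmed code exposes it to every subscriber.
bus.publish({"device": "front_door_lock", "name": "codeReport", "value": "5940"})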

Why SmartThings?
Recently, several competing smart home programming frameworks that support third-party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. We analyzed Samsung-owned SmartThings because it has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks.

Can you explain overprivilege, and what you found specifically for SmartThings?
Overprivilege is a security design flaw wherein an app gains access to more operations on protected resources than it requires to complete its claimed functionality. For instance, a battery manager app only needs access to read battery levels of devices. However, if this app can also issue operations to control the on/off status of those devices, that would be overprivilege. We found two forms of overprivilege in SmartThings. First, coarse-grained capabilities result in over 55% of existing SmartApps being overprivileged. Second, coarse SmartApp-SmartDevice binding leads to SmartApps gaining access to operations they did not explicitly ask for. Our analysis reveals that 42% of existing SmartApps are overprivileged in this way.
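
The sketch below illustrates, in simplified form, the kind of check our static analysis performs: compare the commands an app is granted through its requested capabilities against the commands it actually uses. The capability-to-command map and the example app are simplified assumptions, not the real SmartThings capability model or our tool (which analyzes Groovy SmartApp source):

    # Illustrative overprivilege check; the capability map below is an assumption.
    CAPABILITY_COMMANDS = {
        "capability.battery": set(),                    # read-only attribute, no commands
        "capability.switch": {"on", "off"},
        "capability.lock": {"lock", "unlock", "setCode"},
    }

    def granted_commands(requested_capabilities):
        """All commands an app may issue, given the capabilities it requests."""
        granted = set()
        for cap in requested_capabilities:
            granted |= CAPABILITY_COMMANDS.get(cap, set())
        return granted

    def overprivilege(requested_capabilities, used_commands):
        """Commands the app is granted but never uses anywhere in its code."""
        return granted_commands(requested_capabilities) - set(used_commands)

    # Hypothetical battery manager that also requests capability.switch:
    print(overprivilege(
        requested_capabilities=["capability.battery", "capability.switch"],
        used_commands=[],                               # it only reads battery levels
    ))                                                  # -> {'on', 'off'}: overprivileged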

How can attackers exploit these design flaws?
We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes; (2) stole existing door lock codes; (3) disabled vacation mode of the home; and (4) induced a fake fire alarm. Details on how these attacks work are in our research paper linked below.

Code & Tools

We have made three programming resources available on GitHub:

  • Static analysis tool that computes overprivilege in SmartApps.
  • Python script that automatically creates skeleton device handlers inside the SmartThings IDE.
  • Capability documentation that we used in our analysis.

Research Paper

    Our paper appeared at IEEE S&P 2016 ("Oakland") and received the Distinguished Practical Paper Award.

    When referring to our work, please cite it as:

    Earlence Fernandes, Jaeyeon Jung, and Atul Prakash
    Security Analysis of Emerging Smart Home Applications
    In Proceedings of the 37th IEEE Symposium on Security and Privacy, May 2016
                                    
    Or, use BibTeX for citation:
    @InProceedings{smartthings16,
        author = {Earlence Fernandes and Jaeyeon Jung and Atul Prakash},
        title = {{S}ecurity {A}nalysis of {E}merging {S}mart {H}ome {A}pplications},
        booktitle = {Proceedings of the 37th {IEEE} Symposium on Security and Privacy},
        month = {May},
        year = 2016
    }
                                   

    Attack Demos

    Pincode Snooping

    Backdoor Pincode Injection

    Disabling Vacation Mode

    Fake Fire Alarm

    Media Coverage
    Wired, Schneier on Security, The Verge, Gizmodo, Ars Technica, CNET, Mashable, Detroit Free Press, ZDNet, Yahoo News, Tech Times, Reddit, NDTV, SC Magazine, TechHive, WorldTechToday, Popular Mechanics, GearBrain, Phys.org, 9to5google.com, NetworkWorld, mobilesyrup, myce, BestTheNews, Android Headlines, CityNewsLine, NewsAbout.com, Top Tech News, News Factor, SANS ISC InfoSec, Sammobile, The Inquirer, Live Smart, Mobile Scout, Michigan Engineering, Digital Trends, TechDirt, TecHomeBuilder, ABCNews, Business Insider, E&T, Neowin, Business Standard, Security Sales, eWeek, Softpedia, HotHardware, TechSpot, Morning News USA, Digital Spy, Betanews, IoTHub, hiddenwires, The Stack, Tech News World, Security Week, International Business Times, The Register, SANS Institute, Tech Republic

    Radio Coverage
    WWJ Newsradio 950, Hacked! The Charles Tendell Show (Live)

    Here is an article for "The Conversation" that explains our research findings to the general reader.

    Vendor Statement

    Alex Hawkinson, Founder and CEO of SmartThings

    Acknowledgements

    University of Michigan
    Microsoft Research
    National Science Foundation

    Robust Physical-World Attacks on Deep Learning Visual Classification
    Can real physical objects be manipulated in ways that cause DNN-based classifiers to misclassify them?
    Summary

    Although deep neural networks (DNNs) perform well in a variety of applications, they are vulnerable to adversarial examples resulting from small-magnitude perturbations added to the input data. Inputs modified in this way can be mislabeled as a target class in targeted attacks or as a random class different from the ground truth in untargeted attacks. However, recent studies have demonstrated that such adversarial examples have limited effectiveness in the physical world due to changing physical conditions—they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper, we propose a general attack algorithm—Robust Physical Perturbations (RP2)— that takes into account the numerous physical conditions and produces robust adversarial perturbations. Using a real-world example of road sign recognition, we show that adversarial examples generated using RP2 achieve high attack success rates in the physical world under a variety of conditions, including different viewpoints. Furthermore, to the best of our knowledge, there is currently no standardized way to evaluate physical adversarial perturbations. Therefore, we propose a two-stage evaluation methodology and tailor it to the road sign recognition use case. Our methodology captures a range of diverse physical conditions, including those encountered when images are captured from moving vehicles. We evaluate our physical attacks using this methodology and effectively fool two road sign classifiers. Using a perturbation in the shape of black and white stickers, we attack a real Stop sign, causing targeted misclassification in 100% of the images obtained in controlled lab settings and above 84% of the captured video frames obtained on a moving vehicle for one of the classifiers we attack.
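
    For readers interested in the structure of the optimization, the sketch below shows, in simplified Python, the kind of masked, expectation-over-conditions objective that RP2 minimizes. The helper functions are stand-ins (not our released code), and the full formulation in the paper includes additional terms, for example to account for physical reproducibility of colors:

    import numpy as np

    def rp2_objective(delta, mask, victim_images, transforms, classifier_loss,
                      target_label, lam=0.01):
        """Simplified RP2-style loss for a masked perturbation `delta`.

        mask:          0/1 array restricting the perturbation to sticker regions
        victim_images: images of the sign under varying physical conditions
        transforms:    per-image functions aligning the masked perturbation onto the sign
        """
        masked = mask * delta
        # Keep the perturbation small/unobtrusive (an Lp norm in the paper; L2 here).
        regularizer = lam * np.linalg.norm(masked)
        # Average the targeted-misclassification loss over sampled physical
        # conditions, approximating an expectation over viewpoints, distances, etc.
        attack_loss = np.mean([
            classifier_loss(x + t(masked), target_label)
            for x, t in zip(victim_images, transforms)
        ])
        return regularizer + attack_loss

    # Minimizing this objective (e.g., by gradient descent in an autodiff framework)
    # yields a perturbation that stays adversarial across the sampled conditions.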

    FAQ

    Did you attack a real self-driving car?
    No.

    Okay, what did you attack?
    We attacked a deep neural network-based classifier for U.S. road signs. A classifier is a neural network (in the context of our work) that interprets road signs. A car would potentially use a camera to take pictures of road signs, crop them, and then feed them into a road sign classifier. We did not attack object detectors -- a different type of machine learning model that analyzes an image of the entire scene and detects the signs and their labels without cropping. Object detection is a very different machine learning problem and presents different challenges for attackers.

    To the best of our knowledge, there is currently no publicly available classifier for U.S. road signs. Therefore, we trained a network on the LISA dataset, a U.S. sign dataset comprising different road signs like Stop, Speed Limit, Yield, Right Turn, Left Turn, etc. This model consists of three convolutional layers followed by a fully connected layer and was originally developed as part of the Cleverhans library. Our final classifier accuracy was 91% on the test dataset.
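
    For illustration, a classifier with the same overall shape (three convolutional layers followed by a fully connected layer) can be sketched as follows. This is written in PyTorch purely for illustration; the filter counts, kernel sizes, input resolution, and number of classes are assumptions and do not reproduce the exact network we trained:

    import torch
    import torch.nn as nn

    class RoadSignClassifier(nn.Module):
        """Three conv layers + one fully connected layer; hyperparameters are assumed."""
        def __init__(self, num_classes=17):            # assumed class count
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.classifier = nn.Linear(128 * 4 * 4, num_classes)

        def forward(self, x):                           # x: cropped signs, e.g. (N, 3, 32, 32)
            return self.classifier(self.features(x).flatten(1))

    logits = RoadSignClassifier()(torch.randn(1, 3, 32, 32))
    print(logits.shape)                                 # torch.Size([1, 17])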

    What are your findings?
    We show that it is possible to construct physical modifications to road signs, in ways that cause the trained classifier (discussed above) to misinterpret the meaning of the signs. For example, we were able to trick the classifier into interpreting a Stop sign as a Speed Limit 45 sign, and a Turn Right sign as either a Stop or Added Lane sign. Our physical modifications for a real Stop sign are a set of black and white stickers. See the resources section below for examples.

    What resources does an attacker need?
    An attacker needs a color printer for sticker attacks, and a poster printer for poster-printing attacks. The attacker would also need a camera to take an image of the sign they wish to attack.

    Who is a casual observer and why do these modifications to road signs not raise suspicion?
    A casual observer is anyone in the street or in vehicles. Our algorithm produces perturbations that look like graffiti. As graffiti is commonly seen on road signs, it is unlikely that casual observers would suspect that anything is amiss.

    Based on this work, are current self-driving cars at risk?
    No. We did not attack a real self-driving car. However, our work does highlight potential issues that future self-driving car algorithms might have to address. A more complete attack on a self-driving car would have to target the entire control pipeline, which includes many more steps in addition to classification. One such part of the pipeline, which is out of scope for our work, is object detection, that is, identifying the region of an image taken by a car camera where some type of road sign is to be found. We focus our efforts on attacking classifiers using physical object modifications, because classifiers are commonly studied in adversarial example research. Although it is unlikely that our attacks on classifiers would work on detectors “out of the box,” it is quite possible that future work will find robust attacks on object detectors, in a similar vein to our work on attacking classifiers.

    Should I stop using the autonomous features (parking, freeway driving) of my car? Or is there any immediate concern?
    We again stress that our attack was crafted for the trained neural network discussed above. As it stands today, this attack would most likely not work as-is on existing self-driving cars.

    By revealing this vulnerability, aren't you helping potential hackers?
    No, on the contrary, we are helping manufacturers and users to address potential problems before hackers can take advantage. As computer security researchers, we are interested in identifying the security risks of emerging technologies, with the goal of helping improve the security of future versions of those technologies.

    The security research community has found that evaluating the security risks of a new developing technology makes it much easier to confront and address security problems before adversarial pressure manifests. One example has been the modern automobile and another, the modern smart home. In both cases, there is progress toward better security. We hope that our results start a fruitful conversation on securing cyber-physical systems that use neural nets for making important control decisions.

    Are you doing demos or interviews?
    As our work is in progress, we are currently focused on improving and fine-tuning the scientific techniques behind our initial results. We created this FAQ in response to the unanticipated media interest and to answer questions that have arisen in the meantime. In the future, we may upload video demonstrations of the attack, and may accept interview invitations. For the time being, we have uploaded our experimental attack images on this website.

    Whom should we contact if we have more questions?
    We are a team of researchers at various institutions. Please see below for a list of team members and institutions involved in the project. In order to streamline communication, we have created an alias that reaches all team members. We strongly recommend that you contact roadsigns@umich.edu if you have further questions.

    Example Drive-By Test Video

    Abstract Art Attack on LISA-CNN

    The left-hand side is a video of a perturbed Stop sign, the right-hand side is a video of a clean Stop sign. The classifier (LISA-CNN) detects the perturbed sign as Speed Limit 45 until the car is very close to the sign. At that point, it is too late for the car to reliably stop. The subtitles show the LISA-CNN classifier output.

    Subtle Poster Attack on LISA-CNN

    The left-hand side is a video of a true-sized Stop sign printout (poster paper) with perturbations covering the entire surface area of the sign. The classifier (LISA-CNN) detects this perturbed sign as a Speed Limit 45 sign in all tested frames. The right-hand side is the baseline (a clean poster-printed Stop sign). The subtitles show LISA-CNN output.

    Code & Tools

    We have made a sampling of our experimental attack images available as a zip file (around 25 MB). Click here to download. A Google Drive link to the datasets we used in our attacks (the validation set for the coffee mug attack, the victim set for the coffee mug attack, U.S. Stop signs for validation, etc.) is also available.

    Research Paper

    When referring to our work, please cite it as:

    Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
    Robust Physical-World Attacks on Deep Learning Visual Classification
    Computer Vision and Pattern Recognition (CVPR 2018) (supersedes arXiv preprint 1707.08945, August 2017)   
                                        
    Or, use BibTeX for citation:
    @InProceedings{roadsigns17,
       author = {Kevin Eykholt and Ivan Evtimov and Earlence Fernandes and Bo Li and Amir Rahmati and Chaowei Xiao 
        and Atul Prakash and Tadayoshi Kohno and Dawn Song},
       title = {{Robust Physical-World Attacks on Deep Learning Visual Classification}},
       booktitle = {Computer Vision and Pattern Recognition (CVPR)},
       month = {June},
       year = 2018
    }                                       

    Acknowledgements

    University of Michigan
    University of Washington
    University of California, Berkeley
    Stony Brook University
    National Science Foundation

    Object Detector Attacks: Physical Adversarial Examples for Object Detectors
    Physical Adversarial Examples for state-of-the-art object detectors
    Summary

    Deep neural networks (DNNs) have enabled great progress in a variety of application areas, including image processing, text analysis, and speech recognition. DNNs are being incorporated as an important component in many cyber-physical systems. For instance, the vision system of a self-driving car can take advantage of DNNs to better recognize pedestrians, vehicles, and road signs. However, recent research has shown that DNNs are vulnerable to adversarial examples: Adding carefully crafted adversarial perturbations to the inputs can mislead the target classifier into mislabeling them during run time. Such adversarial examples raise security and safety concerns when applying DNNs in the real world. For example, adversarially perturbed inputs could mislead the perceptual systems of an autonomous vehicle into misclassifying street signs, with potentially catastrophic consequences. To better understand these vulnerabilities, there has been extensive research on how adversarial examples may affect DNNs deployed in the physical world.

    Our recent work, "Robust Physical-World Attacks on Deep Learning Visual Classification," demonstrated physical attacks on classifiers. As the next logical step, we show attacks on object detectors. These computer vision algorithms identify relevant objects in a scene and predict bounding boxes indicating each object’s position and class. Compared with classifiers, detectors are more challenging to fool, as they process the entire image and can use contextual information (e.g., the orientation and position of the target object in the scene) in their predictions.

    We demonstrate physical adversarial examples against the YOLO detector, a popular state-of-the-art algorithm with good real-time performance. Our examples take the form of sticker perturbations that we apply to a real STOP sign. The following image shows our example physical adversarial perturbation.

    We also perform dynamic tests by recording a video to evaluate detection performance. As can be seen in the video, the YOLO network fails to perceive the STOP sign in almost all of the frames. If a real autonomous vehicle were driving down the road past such an adversarial STOP sign, it would not see the STOP sign, possibly leading to a crash at an intersection. The perturbation we created is robust to changing distances and angles -- the most commonly changing factors in a self-driving scenario.

    More interestingly, the physical adversarial examples generated for the YOLO detector are also able to fool standard Faster-RCNN. The video contains a dynamic test of the physical adversarial example on Faster-RCNN. As this is a black-box attack on Faster-RCNN, the attack is not as successful as in the YOLO case. This is expected behavior. We believe that with additional techniques (such as ensemble training), the black-box attack could be made more effective. Additionally, specifically optimizing an attack for Faster-RCNN would likely yield better results. We are currently working on a paper that explores these attacks in more detail. The image below is an example of Faster-RCNN not perceiving the Stop sign.

    In both cases (YOLO and Faster-RCNN), a stop sign is detected only when the camera is very close to the sign (about 3 to 4 feet away). In real settings, this distance is too close for a vehicle to take effective corrective action. Stay tuned for our upcoming paper that contains more details about the algorithm and results of physical perturbations against state-of-the-art object detectors.

    Attack Demos

    Physical Adversarial Sticker Perturbations for YOLO

    Physical Adversarial Examples for YOLO (2)

    Black box transfer to Faster RCNN of physical adversarial examples generated for YOLO

    Short Note on arXiv
    Acknowledgements

    University of Michigan
    University of Washington
    University of California, Berkeley
    Stanford University
    Stony Brook University
    National Science Foundation

    FlowFence: Practical Data Protection for Emerging IoT Application Frameworks
    An information flow control (IFC) system for IoT apps
    Summary

    Emerging IoT programming frameworks enable building apps that compute on sensitive data produced by smart homes and wearables. However, these frameworks only support permission-based access control on sensitive data, which is ineffective at controlling how apps use data once they gain access. To address this limitation, we present FlowFence, a system that requires consumers of sensitive data to declare their intended dataflow patterns, which it enforces with low overhead, while blocking all other undeclared flows. FlowFence achieves this by explicitly embedding data flows and the related control flows within app structure. Developers use FlowFence support to split their apps into two components: (1) A set of Quarantined Modules that operate on sensitive data in sandboxes, and (2) Code that does not operate on sensitive data but orchestrates execution by chaining Quarantined Modules together via taint-tracked opaque handles—references to data that can only be dereferenced inside sandboxes. We studied three existing IoT frameworks to derive key functionality goals for FlowFence, and we then ported three existing IoT apps. Securing these apps using FlowFence resulted in an average increase in size from 232 lines to 332 lines of source code. Performance results on ported apps indicate that FlowFence is practical: A face-recognition-based door-controller app incurred a 4.9% latency overhead to recognize a face and unlock a door.
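
    The toy Python sketch below illustrates the programming pattern: Quarantined Modules operate on raw sensitive data while orchestration code handles only taint-tracked opaque handles. FlowFence itself targets Android/Java and enforces this with real sandboxes, so the names and API here are purely illustrative:

    class OpaqueHandle:
        """Reference to sensitive data; the raw value is only visible inside a QM."""
        def __init__(self, value, taints):
            self._value = value                    # hidden from orchestration code
            self.taints = frozenset(taints)

    def run_quarantined(module, *handles):
        """Run a Quarantined Module in a 'sandbox': dereference handles, propagate taint."""
        taints = frozenset().union(*(h.taints for h in handles))
        result = module(*(h._value for h in handles))
        return OpaqueHandle(result, taints)

    # A Quarantined Module sees raw data...
    def recognize_face(camera_frame):
        return camera_frame == "owner_face"        # stand-in for real recognition

    # ...while orchestration code only ever sees opaque handles.
    frame_handle = OpaqueHandle("owner_face", taints={"camera"})
    match_handle = run_quarantined(recognize_face, frame_handle)
    print(match_handle.taints)                     # frozenset({'camera'}): taint propagated
    # Dereferencing match_handle outside a QM is what FlowFence forbids; a declared
    # flow (e.g., camera -> door lock) would be checked against policy at a sink.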

    Code

    We accept pull requests!

    Research Paper

    When referring to our work, please cite it as:

    Earlence Fernandes, Justin Paupore, Amir Rahmati, Daniel Simionato, Mauro Conti, and Atul Prakash 
    FlowFence: Practical Data Protection for Emerging IoT Application Frameworks
    In Proceedings of the 25th USENIX Security Symposium, August 2016   
                                    
    Or, use BibTeX for citation:
    @InProceedings{flowfence16,
        author = {Earlence Fernandes and Justin Paupore and Amir Rahmati and Daniel Simionato and Mauro Conti and Atul Prakash},
        title = {{F}low{F}ence: {P}ractical {D}ata {P}rotection for {E}merging {I}o{T} {A}pplication {F}rameworks},
        booktitle = {Proceedings of the 25th {USENIX} Security Symposium},
        month = {August},
        year = 2016
    } 
                                       

    Acknowledgements

    University of Michigan
    University of Padua
    National Science Foundation

    Decentralized Action Integrity for Trigger-Action IoT Platforms
    Clean-slate design for trigger-action platforms to support decentralized action integrity
    Summary

    Trigger-Action platforms are web-based systems that enable users to create automation rules by stitching together online services representing digital and physical resources using OAuth tokens. Unfortunately, these platforms introduce a long-range, large-scale security risk: If they are compromised, an attacker can misuse the OAuth tokens belonging to a large number of users to arbitrarily manipulate their devices and data. We introduce Decentralized Action Integrity, a security principle that prevents an untrusted trigger-action platform from misusing compromised OAuth tokens in ways that are inconsistent with any given user’s set of trigger-action rules. We present the design and evaluation of Decentralized Trigger-Action Platform (DTAP), a trigger-action platform that implements this principle by overcoming practical challenges. DTAP splits currently monolithic platform designs into an untrusted cloud service and a set of user clients (each user only trusts their client). Our design introduces the concept of Transfer Tokens (XTokens) to practically use fine-grained, rule-specific tokens without increasing the number of OAuth permission prompts compared to current platforms. Our evaluation indicates that DTAP poses negligible overhead: it adds less than 15ms of latency to rule execution time, and reduces throughput by 2.5%.
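
    The toy sketch below illustrates the core idea behind Decentralized Action Integrity: the trusted per-user client exchanges a single broadly scoped transfer token (XToken) for narrow, rule-specific tokens, and only those narrow tokens ever reach the untrusted cloud, so a compromised cloud cannot invoke actions outside the user's declared rules. The token format and names are illustrative, not DTAP's actual protocol:

    import hashlib, hmac, json

    SERVICE_SECRET = b"online-service-signing-key"          # held by the action service

    def issue_rule_token(xtoken, rule):
        """Online service: given a valid XToken, mint a token bound to one rule."""
        assert xtoken == "xtoken-from-a-single-oauth-prompt"
        claims = json.dumps({"action": rule["action"], "device": rule["device"]},
                            sort_keys=True)
        mac = hmac.new(SERVICE_SECRET, claims.encode(), hashlib.sha256).hexdigest()
        return {"claims": claims, "mac": mac}

    def execute(rule_token, requested_action, device):
        """Online service: honor a request only if it matches the token's bound rule."""
        expected = hmac.new(SERVICE_SECRET, rule_token["claims"].encode(),
                            hashlib.sha256).hexdigest()
        claims = json.loads(rule_token["claims"])
        return (hmac.compare_digest(expected, rule_token["mac"])
                and claims == {"action": requested_action, "device": device})

    # Trusted client mints a token for one rule and hands only that to the cloud.
    rule = {"action": "turn_on", "device": "porch_light"}
    cloud_held_token = issue_rule_token("xtoken-from-a-single-oauth-prompt", rule)

    print(execute(cloud_held_token, "turn_on", "porch_light"))   # True: declared rule
    print(execute(cloud_held_token, "unlock", "front_door"))     # False: misuse blocked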

    Research Paper

    When referring to our work, please cite it as:

    Earlence Fernandes, Amir Rahmati, Jaeyeon Jung, and Atul Prakash
    Decentralized Action Integrity for Trigger-Action IoT Platforms 
    22nd Network and Distributed System Security Symposium (NDSS 2018), San Diego, CA, February 2018
                                    
    Or, use BibTeX for citation:
    @InProceedings{dtap18,
       author = {Earlence Fernandes and Amir Rahmati and Jaeyeon Jung and Atul Prakash},
       title = {{Decentralized Action Integrity for Trigger-Action IoT Platforms}},
       booktitle = {22nd Network and Distributed System Security Symposium (NDSS 2018)},
       month = {Feb},
       year = 2018
    }                                

    Acknowledgements

    University of Michigan
    National Science Foundation

    ContexIoT: Towards Providing Contextual Integrity to Appified IoT Platforms
    A system that provides contextual permission prompts in SmartThings apps
    Summary

    The Internet-of-Things (IoT) has quickly evolved to a new appified era where third-party developers can write apps for IoT platforms using programming frameworks. As on other appified platforms, e.g., the smartphone platform, the permission system plays an important role in platform security. However, design flaws in current IoT platform permission models have been reported recently, exposing users to significant harm such as break-ins and theft. To solve these problems, a new access control model is needed for both current and future IoT platforms. In this paper, we propose ContexIoT, a context-based permission system for appified IoT platforms that provides contextual integrity by supporting fine-grained context identification for sensitive actions, and runtime prompts with rich context information to help users perform effective access control. Context definition in ContexIoT is at the inter-procedure control and data flow levels, which we show to be more comprehensive than previous context-based permission systems for the smartphone platform. ContexIoT is designed to be backward compatible and thus can be directly adopted by current IoT platforms. We prototype ContexIoT on the Samsung SmartThings platform, with an automatic app patching mechanism developed to support unmodified commodity SmartThings apps. To evaluate the system’s effectiveness, we perform the first extensive study of possible attacks on appified IoT platforms by reproducing reported IoT attacks and constructing new IoT attacks based on smartphone malware classes. We categorize these attacks based on lifecycle and adversary techniques, and build the first taxonomized IoT attack app dataset. Evaluating ContexIoT on this dataset, we find that it can effectively distinguish the attack context for all the tested apps. The performance evaluation on 283 commodity IoT apps shows that the app patching adds nearly negligible delay to the event triggering latency, and the permission request frequency is far below the threshold that is considered to risk user habituation or annoyance.
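
    As an illustration of the kind of guard the app patching inserts, the toy Python sketch below wraps a sensitive action with a runtime prompt that carries context: the call chain that led to the action and the data flowing into it. The API shown is a simplified model, not ContexIoT's implementation (which patches Groovy SmartApps):

    import functools, inspect

    user_decisions = {}              # remembered allow/deny decision per distinct context

    def contextual_permission(action):
        @functools.wraps(action)
        def patched(*args, **kwargs):
            # Context: the call chain leading to the sensitive action plus its arguments.
            call_chain = tuple(frame.function for frame in inspect.stack()[1:4])
            context = (action.__name__, call_chain, repr(args), repr(kwargs))
            if context not in user_decisions:
                print(f"Prompt: allow {action.__name__} in context {context}? [y/n]")
                user_decisions[context] = True        # stand-in for the user's answer
            return action(*args, **kwargs) if user_decisions[context] else None
        return patched

    @contextual_permission
    def unlock_door(lock_id):
        print(f"unlocking {lock_id}")

    def on_owner_arrives_home():
        unlock_door("front_door")    # a benign context: triggered by a presence event

    on_owner_arrives_home()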

    Code for Attacks

    Research Paper

    When referring to our work, please cite it as:

    Yunhan Jack Jia, Qi Alfred Chen, Shiqi Wang, Amir Rahmati, Earlence Fernandes, Z. Morley Mao, and Atul Prakash 
    ContexIoT: Towards Providing Contextual Integrity to Appified IoT Platforms
    21st Network and Distributed System Security Symposium (NDSS 2017), February 2017
                                        
    Or, use BibTeX for citation:
    @InProceedings{contexiot17,
        author = {Yunhan Jack Jia and Qi Alfred Chen and Shiqi Wang and Amir Rahmati and Earlence Fernandes and Z. Morley Mao 
            and Atul Prakash},
        title = {{ContexIoT: Towards Providing Contextual Integrity to Appified IoT Platforms}},
        booktitle = {21st Network and Distributed System Security Symposium},
        month = {February},
        year = 2017
    }
                                           

    Acknowledgements

    University of Michigan
    National Science Foundation
    Office of Naval Research

    Heimdall: A Privacy-Respecting Implicit Preference Collection Framework
    A system that enables privacy-respecting collection of recommendation data from the phone and the built environment
    Summary

    Many of the everyday decisions a user makes rely on the suggestions of online recommendation systems. These systems amass implicit (e.g., location, purchase history, browsing history) and explicit (e.g., reviews, ratings) feedback from multiple users, produce a general consensus, and provide suggestions based on that consensus. However, due to privacy concerns, users are uncomfortable with implicit data collection, thus requiring recommendation systems to be overly dependent on explicit feedback. Unfortunately, users do not frequently provide explicit feedback. This hampers the ability of recommendation systems to provide high-quality suggestions. We introduce Heimdall, the first privacy-respecting implicit preference collection framework that enables recommendation systems to extract user preferences from their activities in a privacy-respecting manner. The key insight is to enable recommendation systems to run a collector on a user’s device and precisely control the information a collector transmits to the recommendation system back-end. Heimdall introduces immutable blobs as a mechanism to guarantee this property. We implemented Heimdall for the smartphone and smart home environments and wrote three example collectors to enhance existing recommendation systems with implicit feedback. Our performance results suggest that the overhead of immutable blobs is minimal, and a user study of 166 participants indicates that privacy concerns are significantly less when collectors record only specific information—a property that Heimdall enables.
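
    The toy sketch below illustrates the property that immutable blobs provide: a collector running on the user's device can transmit only the specific fields declared up front, and cannot modify or extend them after creation. The interface is an illustration of the idea, not Heimdall's actual API:

    class ImmutableBlob:
        """Write-once record restricted to a declared set of fields."""
        def __init__(self, declared_fields, values):
            undeclared = set(values) - set(declared_fields)
            if undeclared:
                raise ValueError(f"collector tried to leak undeclared fields: {undeclared}")
            object.__setattr__(self, "_data", dict(values))

        def __setattr__(self, name, value):
            raise AttributeError("immutable blob: contents cannot be modified")

        def transmit(self):
            return dict(self._data)        # only declared, frozen fields leave the device

    # A coffee-shop collector may report a category and preference score, nothing more.
    DECLARED = {"venue_category", "preference_score"}
    blob = ImmutableBlob(DECLARED, {"venue_category": "coffee", "preference_score": 0.8})
    print(blob.transmit())

    # Both of the following would fail, by design:
    #   ImmutableBlob(DECLARED, {"gps_trace": "full location history"})  -> ValueError
    #   blob.preference_score = 1.0                                      -> AttributeError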

    Code

    Coming soon!

    Research Paper

    When referring to our work, please cite it as:

    Amir Rahmati, Earlence Fernandes, Kevin Eykholt, Xinheng Chen, and Atul Prakash 
    Heimdall: A Privacy-Respecting Implicit Preference Collection Framework 
    15th ACM International Conference on Mobile Systems, Applications, and Services (ACM MobiSys 2017), June 2017   
                                        
    Or, use BibTeX for citation:
    @InProceedings{heimdall17,
       author = {Amir Rahmati and Earlence Fernandes and Kevin Eykholt and Xinheng Chen and Atul Prakash},
       title = {{Heimdall: A Privacy-Respecting Implicit Preference Collection Framework}},
       booktitle = {15th ACM International Conference on Mobile Systems, Applications, and Services},
       month = {June},
       year = 2017
    }                                       

    Acknowledgements

    University of Michigan
    National Science Foundation