Abstract
Executing deep neural network inference on a server-class or cloud backend, using data generated at the edge of the Internet of Things, is desirable primarily because of the limited compute power of edge devices and the need to protect the confidentiality of the inference neural networks. However, such a remote inference scheme raises privacy concerns about the inference data that edge devices transmit to a curious backend. This article presents a lightweight and unobtrusive approach to obfuscating the inference data at the edge devices. It is lightweight in that the edge device only needs to execute a small-scale neural network; it is unobtrusive in that the edge device does not need to indicate whether obfuscation is applied. Extensive evaluation on three case studies of free-spoken digit recognition, handwritten digit recognition, and American Sign Language recognition shows that our approach effectively protects the confidentiality of the raw forms of the inference data while preserving the backend's inference accuracy.
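As a rough illustration of the scheme the abstract describes (not the paper's actual architecture), the following sketch shows an edge device passing raw inference data through a small, hypothetical obfuscation network before transmitting it to the backend; all layer sizes and weights here are invented for the example.

```python
import numpy as np

# Hypothetical small obfuscation network: one hidden layer with fixed,
# randomly initialized weights. In the actual approach, such a network
# would be trained so the backend's inference accuracy is preserved.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 32)) * 0.1
W2 = rng.standard_normal((32, 64)) * 0.1

def obfuscate(x):
    """Map a raw feature vector to an obfuscated vector of the same shape."""
    h = np.tanh(x @ W1)   # small hidden layer, executed on the edge device
    return h @ W2         # obfuscated representation sent to the backend

raw = rng.standard_normal(64)   # raw inference data (e.g., audio features)
sent = obfuscate(raw)           # what actually leaves the device

# The transmitted vector keeps the input shape but differs from the raw
# data, hiding its raw form from the backend.
print(sent.shape, np.allclose(raw, sent))
```

The key point conveyed by the sketch is that the edge-side computation is only a small network forward pass, and the transmitted data has the same shape as unobfuscated data, so the device never has to signal whether obfuscation was applied.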
Original language | English |
---|---|
Article number | 9046832 |
Pages (from-to) | 9540-9551 |
Number of pages | 12 |
Journal | IEEE Internet of Things Journal |
Volume | 7 |
Issue number | 10 |
DOIs | |
Publication status | Published - Oct 2020 |
Externally published | Yes |
Keywords
- Data obfuscation
- deep neural networks
- edge computing
- Internet of Things (IoT)
- privacy