Date of Award

Summer 8-2020

Document Type


Degree Name

Doctor of Philosophy (PhD)


Electrical & Computer Engineering


Committee Director

Hongyi Wu

Committee Member

Chunsheng Xin

Committee Member

Cong Wang

Committee Member

Jiang Li


Mobile devices are becoming smarter to better satisfy modern users' increasing needs, which is achieved by equipping them with diverse sensors and integrating the most cutting-edge Deep Learning (DL) techniques. As a sophisticated system, the DL-based smartphone is often vulnerable to multiple attacks (side-channel attacks, neural backdoors, etc.). This dissertation proposes solutions to maintain the cyber-hygiene of the DL-based smartphone system by exploring possible vulnerabilities and developing countermeasures.

First, I actively explore possible vulnerabilities of the DL-based smartphone system in order to develop proactive defense mechanisms. I discover a new side-channel attack on smartphones that uses unrestricted magnetic sensor data. I demonstrate that, by training a deep Convolutional Neural Network (CNN), attackers can effectively infer the Apps being used on a smartphone with an accuracy of over 80%. Various signal processing strategies have been studied for feature extraction, including a tempogram-based scheme. Moreover, by further exploiting the unrestricted motion sensors to cluster magnetometer data, the sniffing accuracy can be increased to as high as 98%. To mitigate such attacks, I propose a noise injection scheme that effectively reduces the App-sniffing accuracy to only 15% and, at the same time, has a negligible effect on benign Apps.
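The noise-injection defense can be illustrated with a minimal sketch: zero-mean Gaussian noise is added to raw magnetometer readings before untrusted Apps can read them, degrading a sniffing classifier while leaving coarse sensor behavior intact. The function name, noise level `sigma`, and sampling layout below are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def inject_noise(mag_samples, sigma=5.0, rng=None):
    """Add zero-mean Gaussian noise (hypothetically in microtesla) to raw
    magnetometer readings before they reach untrusted Apps. `sigma` is a
    tuning knob: large enough to confuse an App-sniffing classifier,
    small enough to keep compass-style Apps usable."""
    rng = np.random.default_rng() if rng is None else rng
    return mag_samples + rng.normal(0.0, sigma, size=mag_samples.shape)

# Example: a 3-axis magnetometer trace, 100 samples (e.g. 1 s at 100 Hz)
trace = np.zeros((100, 3))
noisy = inject_noise(trace, sigma=5.0, rng=np.random.default_rng(0))
print(noisy.shape)  # (100, 3)
```

The key design point is that the perturbation is applied system-side, so every consumer of the unrestricted sensor sees the same noised stream.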

On the other hand, I leverage DL techniques to build reactive malware detection schemes. I propose an innovative approach, named CapJack, that detects in-browser malicious cryptocurrency mining activities using the latest Capsule Network (CapsNet) technology. To the best of our knowledge, this is the first work to introduce CapsNet to the field of malware detection through system-behavioral analysis. It is particularly useful for detecting malicious miners in multitasking environments where multiple applications run simultaneously.
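A distinctive ingredient of CapsNet is the "squash" nonlinearity, which maps a capsule's output vector to a length in [0, 1) so that the length can be read as the probability that an entity (here, a mining-workload signature) is present. The sketch below shows only that standard nonlinearity, not CapJack's full architecture or features.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """CapsNet squash: v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).
    Preserves the vector's orientation while compressing its length
    into [0, 1), so capsule length acts as an existence probability."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

v = squash(np.array([3.0, 4.0]))   # input length 5
print(np.linalg.norm(v))           # 25/26 ~= 0.9615
```

Because length saturates toward 1, a capsule tuned to mining behavior can stay confident even when other applications' activity perturbs the raw system counters, which is what makes the approach attractive under multitasking.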

Finally, as DL itself is vulnerable to model-based attacks, I proactively explore possible attacks against DL models. To this end, I discover a new clean-label attack, named Invisible Poison, which stealthily and aggressively plants a backdoor in neural networks (NNs). It converts a trigger into noise concealed inside regular images used for training the NN, planting a backdoor that can later be activated by the trigger. The attack has the following distinct properties. First, it is a black-box attack, requiring zero knowledge about the target NN model. Second, it employs "invisible poison" to achieve stealthiness: the trigger is disguised as "noise" that is invisible to humans but, at the same time, remains significant in the feature space and is thus highly effective for poisoning training data.
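To make the concealment idea concrete, here is a deliberately simplified additive-blend sketch: a trigger pattern is mixed into a clean training image at an amplitude low enough to pass as sensor noise. This is an illustrative assumption only; Invisible Poison itself transforms the trigger so that it stays salient in the NN's feature space, which a plain low-amplitude blend does not guarantee.

```python
import numpy as np

def conceal_trigger(image, trigger, alpha=0.03):
    """Blend a trigger into a clean image (both in [0, 1]) at low
    amplitude alpha, so the perturbation reads as noise to a human.
    Simplified sketch -- not the dissertation's actual encoding."""
    poisoned = (1.0 - alpha) * image + alpha * trigger
    return np.clip(poisoned, 0.0, 1.0)

clean = np.full((32, 32, 3), 0.5)
trig = np.random.default_rng(0).random((32, 32, 3))
poisoned = conceal_trigger(clean, trig)
print(float(np.abs(poisoned - clean).max()))  # bounded by alpha
```

Since the perturbation per pixel is bounded by `alpha`, the poisoned images keep their correct labels and look unremarkable, which is what makes clean-label backdoors hard to spot by manual inspection.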


In Copyright. This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).