Title

Hunter: HE-Friendly Structured Pruning for Efficient Privacy-Preserving Deep Learning

Document Type

Conference Paper

Publication Date

2022

DOI

10.1145/3488932.3517401

Publication Title

ASIA CCS '22: Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security

Pages

931-945

Conference Name

ASIA CCS '22 ACM Asia Conference on Computer and Communications Security, 30 May-3 June 2022

Abstract

To protect user privacy in Machine Learning as a Service (MLaaS), a series of ingeniously designed privacy-preserving frameworks have been proposed. State-of-the-art approaches adopt Homomorphic Encryption (HE) for linear functions and Garbled Circuits (GC)/Oblivious Transfer (OT) for nonlinear operations to improve computation efficiency. Despite this encouraging progress, the computation cost is still too high for practical applications. This work represents the first step toward effectively pruning privacy-preserving deep learning models to reduce computation complexity. Although model pruning has been discussed extensively in the machine learning community, directly applying plaintext model pruning schemes offers little help in reducing the computation in privacy-preserving models. In this paper, we propose Hunter, a structured pruning method that identifies three novel HE-friendly structures, i.e., the internal structure, the external structure, and the weight diagonal, to guide the pruning process. Hunter outputs a pruned model that, without any loss in model accuracy, achieves a significant reduction in HE operations (and thus the overall computation cost) in privacy-preserving MLaaS. We apply Hunter to various deep learning models, e.g., AlexNet, VGG, and ResNet, over classic datasets including MNIST, CIFAR-10, and ImageNet. The experimental results demonstrate that, without accuracy loss, Hunter efficiently prunes the original networks to reduce the HE Perm, Mult, and Add operations. For example, for the state-of-the-art VGG-16 on ImageNet with 10 chosen classes, the total number of Perm operations is reduced to as low as 2% of the original network, while Mult and Add are reduced to only 14%, enabling a significantly more computation-efficient privacy-preserving MLaaS.
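To give intuition for why pruning whole weight diagonals cuts HE costs, the following is a minimal plaintext sketch (not the paper's implementation) of the diagonal encoding commonly used for HE matrix-vector products, e.g., in the Halevi-Shoup style: each nonzero diagonal of the weight matrix costs roughly one rotation (Perm), one ciphertext-plaintext multiplication (Mult), and one addition (Add), so a diagonal pruned to all zeros is skipped entirely. All names and the toy matrix here are illustrative assumptions.

```python
import numpy as np

def diag_matvec(W, x):
    """Simulate matrix-vector multiplication via diagonal encoding,
    counting the HE-style operations each surviving diagonal incurs.
    (Plaintext stand-in: np.roll models a ciphertext rotation.)"""
    n = W.shape[0]
    y = np.zeros(n)
    counts = {"Perm": 0, "Mult": 0, "Add": 0}
    for d in range(n):
        # d-th generalized diagonal: diag[i] = W[i, (i + d) % n]
        diag = np.array([W[i, (i + d) % n] for i in range(n)])
        if not diag.any():
            continue              # a pruned (all-zero) diagonal costs nothing
        if d:
            counts["Perm"] += 1   # rotation of the encrypted input (d=0 is free)
        counts["Mult"] += 1       # elementwise diag * rotated input
        counts["Add"] += 1        # accumulate into the result
        y += diag * np.roll(x, -d)
    return y, counts

# Toy example: a 4x4 weight matrix where only diagonals 0 and 2 survive pruning.
n = 4
W = np.zeros((n, n))
for d in (0, 2):
    for i in range(n):
        W[i, (i + d) % n] = 1.0
x = np.arange(1.0, n + 1.0)
y, counts = diag_matvec(W, x)
print(counts)  # only the two surviving diagonals incur Perm/Mult/Add
```

In this toy run the result matches `W @ x` while half the diagonals (and their associated operations) are skipped, mirroring how diagonal-structured pruning translates directly into fewer Perm, Mult, and Add operations.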

Comments

© 2022 Copyright held by the owner/authors. Publication rights licensed to ACM.

Original Publication Citation

Cai, Y., Zhang, Q., Ning, R., Xin, C., & Wu, H. (2022). Hunter: HE-friendly structured pruning for efficient privacy-preserving deep learning. In ASIA CCS '22: Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security (pp. 931-945). Association for Computing Machinery. https://doi.org/10.1145/3488932.3517401
