DPNAS: Neural Architecture Search for Deep Learning with Differential Privacy
Document Type: Conference Paper
Author | Cheng AD (程安达)1,4 |
Publication Date | 2022-06 |
Conference Date | 2022-02 |
Conference Venue | Online |
Abstract (English) | Training deep neural networks (DNNs) for meaningful differential privacy (DP) guarantees severely degrades model utility. In this paper, we demonstrate that the architecture of DNNs has a significant impact on model utility in the context of private deep learning, whereas its effect was largely unexplored in previous studies. In light of this gap, we propose the first framework that employs neural architecture search to automate model design for private deep learning, dubbed DPNAS. To integrate private learning with architecture search, we carefully design a novel search space and propose a DP-aware method for training candidate models. We empirically verify the effectiveness of the proposed framework. The searched model, DPNASNet, achieves state-of-the-art privacy/utility trade-offs, e.g., for the privacy budget of (ε, δ) = (3, 1 × 10⁻⁵), our model obtains test accuracy of 98.57% on MNIST, 88.09% on FashionMNIST, and 68.33% on CIFAR-10. Furthermore, by studying the generated architectures, we provide several intriguing findings on designing private-learning-friendly DNNs, which can shed new light on model design for deep learning with differential privacy. |
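The abstract reports accuracy under a fixed privacy budget of (ε, δ) = (3, 1 × 10⁻⁵). The sketch below illustrates, in general terms, what training a candidate model under such a budget with DP-SGD looks like. It is a minimal illustration assuming PyTorch and the Opacus library, not the authors' DPNAS code; the small CNN, hyperparameters, and training loop are placeholders rather than the searched DPNASNet architecture or the paper's DP-aware candidate-training method.

```python
# Minimal DP-SGD training sketch (NOT the authors' DPNAS code).
# Assumes PyTorch + Opacus; the CNN below is a placeholder model.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from opacus import PrivacyEngine

EPOCHS, BATCH_SIZE = 20, 256

# Placeholder candidate model (not DPNASNet).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.Tanh(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=BATCH_SIZE, shuffle=True,
)

# Attach DP-SGD: per-sample gradient clipping plus Gaussian noise,
# with the noise multiplier calibrated so that the privacy accountant
# reaches (epsilon, delta) = (3, 1e-5) after EPOCHS epochs.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    target_epsilon=3.0,
    target_delta=1e-5,
    epochs=EPOCHS,
    max_grad_norm=1.0,
)

criterion = nn.CrossEntropyLoss()
for epoch in range(EPOCHS):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    # Track the privacy budget actually spent so far.
    print(f"epoch {epoch}: eps = {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```

Under this kind of fixed-budget training, the model architecture is one of the few remaining levers for utility, which is the motivation for searching architectures (DPNAS) rather than reusing designs tuned for non-private training.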
Source URL | http://ir.ia.ac.cn/handle/173211/51893 |
Collection | Brain-Inspired Chips and Systems Research |
Corresponding Author | Cheng J (程健) |
Author Affiliations | 1. Institute of Automation, Chinese Academy of Sciences; 2. Nanjing Artificial Intelligence Research of IA (中科南京人工智能创新研究院); 3. JD.com; 4. University of Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Cheng AD, Wang JX, Zhang X, et al. DPNAS: Neural Architecture Search for Deep Learning with Differential Privacy[C]. Online, 2022-02. |
Deposit Method: OAI harvesting
Source: Institute of Automation