Chinese Academy of Sciences Institutional Repositories Grid
Differentially Private Federated Learning with Local Regularization and Sparsification

Document type: Conference paper

Authors: Cheng AD (程安达)1,2; Wang PS (王培松)1; Zhang X (张希)1; Cheng J (程健)1,2
Publication date: 2022-06
Conference date: 2022-06
Conference venue: Online
Abstract

User-level differential privacy (DP) provides certifiable privacy guarantees for the information specific to any user's data in federated learning. Existing methods that ensure user-level DP come at the cost of a severe drop in accuracy. In this paper, we study the cause of model performance degradation in federated learning with a user-level DP guarantee. We find that the key to solving this issue is to naturally restrict the norm of local updates before executing the operations that guarantee DP. To this end, we propose two techniques, Bounded Local Update Regularization and Local Update Sparsification, to improve model quality without sacrificing privacy. We provide a theoretical analysis of the convergence of our framework and give rigorous privacy guarantees. Extensive experiments show that our framework significantly improves the privacy-utility tradeoff over the state of the art for federated learning with a user-level DP guarantee.
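The core idea in the abstract, bounding the norm of each local update before the DP step, can be sketched as a server-side aggregation routine. The following is a minimal illustrative sketch, not the paper's exact algorithm: the clipping threshold, top-k sparsification rule, and Gaussian noise scale are generic assumptions standing in for the paper's Bounded Local Update Regularization and Local Update Sparsification.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a local update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def sparsify_update(update, k):
    """Keep only the k largest-magnitude entries (generic top-k sparsification)."""
    sparse = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    sparse[idx] = update[idx]
    return sparse

def dp_aggregate(updates, clip_norm, noise_multiplier, k, rng):
    """Clip and sparsify each user's update, average, then add Gaussian
    noise calibrated to the clipping bound (user-level DP via the Gaussian
    mechanism)."""
    processed = [sparsify_update(clip_update(u, clip_norm), k) for u in updates]
    mean = np.mean(processed, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(updates),
                       size=mean.shape)
    return mean + noise
```

Because both clipping and sparsification shrink each update's norm before noise is added, the same noise multiplier perturbs the average relatively less, which is the privacy-utility improvement the paper targets.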

Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/51894
Collection: Brain-Inspired Chips and Systems Research
Corresponding author: Cheng J (程健)
Affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Cheng AD, Wang PS, Zhang X, et al. Differentially Private Federated Learning with Local Regularization and Sparsification[C]. In: . Online, 2022-06.

Deposit method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.