APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers
Document Type: Conference Paper
Authors | Jiahao Lu 1,2 |
Publication Date | 2022-03 |
Conference Date | 2022-06 |
Conference Location | New Orleans, Louisiana, USA |
Keywords | Trustworthy AI; Privacy-preserving machine learning |
Abstract (English) | Federated learning frameworks typically require collaborators to share their local gradient updates of a common model, instead of sharing training data, to preserve privacy. However, prior work on Gradient Leakage Attacks showed that private training data can be revealed from gradients. So far, almost all relevant works base their attacks on fully-connected or convolutional neural networks. Given the recent, rapidly rising trend of adapting Transformers to solve multifarious vision tasks, it is highly important to investigate the privacy risk of vision Transformers. In this paper, we analyse the gradient leakage risk of the self-attention mechanism both theoretically and empirically. In particular, we propose APRIL - Attention PRIvacy Leakage, which poses a strong threat to self-attention inspired models such as ViT. By showing how vision Transformers are at risk of privacy leakage via gradients, we emphasize the importance of designing privacy-safer Transformer models and defense schemes. |
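To illustrate the threat model the abstract refers to, below is a minimal, generic gradient-matching sketch in the spirit of the prior Gradient Leakage Attacks (e.g. DLG-style attacks) that the paper builds on; it is not the APRIL attack itself. The tiny linear stand-in model, input shapes, and optimiser settings are illustrative assumptions only.

```python
# Sketch: recover a private training sample from its shared gradient
# by optimising dummy data so its gradient matches the victim's.
# (Generic gradient-matching attack, NOT the APRIL method; model and
# hyperparameters are hypothetical.)
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Tiny stand-in classifier; APRIL targets self-attention models such as ViT.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
criterion = nn.CrossEntropyLoss()

# Victim side: in federated learning only this gradient is shared.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters())

# Attacker side: optimise dummy input and soft label to match the gradient.
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # logits of a soft label
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    pred = model(x_dummy)
    # Cross-entropy against the (softmaxed) dummy label.
    dummy_loss = torch.mean(torch.sum(
        -F.softmax(y_dummy, dim=-1) * F.log_softmax(pred, dim=-1), dim=-1))
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True)
    # Distance between dummy gradient and the shared (true) gradient.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(20):
    opt.step(closure)

print("reconstruction MSE:", F.mse_loss(x_dummy, x_true).item())
```

With a small model like this, the gradient-matching loss typically drives the dummy image toward the private sample; APRIL's contribution is to show analytically and empirically how the self-attention structure of Transformers makes such leakage especially severe.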
Language | English |
Source URL | http://ir.ia.ac.cn/handle/173211/48881 |
Collection | Brain-Inspired Chips and Systems Research |
Corresponding Author | Jian Cheng |
Author Affiliations | 1. School of Artificial Intelligence, University of Chinese Academy of Sciences; 2. Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Jiahao Lu, Xi Sheryl Zhang, Tianli Zhao, et al. APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers[C]. New Orleans, Louisiana, USA, 2022-06. |
Deposit Method: OAI harvesting
Source: Institute of Automation