Adaptive Perturbation for Adversarial Attack
Document Type: Journal Article
Authors | Yuan, Zheng1,2; Zhang, Jie1,2; Jiang, Zhaoyan3; Li, Liangliang; Shan, Shiguang2 |
Journal | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE |
Publication Date | 2024-08-01 |
Volume | 46, Issue 8, Pages 5663-5676 |
Keywords | Perturbation methods; Iterative methods; Adaptation models; Generators; Closed box; Security; Training; Adversarial attack; transfer-based attack; adversarial example; adaptive perturbation |
ISSN | 0162-8828 |
DOI | 10.1109/TPAMI.2024.3367773 |
Abstract | In recent years, the security of deep learning models has attracted increasing attention with the rapid development of neural networks, which are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function during generation to satisfy the perturbation budget on the L-infinity norm. However, we find that the sign function may be improper for generating adversarial examples, since it modifies the exact gradient direction. Instead of using the sign function, we propose to directly utilize the exact gradient direction with a scaling factor for generating adversarial perturbations, which improves the attack success rates of adversarial examples even with fewer perturbations. At the same time, we theoretically prove that this method can achieve better black-box transferability. Moreover, considering that the best scaling factor varies across different images, we propose an adaptive scaling factor generator to seek an appropriate scaling factor for each image, which avoids the computational cost of manually searching for the scaling factor. Our method can be integrated with almost all existing gradient-based attack methods to further improve their attack success rates. Extensive experiments on the CIFAR10 and ImageNet datasets show that our method exhibits higher transferability and outperforms the state-of-the-art methods. |
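The abstract contrasts the conventional sign-based update with an update that follows the exact gradient direction under a scaling factor. The sketch below illustrates that contrast in NumPy; the function names, the `zeta` scaling parameter, and the max-component normalization are illustrative assumptions, not the paper's actual implementation (the paper additionally learns `zeta` per image with a generator).

```python
import numpy as np

def sign_step(grad, alpha):
    # Conventional FGSM / I-FGSM update: every component moves by
    # exactly +/- alpha, which satisfies the L-infinity budget but
    # discards the relative magnitudes of the gradient components.
    return alpha * np.sign(grad)

def scaled_gradient_step(grad, alpha, zeta):
    # Hypothetical sketch of the abstract's idea: keep the exact
    # gradient direction and apply a scaling factor zeta. Here the
    # gradient is normalized by its largest absolute component so the
    # step still respects the per-step L-infinity budget alpha.
    direction = grad / (np.abs(grad).max() + 1e-12)
    return alpha * zeta * direction

# Both updates stay within the same L-infinity budget, but the scaled
# step preserves the gradient's direction instead of flattening it.
rng = np.random.default_rng(0)
g = rng.normal(size=(3, 4))          # stand-in for a loss gradient
s = sign_step(g, alpha=0.01)
p = scaled_gradient_step(g, alpha=0.01, zeta=1.0)
assert np.abs(s).max() <= 0.01 + 1e-9
assert np.abs(p).max() <= 0.01 + 1e-9
```

In the paper's method, `zeta` is produced per image by an adaptive scaling factor generator rather than fixed by hand, and the update plugs into existing iterative gradient-based attacks.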
Funding | National Key R&D Program of China [2021YFC3310100]; National Natural Science Foundation of China [62176251]; Beijing Nova Program [20230484368]; Youth Innovation Promotion Association CAS |
WOS Research Areas | Computer Science; Engineering |
Language | English |
WOS Accession Number | WOS:001262841000014 |
Publisher | IEEE COMPUTER SOC |
Source URL | [http://119.78.100.204/handle/2XEOYT63/39844] |
Collection | Institute of Computing Technology, Chinese Academy of Sciences: Journal Papers (English) |
Corresponding Author | Zhang, Jie |
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China; 2. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100049, Peoples R China; 3. Tencent, Shenzhen 518057, Peoples R China |
Recommended Citation (GB/T 7714) | Yuan, Zheng, Zhang, Jie, Jiang, Zhaoyan, et al. Adaptive Perturbation for Adversarial Attack[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(8): 5663-5676. |
APA | Yuan, Zheng, Zhang, Jie, Jiang, Zhaoyan, Li, Liangliang, & Shan, Shiguang. (2024). Adaptive Perturbation for Adversarial Attack. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 46(8), 5663-5676. |
MLA | Yuan, Zheng, et al. "Adaptive Perturbation for Adversarial Attack". IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 46.8 (2024): 5663-5676. |
Ingest Method: OAI harvesting
Source: Institute of Computing Technology