Chinese Academy of Sciences Institutional Repositories Grid
Boosting Decision-based Black-box Adversarial Attacks with Random Sign Flip

Document Type: Conference Paper

Authors: Weilun Chen (4,5); Zhaoxiang Zhang (1,4,5); Xiaolin Hu (3); Baoyuan Wu (2,6)
Publication Date: 2020-08
Conference Date: 2020-08
Conference Venue: UK
Abstract

Decision-based black-box adversarial attacks (decision-based attacks) pose a severe threat to current deep neural networks, as they only need the predicted label of the target model to craft adversarial examples. However, existing decision-based attacks perform poorly in the $ l_\infty $ setting, and the enormous number of queries they require casts a shadow over their practicality. In this paper, we show that simply flipping the signs of a small number of randomly chosen entries in adversarial perturbations can significantly boost attack performance. We name this simple and highly efficient decision-based $ l_\infty $ attack the Sign Flip Attack. Extensive experiments on CIFAR-10 and ImageNet show that the proposed method outperforms existing decision-based attacks by large margins and can serve as a strong baseline for evaluating the robustness of defensive models. We further demonstrate the applicability of the proposed method on real-world systems.
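The core move described in the abstract lends itself to a short sketch. The following is a minimal illustration of the sign-flip idea under stated assumptions, not the authors' released algorithm: `predict_label` (a hard-label oracle for the target model), `flip_frac`, and the [0, 1] pixel range are hypothetical stand-ins introduced here for exposition.

```python
import numpy as np

def sign_flip_step(x, delta, predict_label, true_label,
                   flip_frac=0.001, rng=None):
    """One trial move: flip the signs of a small random subset of
    perturbation entries and keep the flip only if the perturbed
    input is still misclassified (a single hard-label query)."""
    if rng is None:
        rng = np.random.default_rng()
    candidate = delta.copy()
    # Pick a small random fraction of coordinates and flip their signs.
    mask = rng.random(delta.shape) < flip_frac
    candidate[mask] = -candidate[mask]
    # Keep the perturbed input in the valid pixel range (assuming [0, 1]).
    x_adv = np.clip(x + candidate, 0.0, 1.0)
    # Decision-based: only the target model's predicted label is consulted.
    if predict_label(x_adv) != true_label:
        return candidate  # still adversarial: accept the flipped perturbation
    return delta          # no longer adversarial: revert to the old one
```

A complete attack would presumably iterate such steps from an initially adversarial starting point while tightening the $ l_\infty $ budget; that outer loop is beyond this sketch.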

Source URL: http://ir.ia.ac.cn/handle/173211/44323
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Corresponding Author: Zhaoxiang Zhang
Author Affiliations:
1. Tsinghua University
2. Tencent AI Lab
3. The Chinese University of Hong Kong, Shenzhen
4. Center for Research on Intelligent Perception and Computing (CRIPAC), National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA)
5. Center for Excellence in Brain Science and Intelligence Technology, CAS
6. School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS)
Recommended Citation (GB/T 7714):
Weilun Chen, Zhaoxiang Zhang, Xiaolin Hu, et al. Boosting Decision-based Black-box Adversarial Attacks with Random Sign Flip[C]. In: . UK, 2020-08.

Ingest Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise specified, all content in this system is protected by copyright, and all rights are reserved.