Towards Interpretable Defense Against Adversarial Attacks via Causal Inference
Document Type: Journal Article
Authors | Min Ren1,2, Yun-Long Wang, Zhao-Feng He |
Journal | Machine Intelligence Research |
Publication Date | 2022 |
Volume | 19 |
Issue | 3 |
Pages | 209-226 |
Keywords | Adversarial sample, adversarial defense, causal inference, interpretable machine learning, transformers |
ISSN | 2731-538X |
DOI | 10.1007/s11633-022-1330-7 |
Abstract | Deep learning-based models are vulnerable to adversarial attacks. Defense against adversarial attacks is essential for sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms against adversarial attacks, and most existing methods are merely stopgaps for specific adversarial samples. The main obstacle is that it remains unclear how adversarial samples fool deep learning models. The underlying working mechanism of adversarial samples has not been well explored, and this is the bottleneck of adversarial attack defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples. The self-attention/transformer is adopted as a powerful tool in this causal model. Compared to existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, the working mechanism of adversarial samples is revealed, and instructive analysis is provided. We then propose simple and effective adversarial sample detection and recognition methods according to the revealed working mechanism. The causal insights enable us to detect and recognize adversarial samples without any extra model or training. Extensive experiments demonstrate the effectiveness of the proposed methods, which outperform state-of-the-art defense methods under various adversarial attacks. |
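The abstract describes detecting and recognizing adversarial samples without any extra model or training, but this record does not spell out the attack or detection procedure. As a minimal illustration of the kind of adversarial sample the abstract refers to, the sketch below crafts one with FGSM, a standard gradient-sign attack that is not necessarily the one evaluated in the paper; `model`, `x`, `y`, and `epsilon` are all assumed names:

```python
# Minimal FGSM sketch (illustrative only; not the paper's causal method).
# Assumes `model` is a differentiable PyTorch classifier, `x` is a batch
# of images with pixel values in [0, 1], and `y` holds the true labels.
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, x, y, epsilon=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each pixel by epsilon in the direction that increases the
    # loss, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A model that classifies `x` correctly will often misclassify `fgsm_adversarial(model, x, y)` even for a small `epsilon`; this vulnerability is what the paper's causal analysis aims to explain and defend against.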
Language | English |
Source URL | http://ir.ia.ac.cn/handle/173211/55942 |
Collection | Institute of Automation_Academic Journals_International Journal of Automation and Computing |
Affiliations | 1. University of Chinese Academy of Sciences, Beijing 100190, China; 2. Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; 3. Laboratory of Visual Computing and Intelligent System, Beijing University of Posts and Telecommunications, Beijing 100876, China |
Recommended Citation (GB/T 7714) | Min Ren, Yun-Long Wang, Zhao-Feng He. Towards Interpretable Defense Against Adversarial Attacks via Causal Inference[J]. Machine Intelligence Research, 2022, 19(3): 209-226. |
APA | Min Ren, Yun-Long Wang, & Zhao-Feng He. (2022). Towards Interpretable Defense Against Adversarial Attacks via Causal Inference. Machine Intelligence Research, 19(3), 209-226. |
MLA | Min Ren, et al. "Towards Interpretable Defense Against Adversarial Attacks via Causal Inference." Machine Intelligence Research 19.3 (2022): 209-226. |
Deposit Method: OAI harvesting
Source: Institute of Automation