Chinese Academy of Sciences Institutional Repositories Grid
Are You Confident That You Have Successfully Generated Adversarial Examples?

Document Type: Journal Article

Authors: Wang, Bo (1); Zhao, Mengnan (1); Wang, Wei (2); Wei, Fei (3); Qin, Zhan (4,5); Ren, Kui (4,5)
Journal: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Publication Date: 2021-06-01
Volume: 31  Issue: 6  Pages: 2089-2099
ISSN: 1051-8215
Keywords: Perturbation methods; Iterative methods; Computational modeling; Neural networks; Security; Training; Robustness; Deep neural networks; adversarial examples; structural black box; buffer
DOI: 10.1109/TCSVT.2020.3017006
Corresponding Author: Wang, Wei (wei.wong@ia.ac.cn)
Abstract: Deep neural networks (DNNs) have been studied extensively for image recognition, classification, segmentation, and related tasks. However, recent studies show that DNNs are vulnerable to adversarial examples: a classification network can be deceived by adding a small perturbation to clean samples. Designing a general approach that defends against a wide variety of adversarial examples remains challenging. To address this problem, we introduce a defensive method that prevents adversarial examples from being generated successfully. Instead of designing a stronger classifier, we build a more robust classification system that can be viewed as a structural black box. After a buffer is added to the classification system, attackers can be efficiently deceived: the real evaluation results of the generated adversarial examples are often contrary to what the attacker believes. Additionally, we do not assume any specific attack method, and this agnosticism to the underlying attack demonstrates the generalizability of the buffer against potential adversarial attacks. Extensive experiments indicate that the defense method greatly improves the security of DNNs.
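The abstract describes the buffer only at a high level. As a purely illustrative sketch, the PyTorch snippet below shows one way the attacker/defender asymmetry it describes could be wired up: the classifier is exposed to queries only behind a secret, frozen buffer transform, so the function the attacker optimizes against differs from the pipeline used for real evaluation. The class BufferedClassifier, the orthogonal linear buffer, and the true_evaluate method are hypothetical names invented for this sketch and are not the paper's actual design.

```python
# Toy sketch (not the paper's implementation): a classification system that
# hides a secret, frozen "buffer" transform in front of the classifier.
import torch
import torch.nn as nn


class BufferedClassifier(nn.Module):
    """Hypothetical structural black box: a classifier behind a secret buffer."""

    def __init__(self, classifier: nn.Module, in_dim: int):
        super().__init__()
        self.classifier = classifier
        # Assumed buffer: a fixed random orthogonal linear map, frozen so it
        # acts as a static, secret preprocessing stage.
        self.buffer = nn.Linear(in_dim, in_dim, bias=False)
        nn.init.orthogonal_(self.buffer.weight)
        for p in self.buffer.parameters():
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # What attackers query: gradients they compute flow through the
        # buffered composite, not the bare classifier.
        return self.classifier(self.buffer(x))

    def true_evaluate(self, x: torch.Tensor) -> torch.Tensor:
        # Defender-side evaluation of the same input without the buffer; an
        # example crafted against forward() may behave differently here.
        return self.classifier(x)


if __name__ == "__main__":
    clf = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    system = BufferedClassifier(clf, in_dim=784)
    x = torch.randn(4, 784)
    print("attacker view:", system(x).argmax(dim=1))
    print("real evaluation:", system.true_evaluate(x).argmax(dim=1))
```

Because the buffer is frozen and kept secret, gradient-based attacks computed through the exposed forward pass describe the buffered composite; whether a crafted example also fools the real evaluation pipeline is exactly the question the paper's title raises.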
Funding Projects: National Natural Science Foundation of China [U1936117]; National Natural Science Foundation of China [U1736119]; National Natural Science Foundation of China [61972395]; National Natural Science Foundation of China [61772111]
WOS Research Area: Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record No.: WOS:000658365800001
Funding Organization: National Natural Science Foundation of China
Source URL: http://ir.ia.ac.cn/handle/173211/45364
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Author Affiliations:
1. Dalian Univ Technol, Sch Informat & Commun Engn, Dalian 116024, Peoples R China
2. Chinese Acad Sci, Ctr Res Intelligent Percept & Comp, Inst Automat, Beijing 100190, Peoples R China
3. State Univ New York SUNY Buffalo, Dept Elect Engn, Buffalo, NY 14200 USA
4. Zhejiang Univ, Coll Comp Sci, Hangzhou 310000, Peoples R China
5. Zhejiang Univ, Inst Cyberspace Res ICSR, Hangzhou 310000, Peoples R China
Recommended Citation:
GB/T 7714: Wang, Bo, Zhao, Mengnan, Wang, Wei, et al. Are You Confident That You Have Successfully Generated Adversarial Examples?[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31(6): 2089-2099.
APA: Wang, Bo, Zhao, Mengnan, Wang, Wei, Wei, Fei, Qin, Zhan, & Ren, Kui. (2021). Are You Confident That You Have Successfully Generated Adversarial Examples?. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 31(6), 2089-2099.
MLA: Wang, Bo, et al. "Are You Confident That You Have Successfully Generated Adversarial Examples?". IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 31.6 (2021): 2089-2099.

Ingest Method: OAI harvesting

Source: Institute of Automation
