中国科学院机构知识库网格
Chinese Academy of Sciences Institutional Repositories Grid
Rethinking Label Flipping Attack: From Sample Masking to Sample Thresholding

Document Type: Journal Article

Author: Xu, Qianqian2; Yang, Zhiyong3; Zhao, Yunrui3; Cao, Xiaochun4; Huang, Qingming1,5,6,7
Source: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Issued Date: 2023-06-01
Volume: 45  Issue: 6  Pages: 7668-7685
Keywords: Data models; Training data; Training; Deep learning; Predictive models; Testing; Optimization; Label flipping attack; Machine learning
ISSN: 0162-8828
DOI: 10.1109/TPAMI.2022.3220849
English Abstract: Nowadays, machine learning (ML) and deep learning (DL) methods have become fundamental building blocks for a wide range of AI applications. The popularity of these methods also leaves them widely exposed to malicious attacks, which may cause severe security concerns. To understand the security properties of ML/DL methods, researchers have recently turned their attention to adversarial attack algorithms that can corrupt the victim's model or clean data with imperceptible perturbations. In this paper, we study the Label Flipping Attack (LFA) problem, where the attacker aims to degrade an ML/DL model's performance by flipping a small fraction of the labels in the training data. Prior art along this direction formulates LFA as a combinatorial optimization problem, which limits its scalability to deep learning models. To this end, we propose a novel minimax problem that provides an efficient reformulation of the sample selection process in LFA. In the new optimization problem, the sample selection operation can be implemented with a single thresholding parameter, leading to a novel training algorithm called Sample Thresholding. Since the objective function is differentiable and the model complexity does not depend on the sample size, we can apply Sample Thresholding to attack deep learning models. Moreover, since the victim's behavior is not predictable in a poisoning attack setting, we have to employ surrogate models to simulate the true model employed by the victim. In light of this, we provide a theoretical analysis of the surrogate paradigm. Specifically, we show that the performance gap between the victim's true model and the surrogate model is small under mild conditions. On top of this paradigm, we extend Sample Thresholding to the crowdsourced ranking task, where labels collected from annotators are vulnerable to adversarial attacks.
Finally, experimental analyses on three real-world datasets speak to the efficacy of our method.
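The abstract's core idea is that sample selection for label flipping can be driven by a single threshold on a per-sample score computed from a surrogate model, rather than by combinatorial search. The paper's minimax objective is not reproduced in this record; the snippet below is only a minimal toy sketch of threshold-based selection under illustrative assumptions (a linear surrogate scorer, a margin-based score, and a 10% flipping budget), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary dataset: two Gaussian blobs, labels in {-1, +1}.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)),
               rng.normal(+1.0, 1.0, (n // 2, 2))])
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])

# Illustrative surrogate scorer: a linear direction w (class-mean difference),
# standing in for a trained surrogate model of the victim.
w = X[y == 1].mean(axis=0) - X[y == -1].mean(axis=0)

# Per-sample margin y_i * <w, x_i>: larger means more confidently correct
# under the surrogate.
margins = y * (X @ w)

# Threshold-based selection (illustrative heuristic): flip the labels of
# samples whose margin is at or above a threshold tau, where tau is chosen
# so that at most `budget` labels (10% of the training set) are flipped.
budget = int(0.1 * n)
tau = np.sort(margins)[-budget]          # smallest margin among the top-budget
flip_mask = margins >= tau               # samples selected by the threshold
y_poisoned = np.where(flip_mask, -y, y)  # flip only the selected labels

print(int(flip_mask.sum()), "labels flipped out of", n)
```

The point of the sketch is structural: once selection is expressed as a comparison against one scalar tau, the attack budget is controlled by a single parameter instead of a combinatorial subset choice, which is what makes the formulation differentiable-friendly and scalable.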
Funding Project: National Key R&D Program of China [2018AAA0102000]; National Natural Science Foundation of China [U21B2038, 61931008, 62025604, U1936208, 6212200758, 61976202]; Fundamental Research Funds for the Central Universities; Youth Innovation Promotion Association CAS; Strategic Priority Research Program of Chinese Academy of Sciences [XDB28000000]; China National Post-doctoral Program for Innovative Talents [BX2021298]; China Postdoctoral Science Foundation [2022M713101]
WOS Research Area: Computer Science; Engineering
Language: English
WOS ID: WOS:000982475600070
Publisher: IEEE COMPUTER SOC
Source URL: http://119.78.100.204/handle/2XEOYT63/21225
Collection: 中国科学院计算技术研究所期刊论文_英文 (Institute of Computing Technology, CAS — journal papers, English)
Corresponding Author: Huang, Qingming
Affiliation:
1. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 101408, Peoples R China
2. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
3. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China
4. Sun Yat Sen Univ, Sch Cyber Sci & Technol, Shenzhen Campus, Shenzhen 518107, Peoples R China
5. Univ Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management BDKM, Beijing 101408, Peoples R China
6. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
7. Peng Cheng Lab, Shenzhen 518055, Peoples R China
Recommended Citation
GB/T 7714
Xu, Qianqian, Yang, Zhiyong, Zhao, Yunrui, et al. Rethinking Label Flipping Attack: From Sample Masking to Sample Thresholding[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45(6): 7668-7685.
APA: Xu, Qianqian, Yang, Zhiyong, Zhao, Yunrui, Cao, Xiaochun, & Huang, Qingming. (2023). Rethinking Label Flipping Attack: From Sample Masking to Sample Thresholding. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 45(6), 7668-7685.
MLA: Xu, Qianqian, et al. "Rethinking Label Flipping Attack: From Sample Masking to Sample Thresholding." IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 45.6 (2023): 7668-7685.

Deposit Method: OAI harvesting

Source: Institute of Computing Technology


Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.