Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training
Document type: Conference paper
Authors | Tian Hu 2,3 |
Publication date | 2021-04 |
Conference date | 2020-12-03 |
Conference location | Shanghai |
Abstract | Graph neural networks (GNNs) are powerful tools for analyzing graph-structured data. However, recent studies have shown that GNNs are vulnerable to small but intentional perturbations of input features and graph structures in the node classification task. Existing research focuses on enhancing the robustness of GNNs against a single type of perturbation, such as graph structure perturbation or node feature perturbation. An ideal graph neural network model should be able to resist both kinds of perturbations. For this purpose, we propose a new adversarial training method for graph-structured data named Graph Jointly Adversarial … |
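The abstract describes training a GNN to withstand both node-feature and graph-structure perturbations. As a rough illustration of that general idea only (the paper's Graph Jointly Adversarial Training procedure is not detailed in this record), the PyTorch sketch below applies FGSM-style gradient perturbations to both the feature matrix and a dense, relaxed adjacency matrix, and trains on the sum of clean and adversarial losses. All class, function, and hyperparameter names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of joint adversarial training for a GCN node classifier.
# Generic FGSM-style perturbations of features and structure; not the paper's GJAT.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    """Two-layer GCN operating on a dense, normalized adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, a_norm, x):
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)

def normalize_adj(a):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = a + torch.eye(a.size(0))
    d_inv_sqrt = a_hat.sum(1).clamp(min=1e-12).pow(-0.5)
    return d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]

def joint_adv_step(model, a, x, y, train_mask, eps_x=0.01, eps_a=0.01):
    """One step on clean + jointly perturbed (features, structure) inputs."""
    x_adv = x.clone().requires_grad_(True)
    a_adv = a.clone().requires_grad_(True)
    loss = F.cross_entropy(model(normalize_adj(a_adv), x_adv)[train_mask], y[train_mask])
    gx, ga = torch.autograd.grad(loss, [x_adv, a_adv])
    # FGSM-style perturbation of node features and the relaxed adjacency.
    x_pert = x + eps_x * gx.sign()
    a_pert = (a + eps_a * ga.sign()).clamp(0, 1)
    a_pert = (a_pert + a_pert.t()) / 2  # keep the graph symmetric
    clean = F.cross_entropy(model(normalize_adj(a), x)[train_mask], y[train_mask])
    adv = F.cross_entropy(model(normalize_adj(a_pert), x_pert)[train_mask], y[train_mask])
    return clean + adv

# Toy usage on a random undirected graph.
n, d, c = 50, 16, 3
x = torch.randn(n, d)
a = (torch.rand(n, n) < 0.1).float()
a = ((a + a.t()) > 0).float()
y = torch.randint(0, c, (n,))
mask = torch.rand(n) < 0.5
model = GCN(d, 32, c)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(20):
    opt.zero_grad()
    loss = joint_adv_step(model, a, x, y, mask)
    loss.backward()
    opt.step()
```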
Language | English |
Source URL | http://ir.ia.ac.cn/handle/173211/52319 |
Collection | Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Research Center for Internet Big Data and Security Informatics |
Corresponding author | Zheng Xiaolong |
Author affiliations | 1. University of Illinois at Urbana-Champaign 2. Institute of Automation, Chinese Academy of Sciences 3. School of Artificial Intelligence, University of Chinese Academy of Sciences 4. University of Chinese Academy of Sciences |
Recommended citation (GB/T 7714) | Tian Hu, Ye Bowei, Zheng Xiaolong, et al. Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training[C]. In: . Shanghai. 2020-12-03. |
Deposit method: OAI harvesting
Source: Institute of Automation