Chinese Academy of Sciences Institutional Repositories Grid
Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training

Document Type: Conference Paper

Authors: Tian Hu 2,3; Ye Bowei 1; Zheng Xiaolong 2,3; Zhang Xingwei 2,3; Wu Dash Desheng 4
Publication Date: 2021-04
Conference Date: 2020-12-3
Conference Venue: Shanghai
English Abstract

Graph neural networks (GNNs) are powerful tools for analyzing graph-structured data. However, recent studies have shown that GNNs are vulnerable to small but intentional perturbations of input features and graph structure in the node classification task. Existing research focuses on enhancing the robustness of GNNs against a single type of perturbation, such as graph structure perturbation or node feature perturbation, whereas an ideal graph neural network model should be able to resist both kinds of perturbation. For this purpose, we propose a new adversarial training method for graph-structured data, named Graph Jointly Adversarial Training (GJAT), which incorporates two components, Graph Structure Adversarial Training (GSAT) and Graph Feature Adversarial Training (GFAT), and can resist perturbations of both the topological structure and node attributes. Extensive experimental results demonstrate that our proposed method, combining the two adversarial training strategies, effectively improves the robustness of graph convolutional networks (GCNs), an important subset of GNNs.
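The abstract gives no implementation details, so the following is only a minimal, hypothetical PyTorch sketch of what jointly adversarial training of a GCN on both node features and graph structure could look like. The model DenseGCN, the function joint_adversarial_step, the FGSM-style sign-gradient perturbations, and the budgets eps_feat and eps_adj are illustrative assumptions, not the GJAT/GSAT/GFAT procedure described in the paper.

```python
# Hypothetical sketch of jointly adversarial training for a GCN.
# All names and hyperparameters here are illustrative; they do not come from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCN(nn.Module):
    """Two-layer GCN operating on a dense (normalized) adjacency matrix."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, adj, x):
        h = F.relu(adj @ self.w1(x))      # propagate features over the graph
        return adj @ self.w2(h)           # per-node class logits


def joint_adversarial_step(model, optimizer, adj, x, y, train_mask,
                           eps_feat=0.05, eps_adj=0.05):
    """One training step on feature- and structure-perturbed inputs
    (FGSM-style perturbations, standing in for GFAT and GSAT)."""
    # Make leaf copies so gradients w.r.t. the inputs can be taken.
    x_adv = x.detach().clone().requires_grad_(True)
    adj_adv = adj.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(adj_adv, x_adv)[train_mask], y[train_mask])
    grad_x, grad_adj = torch.autograd.grad(loss, (x_adv, adj_adv))

    # Worst-case perturbations within an L-infinity budget.
    x_pert = x + eps_feat * grad_x.sign()
    adj_pert = (adj + eps_adj * grad_adj.sign()).clamp(0, 1)

    # Update the model on the jointly perturbed graph.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(adj_pert, x_pert)[train_mask], y[train_mask])
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

The point mirrored here is that each step draws worst-case perturbations for both the feature matrix and the adjacency matrix before updating the model, rather than defending against only one of the two.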

Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/52319]
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Research Center for Internet Big Data and Security Informatics
Corresponding Author: Zheng Xiaolong
Author Affiliations:
1. University of Illinois at Urbana-Champaign
2. Institute of Automation, Chinese Academy of Sciences
3. School of Artificial Intelligence, University of Chinese Academy of Sciences
4. University of Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Tian Hu, Ye Bowei, Zheng Xiaolong, et al. Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training[C]. In: . Shanghai. 2020-12-3.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.