Chinese Academy of Sciences Institutional Repositories Grid
Multi-Node Acceleration for Large-Scale GCNs

Document Type: Journal Article

Authors: Sun, Gongjian2,3; Yan, Mingyu2,3; Wang, Duo2,3; Li, Han2,3; Li, Wenming2,3; Ye, Xiaochun2,3; Fan, Dongrui2,3; Xie, Yuan1
Journal: IEEE TRANSACTIONS ON COMPUTERS
Publication Date: 2022-12-01
Volume: 71  Issue: 12  Pages: 3140-3152
ISSN: 0018-9340
Keywords: Deep learning; graph neural network; hardware accelerator; multi-node system; communication optimization
DOI: 10.1109/TC.2022.3207127
Abstract: Limited by memory capacity and computation power, single-node graph convolutional neural network (GCN) accelerators cannot complete the execution of GCNs within a reasonable amount of time, due to the explosive size of today's graphs. Large-scale GCNs therefore call for a multi-node acceleration system (MultiAccSys), analogous to the tensor processing unit (TPU) Pod for large-scale neural networks. In this work, we aim to scale up single-node GCN accelerators to accelerate GCNs on large-scale graphs. We first identify the communication pattern and challenges of multi-node acceleration for GCNs on large-scale graphs. We observe that (1) irregular coarse-grained communication patterns exist in the execution of GCNs in a MultiAccSys, which introduces a massive amount of redundant network transmissions and off-chip memory accesses; and (2) the acceleration of GCNs in a MultiAccSys is mainly bounded by network bandwidth but tolerates network latency. Guided by these observations, we propose MultiGCN, an efficient MultiAccSys for large-scale GCNs that trades network latency for network bandwidth. Specifically, leveraging the network latency tolerance, we first propose a topology-aware multicast mechanism with a one-put-per-multicast message-passing model to reduce transmissions and alleviate network bandwidth requirements. Second, we introduce a scatter-based round execution mechanism that cooperates with the multicast mechanism and reduces redundant off-chip memory accesses. Compared to the baseline MultiAccSys, MultiGCN achieves a 4~12x speedup using only 28%~68% of the energy, while reducing transmissions by 32% and off-chip memory accesses by 73% on average. Moreover, MultiGCN not only achieves a 2.5~8x speedup over the state-of-the-art multi-GPU solution, but also scales to large-scale graphs, unlike single-node GCN accelerators.
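The topology-aware multicast idea in the abstract (forward and replicate one copy of a message inside the network, rather than sending one unicast copy per destination) can be illustrated with a toy transmission-count model. This is a hypothetical sketch, not the paper's implementation: the 1-D chain topology and the cost functions below are assumptions made purely for illustration.

```python
# Hypothetical sketch: contrast per-destination unicast with a
# topology-aware multicast on a 1-D chain of accelerator nodes.
# Nodes are integer positions; cost is counted in link traversals.

def unicast_transmissions(src, dests):
    """Each destination receives its own copy from src, so the total
    cost is the sum of hop distances to every destination."""
    return sum(abs(d - src) for d in dests)

def chain_multicast_transmissions(src, dests):
    """One copy is forwarded hop by hop and replicated at each node it
    passes ("one put per multicast"), so every link between src and the
    farthest destination in each direction is traversed exactly once."""
    left = min([d for d in dests if d < src], default=src)
    right = max([d for d in dests if d > src], default=src)
    return (src - left) + (right - src)

if __name__ == "__main__":
    src, dests = 0, [1, 2, 3]
    print(unicast_transmissions(src, dests))          # 6 link traversals
    print(chain_multicast_transmissions(src, dests))  # 3 link traversals
```

Under this toy model, the multicast cost grows with the span of the destinations rather than with their count, which is the intuition behind trading (tolerable) forwarding latency for reduced network bandwidth demand.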
Funding: National Natural Science Foundation of China [61732018]; National Natural Science Foundation of China [61872335]; National Natural Science Foundation of China [62202451]; Austrian-Chinese Cooperative R&D Project [171111KYSB20200002]; CAS Project for Young Scientists in Basic Research [YSBR-029]; Open Research Projects of Zhejiang Lab [2022PB0AB01]; CAS Project for Youth Innovation Promotion Association
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: IEEE COMPUTER SOC
WOS Accession Number: WOS:000886309300007
Source URL: [http://119.78.100.204/handle/2XEOYT63/20320]
Collection: Journal Papers, Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author: Sun, Gongjian
Affiliations:
1. Univ Calif Santa Barbara, Santa Barbara, CA 93106 USA
2. Univ Chinese Acad Sci, Beijing 101408, Peoples R China
3. Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing 100045, Peoples R China
Recommended Citation:
GB/T 7714
Sun, Gongjian, Yan, Mingyu, Wang, Duo, et al. Multi-Node Acceleration for Large-Scale GCNs[J]. IEEE TRANSACTIONS ON COMPUTERS, 2022, 71(12): 3140-3152.
APA: Sun, Gongjian, Yan, Mingyu, Wang, Duo, Li, Han, Li, Wenming, ... & Xie, Yuan. (2022). Multi-Node Acceleration for Large-Scale GCNs. IEEE TRANSACTIONS ON COMPUTERS, 71(12), 3140-3152.
MLA: Sun, Gongjian, et al. "Multi-Node Acceleration for Large-Scale GCNs". IEEE TRANSACTIONS ON COMPUTERS 71.12 (2022): 3140-3152.

Deposit Method: OAI harvesting

Source: Institute of Computing Technology


Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.