Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks
Document Type | Journal Article
Authors | Wu, Jingya1,2; Lu, Wenyan1,2; Yan, Guihai1,2; Li, Jiajun1,2; Gong, Shijun1,2; Jiang, Shuhao1,2; Li, Xiaowei1,2 |
Journal | IEEE TRANSACTIONS ON COMPUTERS
Publication Date | 2019-06-01 |
Volume | 68, Issue 6, Pages 867-881 |
Keywords | Convolutional neural networks; accelerator architecture; parallelism; sparsity |
ISSN | 0018-9340 |
DOI | 10.1109/TC.2018.2890258 |
Abstract | There are two approaches to improving the performance of Convolutional Neural Networks (CNNs): 1) accelerating computation and 2) reducing the amount of computation. The acceleration approaches take advantage of the regularity of CNN computation, which enables abundant fine-grained parallelism across feature maps, neurons, and synapses. Alternatively, reducing computation leverages the intrinsic sparsity of CNN neurons and synapses. This sparsity manifests as computing "bubbles", i.e., zero- or tiny-valued neurons and synapses, which can be removed to reduce the volume of computation. Although distinctly different in principle, we find that the two types of approaches are not orthogonal; worse, they may conflict with each other when combined. The conditional branches that some bubble-removing mechanisms introduce into the original computation destroy the regularity of the deeply nested loops, thereby impairing the intrinsic parallelism. Therefore, enabling synergy between the two types of approaches is critical to achieving superior performance. This paper proposes a relaxed synchronous computing architecture, FlexFlow-Pro, to fulfill this purpose. Compared with state-of-the-art accelerators, FlexFlow-Pro gains more than 2.5x performance on average and 2x energy efficiency. |
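The tension the abstract describes can be seen in a few lines of code. The following minimal sketch (illustrative only, not the FlexFlow-Pro design) contrasts a regular multiply-accumulate loop with a zero-skipping variant: the skip saves work, but the data-dependent `if` breaks the fixed, branch-free loop structure that lock-step parallel hardware relies on.

```python
def dense_dot(neurons, synapses):
    # Regular loop: fixed trip count, no branches -- easy to parallelize
    # in lock-step across many processing elements.
    acc = 0.0
    for n, s in zip(neurons, synapses):
        acc += n * s
    return acc

def sparse_dot(neurons, synapses):
    # Bubble-removing loop: skips zero operands, reducing the number of
    # multiplies, but the data-dependent branch makes per-iteration work
    # irregular, which disturbs a strictly synchronous schedule.
    acc = 0.0
    for n, s in zip(neurons, synapses):
        if n != 0.0 and s != 0.0:  # skip computing "bubbles"
            acc += n * s
    return acc

neurons = [0.0, 1.5, 0.0, 2.0]
synapses = [3.0, 2.0, 0.0, 0.5]
print(dense_dot(neurons, synapses))   # 4.0
print(sparse_dot(neurons, synapses))  # 4.0, with half the multiplies
```

Both loops produce the same result; the point is that the savings from skipping bubbles come at the cost of control-flow regularity, which is the conflict the proposed relaxed synchronous architecture is designed to resolve.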
Funding | National Natural Science Foundation of China[61572470] ; National Natural Science Foundation of China[61532017] ; National Natural Science Foundation of China[61522406] ; National Natural Science Foundation of China[61872336] ; National Natural Science Foundation of China[61432017] ; National Natural Science Foundation of China[61376043] ; National Natural Science Foundation of China[61521092] ; Youth Innovation Promotion Association, CAS[404441000] |
WOS Research Areas | Computer Science ; Engineering |
Language | English |
WOS Accession Number | WOS:000467523100005 |
Publisher | IEEE COMPUTER SOC |
Source URL | [http://119.78.100.204/handle/2XEOYT63/4255] |
Collection | Institute of Computing Technology, Chinese Academy of Sciences: Journal Papers (English) |
Corresponding Authors | Yan, Guihai; Li, Xiaowei |
Affiliations | 1. Univ Chinese Acad Sci, Beijing 100190, Peoples R China 2. Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100864, Peoples R China |
Recommended Citation (GB/T 7714) | Wu, Jingya, Lu, Wenyan, Yan, Guihai, et al. Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks[J]. IEEE TRANSACTIONS ON COMPUTERS, 2019, 68(6): 867-881. |
APA | Wu, Jingya, Lu, Wenyan, Yan, Guihai, Li, Jiajun, Gong, Shijun, ... & Li, Xiaowei. (2019). Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks. IEEE TRANSACTIONS ON COMPUTERS, 68(6), 867-881. |
MLA | Wu, Jingya, et al. "Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks". IEEE TRANSACTIONS ON COMPUTERS 68.6 (2019): 867-881. |
Deposit Method: OAI harvesting
Source: Institute of Computing Technology
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.