Chinese Academy of Sciences Institutional Repositories Grid
ECBC: Efficient Convolution via Blocked Columnizing

Document Type: Journal Article

Authors: Zhao, Tianli (1); Hu, Qinghao (2); He, Xiangyu (1); Xu, Weixiang (2); Wang, Jiaxing (2); Leng, Cong (2); Cheng, Jian (1)
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Publication Date: 2021-07-16
Pages: 13
Keywords: Convolution; Tensors; Layout; Memory management; Indexes; Transforms; Performance evaluation; Convolutional neural networks (CNNs); direct convolution; high performance computing for mobile devices; im2col convolution; memory-efficient convolution (MEC)
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2021.3095276
Corresponding Author: Cheng, Jian (jcheng@nlpr.ia.ac.cn)
Abstract: Direct convolution methods are drawing increasing attention because they eliminate the additional storage demanded by indirect convolution algorithms (i.e., the transformed matrix generated by the im2col convolution algorithm). Nevertheless, direct methods require special input-output tensor formatting, leading to extra time and memory consumption to obtain the desired data layout. In this article, we show that indirect convolution, if implemented properly, can achieve high computational performance with the help of highly optimized matrix-multiplication subroutines while avoiding substantial memory overhead. The proposed algorithm is called efficient convolution via blocked columnizing (ECBC). Inspired by the im2col convolution algorithm and the block algorithm of general matrix-to-matrix multiplication, we propose to carry out the convolution computation block by block. As a result, the tensor-to-matrix transformation (e.g., the im2col operation) can also be done blockwise, so it requires only a memory buffer as small as one data block. Extensive experiments on various platforms and networks validate the effectiveness of ECBC, as well as the superiority of our method over a set of widely used industrial-level convolution algorithms.
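The abstract describes the core idea as interleaving the im2col transform with blocked matrix multiplication, so that only one small column block is ever materialized. As a rough illustration, here is a minimal NumPy sketch of a blockwise im2col convolution; the function name, the blocking scheme, and the block_cols parameter are illustrative assumptions for this note, not the paper's actual implementation (stride 1, no padding, single image).

```python
import numpy as np

def conv2d_blocked_im2col(x, w, block_cols=64):
    """Hypothetical sketch of blockwise im2col convolution.

    x: input tensor, shape (C, H, W); w: filters, shape (K, C, R, S).
    Rather than materializing the full (C*R*S, OH*OW) im2col matrix,
    only block_cols output positions are columnized at a time, so the
    scratch buffer stays as small as one data block.
    """
    C, H, W = x.shape
    K, _, R, S = w.shape
    OH, OW = H - R + 1, W - S + 1            # stride 1, no padding
    w_mat = w.reshape(K, C * R * S)          # filters as one GEMM operand
    out = np.empty((K, OH * OW), dtype=x.dtype)

    # Columnize and multiply one block of output positions at a time.
    for start in range(0, OH * OW, block_cols):
        stop = min(start + block_cols, OH * OW)
        cols = np.empty((C * R * S, stop - start), dtype=x.dtype)
        for j, p in enumerate(range(start, stop)):
            oh, ow = divmod(p, OW)
            cols[:, j] = x[:, oh:oh + R, ow:ow + S].ravel()
        out[:, start:stop] = w_mat @ cols    # small GEMM on the block

    return out.reshape(K, OH, OW)

# Usage: a large block size degenerates to the full im2col path,
# so the blocked result can be checked against it.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((3, 8, 8))
    w = rng.standard_normal((4, 3, 3, 3))
    y_blocked = conv2d_blocked_im2col(x, w, block_cols=5)
    y_full = conv2d_blocked_im2col(x, w, block_cols=8 * 8)
    assert np.allclose(y_blocked, y_full)
```

Setting block_cols to OH * OW recovers the conventional full im2col path, whose scratch matrix grows with the entire output; smaller blocks keep the scratch buffer cache-sized at the cost of more, smaller GEMM calls, which is the trade-off the abstract describes.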

Funding Projects: National Natural Science Foundation of China [61972396]; National Key Research and Development Program of China [2020AAA0103402]; Strategic Priority Research Program of the Chinese Academy of Sciences [XDA27040300]
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Record: WOS:000732241300001
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Organizations: National Natural Science Foundation of China; National Key Research and Development Program of China; Strategic Priority Research Program of the Chinese Academy of Sciences
Source URL: http://ir.ia.ac.cn/handle/173211/46863
Collection: Brain-Inspired Chips and Systems Research
Author Affiliations:
1. Chinese Acad Sci, Inst Automat, Beijing 100080, Peoples R China
2. Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation Formats
GB/T 7714: Zhao, Tianli, Hu, Qinghao, He, Xiangyu, et al. ECBC: Efficient Convolution via Blocked Columnizing[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021: 13.
APA: Zhao, Tianli, Hu, Qinghao, He, Xiangyu, Xu, Weixiang, Wang, Jiaxing, ... & Cheng, Jian. (2021). ECBC: Efficient Convolution via Blocked Columnizing. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 13.
MLA: Zhao, Tianli, et al. "ECBC: Efficient Convolution via Blocked Columnizing." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2021): 13.

Ingestion Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.