中国科学院机构知识库网格
Chinese Academy of Sciences Institutional Repositories Grid
BitStream: An efficient framework for inference of binary neural networks on CPUs

Document Type: Journal Article

Authors: Jiang, Yanshu (1); Zhao, Tianli (1,2); He, Xiangyu (2); Leng, Cong (2); Cheng, Jian (2)
Journal: PATTERN RECOGNITION LETTERS
Publication Date: 2019-07-01
Volume: 125, Pages: 303-309
ISSN: 0167-8655
Keywords: Convolutional neural networks; Binary neural networks; Image classification
DOI: 10.1016/j.patrec.2019.04.016
Corresponding Author: Zhao, Tianli (tianli.zhao@nlpr.ia.ac.cn)
Abstract: Convolutional Neural Networks (CNNs) have been well studied and are widely used in the field of pattern recognition. Many pattern recognition algorithms rely on features extracted from CNN models to handle complex tasks such as image classification, object detection, and natural language processing. However, to cope with increasingly complex tasks, modern CNN models keep growing larger; they contain a large number of parameters and require substantial computation, leading to high consumption of memory, computational, and power resources during inference. This makes it difficult to run CNN-based applications in real time on mobile devices, where these resources are limited. Binarization of neural networks has been proposed to reduce the memory and computational complexity of CNNs. However, traditional implementations of Binary Neural Networks (BNNs) follow the conventional im2col-based convolution computation flow, which is widely used in floating-point networks but is not cache-friendly when applied to binarized networks. In this paper, we propose BitStream, a general architecture for efficient inference of BNNs on CPUs. In BitStream, we propose a simple yet novel computation flow for BNNs. Unlike existing implementations, in BitStream all layers, including convolutional layers, binarization layers, and pooling layers, are computed in binary precision. Comprehensive analyses demonstrate that the proposed computation flow consumes less memory during BNN inference and is cache-friendly because of its contiguous memory access. (C) 2019 Published by Elsevier B.V.
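
For illustration only (this is not code from the paper): the arithmetic that makes binary-precision inference attractive on CPUs can be sketched by packing {-1, +1} weights and activations into machine words, so that a dot product reduces to a bitwise XNOR followed by a population count. The minimal C sketch below assumes a GCC/Clang-style __builtin_popcountll; the function name binary_dot and the sample data are illustrative, and the sketch shows only the XNOR-popcount primitive, not BitStream's actual computation flow.

/* Minimal sketch of an XNOR-popcount binary dot product (illustrative,
 * not the paper's BitStream implementation). Bits encode {-1,+1} values
 * (bit 1 -> +1, bit 0 -> -1), packed into 64-bit words. */
#include <stdint.h>
#include <stdio.h>

/* Signed dot product over n packed 64-bit words:
 * XNOR marks positions where the two operands agree; the result is
 * matches - mismatches = 2 * popcount(xnor) - total_bits. */
static int binary_dot(const uint64_t *a, const uint64_t *w, int n) {
    int total_bits = 64 * n;
    int matches = 0;
    for (int i = 0; i < n; ++i) {
        uint64_t xnor = ~(a[i] ^ w[i]);         /* 1 where bits agree */
        matches += __builtin_popcountll(xnor);  /* count agreements   */
    }
    return 2 * matches - total_bits;            /* {-1,+1} dot product */
}

int main(void) {
    /* Two packed 128-bit binary vectors (2 x 64-bit words), sample data. */
    uint64_t act[2]    = {0xF0F0F0F0F0F0F0F0ULL, 0x123456789ABCDEF0ULL};
    uint64_t weight[2] = {0x0F0F0F0F0F0F0F0FULL, 0x123456789ABCDEF0ULL};
    printf("binary dot product = %d\n", binary_dot(act, weight, 2));
    return 0;
}

Operating directly on packed words like this keeps memory accesses contiguous, which is the cache-friendliness the abstract attributes to computing all layers in binary precision rather than through an im2col-based floating-point flow.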


WOS Research Area: Computer Science
Language: English
Publisher: ELSEVIER
WOS Accession Number: WOS:000482374500042
Source URL: http://ir.ia.ac.cn/handle/173211/27269
Collection: Brain-inspired Chips and Systems Research
Author Affiliations:
1. Harbin Univ Sci & Technol, Dept Automat, Harbin, Heilongjiang, Peoples R China
2. Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing, Peoples R China
Recommended Citation:
GB/T 7714
Jiang, Yanshu, Zhao, Tianli, He, Xiangyu, et al. BitStream: An efficient framework for inference of binary neural networks on CPUs[J]. PATTERN RECOGNITION LETTERS, 2019, 125: 303-309.
APA Jiang, Yanshu, Zhao, Tianli, He, Xiangyu, Leng, Cong, & Cheng, Jian. (2019). BitStream: An efficient framework for inference of binary neural networks on CPUs. PATTERN RECOGNITION LETTERS, 125, 303-309.
MLA Jiang, Yanshu, et al. "BitStream: An efficient framework for inference of binary neural networks on CPUs". PATTERN RECOGNITION LETTERS 125 (2019): 303-309.

Deposit Method: OAI Harvesting

Source: Institute of Automation

Unless otherwise stated, all content in this system is protected by copyright and all rights are reserved.