SynergyFlow: An Elastic Accelerator Architecture Supporting Batch Processing of Large-Scale Deep Neural Networks
Document Type: Journal Article
Authors | Jiang, Shuhao [1,2]; Wu, Jingya [1,2]; Li, Xiaowei [1,2]; Li, Jiajun [1,2]; Yan, Guihai [1,2]; Lu, Wenyan [1,2]; Gong, Shijun [1,2]
Journal | ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS
Publication Date | 2019
Volume | 24; Issue | 1; Pages | 27
Keywords | Deep neural networks; convolutional neural networks; accelerator architecture; resource utilization; complementary effect
ISSN | 1084-4309
DOI | 10.1145/3275243 |
Abstract | Neural networks (NNs) have achieved great success in a broad range of applications. As NN-based methods are often both computation and memory intensive, accelerator solutions have proven highly promising in terms of both performance and energy efficiency. Although prior solutions can deliver high computational throughput for convolutional layers, they can incur severe performance degradation when accommodating the entire network model, because computing and memory bandwidth requirements differ widely between convolutional layers and fully connected layers, and also among different NN models. To overcome this problem, we propose an elastic accelerator architecture, called SynergyFlow, which intrinsically supports layer-level and model-level parallelism for large-scale deep neural networks. SynergyFlow boosts resource utilization by exploiting the complementary resource demands of different layers and different NN models. SynergyFlow can dynamically reconfigure itself according to workload characteristics, maintaining high performance and high resource utilization across various models. As a case study, we implement SynergyFlow on a P395-AB FPGA board. At a 100 MHz working frequency, our implementation improves performance by 33.8% on average (up to 67.2% on AlexNet) compared to comparably provisioned previous architectures.
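The complementary effect the abstract refers to can be made concrete with a back-of-the-envelope arithmetic-intensity comparison. The sketch below is illustrative only and not taken from the paper: it estimates MACs per byte of weight traffic for a convolutional layer versus a fully connected layer (layer shapes borrowed from AlexNet; the 16-bit weight width is an assumption), showing why convolutional layers tend to be compute-bound while fully connected layers tend to be memory-bandwidth-bound, which is the imbalance that SynergyFlow's layer-level and model-level parallelism targets.

```python
# Illustrative sketch (not from the paper): compare arithmetic intensity
# (MACs per byte of weight traffic) of a conv layer versus an FC layer.
# Layer shapes approximate AlexNet conv2 and fc6; the 16-bit weight
# width is an assumption for illustration.

BYTES_PER_WEIGHT = 2  # assumed 16-bit fixed-point weights

def conv_intensity(h_out, w_out, c_in, c_out, k):
    """MACs per weight byte for a k x k convolutional layer.

    Each weight is reused at every output position, so intensity
    grows with the output feature-map size.
    """
    macs = h_out * w_out * c_in * c_out * k * k
    weight_bytes = c_in * c_out * k * k * BYTES_PER_WEIGHT
    return macs / weight_bytes

def fc_intensity(n_in, n_out, batch=1):
    """MACs per weight byte for a fully connected layer.

    Each weight is used once per input, so intensity grows only
    with the batch size -- hence batch processing for FC layers.
    """
    macs = batch * n_in * n_out
    weight_bytes = n_in * n_out * BYTES_PER_WEIGHT
    return macs / weight_bytes

# Approximate AlexNet conv2: 27x27 output, 48 -> 128 channels (per group), 5x5 kernel
print(f"conv intensity: {conv_intensity(27, 27, 48, 128, 5):.1f} MACs/byte")
# Approximate AlexNet fc6: 9216 -> 4096 neurons
print(f"fc intensity (batch=1):  {fc_intensity(9216, 4096, batch=1):.2f} MACs/byte")
print(f"fc intensity (batch=32): {fc_intensity(9216, 4096, batch=32):.2f} MACs/byte")
```

With these shapes the convolutional layer works out to roughly 365 MACs per weight byte while the fully connected layer at batch size 1 is about 0.5, an imbalance of nearly three orders of magnitude; batching (the "batch processing" in the title) is the standard lever for raising FC-layer intensity, and co-scheduling compute-bound and bandwidth-bound work is what lets an elastic design keep both compute and memory resources busy.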
Funding | National Natural Science Foundation of China [61572470, 61872336, 61532017, 61432017, 61521092, 61376043]; Youth Innovation Promotion Association, CAS [Y404441000]
WOS Research Area | Computer Science
Language | English
WOS Record Number | WOS:000455951700008
Publisher | ASSOC COMPUTING MACHINERY
Source URL | http://119.78.100.204/handle/2XEOYT63/3470
Collection | Institute of Computing Technology, CAS: Journal Papers (English)
Corresponding Authors | Li, Xiaowei; Yan, Guihai
Affiliations | 1. Univ Chinese Acad Sci, Beijing, Peoples R China; 2. Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, 6 Kexueyuan South Rd, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714) | Jiang, Shuhao, Wu, Jingya, Li, Xiaowei, et al. SynergyFlow: An Elastic Accelerator Architecture Supporting Batch Processing of Large-Scale Deep Neural Networks[J]. ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 2019, 24(1): 27.
APA | Jiang, Shuhao, Wu, Jingya, Li, Xiaowei, Li, Jiajun, Yan, Guihai, ... & Gong, Shijun. (2019). SynergyFlow: An Elastic Accelerator Architecture Supporting Batch Processing of Large-Scale Deep Neural Networks. ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 24(1), 27.
MLA | Jiang, Shuhao, et al. "SynergyFlow: An Elastic Accelerator Architecture Supporting Batch Processing of Large-Scale Deep Neural Networks." ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS 24.1 (2019): 27.
Deposit Method: OAI Harvesting
Source: Institute of Computing Technology