Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing
Document Type: Conference Paper
Authors | Liu, Zejian 1,2; Li, Gang; Cheng, Jian
Publication Date | 2021-02
Conference Date | 2021-02
Conference Venue | Virtual, Online
DOI | 10.23919/DATE51398.2021.9474043 |
Pages | 513-516
Abstract | BERT is the most recent Transformer-based model that achieves state-of-the-art performance in various NLP tasks. In this paper, we investigate the hardware acceleration of BERT on FPGA for edge computing. To tackle the issue of huge computational complexity and memory footprint, we propose a fully quantized BERT (FQ-BERT), in which weights, activations, softmax, layer normalization, and all intermediate results are quantized. Experiments demonstrate that FQ-BERT can achieve 7.94× compression for weights with negligible performance loss. We then propose an accelerator tailored for FQ-BERT and evaluate it on Xilinx ZCU102 and ZCU111 FPGAs. It can achieve a performance-per-watt of 3.18 fps/W, which is 28.91× and 12.72× higher than an Intel(R) Core(TM) i7-8700 CPU and an NVIDIA K80 GPU, respectively.
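The abstract states that FQ-BERT quantizes weights, activations, softmax, layer normalization, and intermediate results, but this record does not give the bit widths or quantization functions used. The sketch below is only a generic illustration of per-tensor symmetric integer quantization of a weight matrix, not the paper's FQ-BERT scheme; the function names and the 8-bit setting are assumptions made for the example.

```python
# Illustrative sketch of uniform symmetric quantization (assumed 8-bit, per-tensor scale);
# not the exact quantization scheme used in FQ-BERT.
import numpy as np

def quantize_sym(x, n_bits=8):
    """Map a float tensor to signed integers with a single per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1                 # 127 for 8-bit
    scale = np.max(np.abs(x)) / qmax             # per-tensor scale (assumption)
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor to inspect quantization error."""
    return q.astype(np.float32) * scale

# Example: quantize a random "weight" matrix and measure reconstruction error.
w = np.random.randn(768, 768).astype(np.float32)
q, s = quantize_sym(w)
err = np.mean(np.abs(w - dequantize(q, s)))
print(f"mean abs quantization error: {err:.5f}")
```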
Proceedings | Proceedings of the 2021 Design, Automation and Test in Europe, DATE 2021
Language | English
Source URL | http://ir.ia.ac.cn/handle/173211/52034
Collection | Brain-inspired Chips and Systems Research
Corresponding Author | Cheng, Jian
Author Affiliations | 1. School of Future Technology, University of Chinese Academy of Sciences; 2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Liu, Zejian, Li, Gang, Cheng, Jian. Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing[C]. In: Proceedings of the 2021 Design, Automation and Test in Europe (DATE 2021). Virtual, Online, 2021-02: 513-516.
Deposit Method: OAI harvesting
Source: Institute of Automation