The Devil is in Details: Delving Into Lite FFN Design for Vision Transformers
Document Type | Conference Paper
Authors | Chen, Zhiyang (3,4); Zhu, Yousong; Li, Zhaowen; et al. |
Publication Date | 2024-03-18 |
Conference Date | 2024-04-14 |
Conference Location | Seoul, Korea |
Keywords | Vision Transformer; Light-Weight Structure; Feed-Forward Networks |
Abstract | Transformer has demonstrated exceptional performance on a variety of vision tasks. However, its high computational complexity can become problematic. In this paper, we conduct a systematic analysis of the complexity of each component in vision transformers and identify an easily overlooked detail: the Feed-Forward Network (FFN) is the primary computational bottleneck, even more so than the Multi-Head Self-Attention (MHSA) mechanism. Inspired by this, we further propose a lightweight FFN module, named SparseFFN, that reduces dense computation in both the channel and spatial dimensions. Specifically, SparseFFN consists of two components: Channel-Sparse FFN (CS-FFN) and Spatial-Sparse FFN (SS-FFN), which can be seamlessly incorporated into various vision transformers and even pure MLP models with significantly fewer FLOPs. Extensive experiments demonstrate the effectiveness and efficiency of the proposed method. For example, our approach reduces model complexity by 23%-39% for most vision transformers and MLP models while maintaining comparable accuracy. |
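The abstract's central claim, that the FFN rather than MHSA dominates compute, follows from the standard FLOPs accounting for a transformer encoder block. The sketch below makes the comparison concrete; it is not taken from the paper, and the hyperparameters (197 tokens, embedding dimension 768, expansion ratio 4, i.e. ViT-Base-style settings on a 224x224 input) are assumptions for illustration.

```python
# Minimal sketch (not from the paper): standard FLOPs accounting for one
# ViT encoder block, to illustrate why the FFN, not MHSA, is the bottleneck.
# Assumed ViT-Base-style settings; counting each multiply-accumulate as 2 FLOPs.

N = 197  # sequence length: 196 patch tokens + 1 class token (assumed)
d = 768  # embedding dimension (assumed)
r = 4    # FFN expansion ratio, the common default

# MHSA: QKV projections (3*N*d*d), attention scores (N*N*d),
# attention-weighted values (N*N*d), and the output projection (N*d*d).
mhsa_flops = 2 * (4 * N * d * d + 2 * N * N * d)

# FFN: two dense layers, d -> r*d -> r*d -> d... i.e. d -> r*d then r*d -> d,
# each costing N*d*(r*d) multiply-accumulates.
ffn_flops = 2 * (2 * N * d * r * d)

print(f"MHSA: {mhsa_flops / 1e9:.2f} GFLOPs per block")
print(f"FFN:  {ffn_flops / 1e9:.2f} GFLOPs per block")
print(f"FFN / MHSA ratio: {ffn_flops / mhsa_flops:.2f}")
```

Under these assumptions the FFN costs roughly 1.8x the MHSA FLOPs per block (about 1.86 vs. 1.05 GFLOPs), consistent with the abstract's observation that sparsifying the FFN alone can cut total model complexity by double-digit percentages.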
Source URL | http://ir.ia.ac.cn/handle/173211/56594 |
Collection | Zidong Taichu Foundation Model Research Center_Foundation Model Computing |
Author Affiliations | 1. Wuhan AI Research; 2. Peng Cheng Laboratory; 3. School of Artificial Intelligence, University of Chinese Academy of Sciences; 4. Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Chen, Zhiyang, Zhu, Yousong, Li, Zhaowen, et al. The Devil is in Details: Delving Into Lite FFN Design for Vision Transformers [C]. In: . Seoul, Korea, 2024-04-14. |
Deposit Method: OAI Harvesting
Source: Institute of Automation